CN107229917B - Method for detecting common salient targets in multiple remote sensing images based on iterative clustering - Google Patents
Method for detecting common salient targets in multiple remote sensing images based on iterative clustering
- Publication number
- CN107229917B CN201710395719.3A
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- image
- superpixel
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a method for detecting common salient targets in multiple remote sensing images based on iterative clustering, belonging to the field of remote sensing image processing. The implementation comprises: 1) computing the gray-level co-occurrence matrix (GLCM) of each remote sensing image, obtaining its contrast, energy, entropy and correlation parameters, and combining them with the image length and width to calculate the number of superpixels; 2) performing superpixel segmentation of each image according to that number, applying K-means clustering to the segmentation result, and computing inter-class saliency to obtain an initial saliency map for each image; 3) performing target segmentation on all initial saliency maps, applying superpixel-based K-means clustering to the segmentation results once more, and computing inter-class saliency to obtain the final saliency map of each image; 4) extracting the common salient targets of the images by threshold segmentation. The invention can accurately detect the common salient targets of multiple remote sensing images while effectively suppressing background interference, and can be used in fields such as environmental monitoring and land redistribution.
Description
Technical field
The invention belongs to the technical field of remote sensing image processing, and in particular relates to a method for detecting common salient targets in multiple remote sensing images based on iterative clustering.
Background art
In recent years, satellite and remote sensing technologies have developed continuously, and comprehensive, all-weather, multi-angle Earth observation has been realized. With the rapid development of high-resolution remote sensing satellites, the number of remote sensing images keeps growing. Target detection in remote sensing images helps to allocate the computing resources of subsequent processing reasonably and to reduce its complexity, and has therefore become a key research problem in remote sensing image processing.
Existing remote sensing image target detection methods fall into two broad classes: top-down and bottom-up. Top-down methods first apply machine learning to features of known target objects such as color, texture and brightness, and then perform target detection according to the learned features. They require a large amount of prior knowledge, so their computational complexity is high and their adaptability to different targets is poor. Bottom-up methods are based on visual saliency analysis of the image and can effectively improve target detection efficiency. Saliency analysis is inspired by the attention mechanism of the human visual system; existing saliency analysis methods can be divided into three classes: methods based on biological models, methods based on computational models, and methods based on hybrid models. The ITTI method is the classical biological-model-based algorithm and the basis of many subsequent saliency analysis methods. It imitates the human visual receptive field by computing linear center-surround differences, extracts multi-scale color, intensity and orientation features, obtains single-scale feature saliency maps through multi-scale feature fusion, and finally selects salient points with a neural network. Among methods based on computational models, the frequency-tuned (FT) method first applies difference-of-Gaussian filtering to the image to obtain its low-frequency information, and then computes the final saliency map from the color difference between the low-frequency information and the original image. The salient regions obtained by the FT method have well-defined boundaries. Among methods based on hybrid models, the graph-based visual saliency (GBVS) method measures the similarity between the foreground and background elements of the image and computes the saliency of each element according to its similarity to preset seeds or to the ranking.
Saliency analysis methods based on a single image have achieved good results in target detection for natural scene images and remote sensing images. However, because they cannot effectively exploit the information shared among images, the saliency map they produce only indicates the regions with higher saliency values within a single image, and for some images the regions with higher saliency values are not necessarily the desired target regions. For remote sensing images with complex ground-object characteristics, a single image is likely to contain background regions with features similar to the target region, or background regions with even higher saliency values than the target region, and single-image saliency analysis cannot effectively suppress such background regions.
An important feature of the invention is that it can accurately and efficiently detect the common salient targets of multiple remote sensing images with similar ground-object characteristics. When most of these images contain the same class of target regions with high visual saliency, such targets are referred to as common salient targets. Introducing common salient target detection into remote sensing image processing exploits the salient features shared by multiple images, which provide mutual reference information, so that background interference with high saliency can be effectively suppressed in these images and the common salient targets of the remote sensing images can be detected accurately and efficiently.
The present invention was supported by the National Natural Science Foundation of China project "Research on key techniques for extracting regions of interest from remote sensing images based on joint saliency analysis" (Grant No. 61571050).
Summary of the invention
In view of the above problems, the present invention provides a method for detecting common salient targets in multiple remote sensing images based on iterative clustering. The method first computes the gray-level co-occurrence matrix of each remote sensing image, obtains its contrast, energy, entropy and correlation parameters, and calculates the superpixel number from these parameters combined with the image length and width. It then performs superpixel segmentation according to that number, applies K-means clustering to the segmentation result, and computes inter-class saliency to obtain the initial saliency map of each image. Next, it performs target segmentation on all initial saliency maps, applies superpixel-based K-means clustering to the segmentation results once more, and computes inter-class saliency to obtain the final saliency map of each image. Finally, threshold segmentation yields the common salient targets of the remote sensing images. The method can accurately extract the common salient targets of multiple remote sensing images while effectively suppressing background interference, and can be used in fields such as environmental monitoring and land redistribution. The invention mainly concerns two aspects:
1) accurately extracting the common salient targets from multiple remote sensing images, improving remote sensing target detection precision;
2) effectively suppressing background information with high saliency values in the images.
The technical solution adopted by the invention is as follows: first, the gray-level co-occurrence matrix is computed separately for every image in the set of remote sensing images, and the superpixel number required for each image is calculated from the contrast, energy, entropy and correlation of the GLCM together with the image length and width; second, every image is segmented into superpixels according to the obtained superpixel number, K-means clustering is applied to the superpixel segmentation result to obtain the classes corresponding to different ground objects, and inter-class saliency is computed, yielding the initial saliency map of each image; then target segmentation is applied to all initial saliency maps, superpixel-based K-means clustering is applied to the target segmentation results once more, and inter-class saliency is computed, yielding the final saliency maps of the remote sensing images; finally, threshold segmentation completes the automatic detection of the common salient targets. The method comprises the following steps:
Step 1: compute the gray-level co-occurrence matrix of every image in the set of remote sensing images, then use the four GLCM parameters contrast, energy, entropy and correlation, combined with the image length and width, to calculate the superpixel number K required for each remote sensing image;
Step 2: perform superpixel segmentation of every image in the set according to the superpixel number obtained in Step 1, yielding the superpixel-segmented remote sensing images;
Step 3: compute the average color of each superpixel in every segmented remote sensing image as the color mean of that superpixel, and apply K-means clustering to all segmented remote sensing images based on the superpixel color means;
Step 4: compute the color histogram of each class from the K-means clustering result, compute the inter-class color distance from the color histograms, compute inter-class saliency from the inter-class color distance and spatial weighting information, and thereby obtain the initial saliency map of every image in the set;
Step 5: apply threshold segmentation with the maximum between-class variance (Otsu) method to the initial saliency map of every remote sensing image, dividing these initial saliency maps into a target region class and a background region class, and thereby obtain the initial target segmentation image of every image in the set;
Step 6: halve the superpixel number K, perform superpixel segmentation of the initial target segmentation image of every remote sensing image, cluster all segmented initial target segmentation images again with the K-means algorithm, compute the color histogram of each class in the clustering result, compute the inter-class color distance from the color histograms, compute inter-class saliency once more from the inter-class color distance and spatial weighting information, and thereby obtain the final saliency map of every image in the set;
Step 7: apply threshold segmentation with the maximum between-class variance method to the final saliency map of every image, thereby extracting the common salient targets of the set of remote sensing images.
The method of the invention performs common salient target detection with superpixels as the basic unit, which preserves region completeness to the greatest extent and avoids fragmented detections; at the same time, choosing smaller superpixels for the superpixel-based iterative clustering further suppresses background regions around the target that have similar features.
Description of the drawings
Fig. 1 is the flowchart of the invention.
Fig. 2 is one example image from the set of remote sensing images used herein.
Fig. 3 shows the final saliency map and the target detection result of the proposed method on the example image: (a) the final saliency map of the example image; (b) the target detection result of the example image.
Fig. 4 compares the final saliency maps of the example image produced by the proposed method, the FT method, the ITTI method and the GBVS method: (a) FT saliency map; (b) ITTI saliency map; (c) GBVS saliency map; (d) saliency map of the proposed method. Fig. 5 compares the final target detection results of the example image produced by the proposed method, the FT method, the ITTI method and the GBVS method: (a) FT target detection result; (b) ITTI target detection result; (c) GBVS target detection result; (d) target detection result of the proposed method.
Fig. 6 is the ground-truth annotation of the example image.
Fig. 7 shows the receiver operating characteristic (ROC) curves of the proposed method and of the FT, ITTI and GBVS methods.
Specific embodiment
The invention is described in further detail below with reference to the drawings. The overall framework of the invention is shown in Fig. 1; the implementation details of each step are now introduced.
Step 1: compute the gray-level co-occurrence matrix GLCM of every image in the set of remote sensing images, then use the four GLCM parameter values contrast Con, energy Asm, entropy Ent and correlation Corr, together with the image length M and width N, to calculate the superpixel number K of the remote sensing image. The detailed procedure is as follows:
The gray range of a remote sensing image P is [0, G-1], and P(i, j) is the gray value of the pixel at coordinates (i, j), i ∈ {1, …, M}, j ∈ {1, …, N}. For each pixel with gray value x, the frequency with which a pixel (i+a, j+b) at distance d = 1 with gray value y occurs is counted and recorded in the gray-level co-occurrence matrix GLCM(x, y), where a² + b² = d². For a remote sensing image with gray range [0, G-1], GLCM(x, y) is a G × G matrix, computed as:
GLCM(x, y) = #{ (i, j), (i+a, j+b) ∈ M × N | P(i, j) = x, P(i+a, j+b) = y },
x ∈ {0, …, G-1}, y ∈ {0, …, G-1}
The contrast Con, energy Asm, entropy Ent and correlation Corr of the gray-level co-occurrence matrix GLCM are computed from the normalized co-occurrence probabilities p(x, y) as:
Con = Σ_{x,y} (x − y)² p(x, y)
Asm = Σ_{x,y} p(x, y)²
Ent = −Σ_{x,y} p(x, y) log p(x, y)
Corr = Σ_{x,y} (x − μ_x)(y − μ_y) p(x, y) / (σ_x σ_y)
where μ_x and σ_x are respectively the mean and standard deviation of the gray-level distribution of the image, and μ_x = μ_y, σ_x = σ_y.
Using the four parameter values contrast Con, energy Asm, entropy Ent and correlation Corr, the texture feature weight is computed as w = (Asm × Corr) / (Con × Ent). The superpixel number K is then calculated from the remote sensing image length M, the width N and the texture feature weight w.
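A minimal implementation sketch of Step 1 is given below for illustration; it assumes Python with scikit-image (graycomatrix/graycoprops), uses a single horizontal offset at distance d = 1, and the function name texture_weight is illustrative. The formula relating w, M and N to K is given in the original drawings and is not reproduced here.

```python
# Sketch of Step 1 (assumed implementation, not the patented code): compute the
# GLCM of a gray-scale remote sensing image, derive the four texture parameters
# and the texture feature weight w = (Asm * Corr) / (Con * Ent).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_weight(gray, levels=256):
    """gray: 2-D uint8 array. Returns (Con, Asm, Ent, Corr, w)."""
    # distance d = 1, horizontal offset (a, b) = (0, 1); other offsets also satisfy a^2 + b^2 = d^2
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    con = graycoprops(glcm, 'contrast')[0, 0]
    asm = graycoprops(glcm, 'ASM')[0, 0]
    corr = graycoprops(glcm, 'correlation')[0, 0]
    p = glcm[:, :, 0, 0]
    ent = -np.sum(p[p > 0] * np.log(p[p > 0]))   # GLCM entropy
    w = (asm * corr) / (con * ent)               # texture feature weight
    return con, asm, ent, corr, w
```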
Step 2: perform superpixel segmentation of every remote sensing image in the set according to the superpixel number obtained in Step 1. The invention uses the SLIC (Simple Linear Iterative Clustering) superpixel segmentation method and labels each pixel of a remote sensing image with the superpixel it belongs to, SP(i, j) = SLIC_K(P(i, j)), where K is the superpixel number; this yields the superpixel-segmented remote sensing images.
SLIC first places K initial seed points uniformly in the image; each superpixel is centered on one of these seed points and has initial size M × N / K. For every other pixel in the image, its distance to the K seed points is computed and the pixel is assigned to the superpixel of the nearest seed point, after which the seed point locations are updated. This process is repeated until the distance between the new and previous seed points falls below a set threshold; the algorithm then converges and the superpixel segmentation result is obtained.
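A minimal sketch of Step 2 under the same assumptions, using the SLIC implementation of scikit-image (n_segments, compactness and start_label are that library's parameters; the compactness value is an illustrative choice):

```python
# Sketch of Step 2 (assumed implementation): SLIC superpixel segmentation with
# the superpixel number K computed in Step 1.
from skimage.segmentation import slic

def superpixel_segment(image, n_superpixels):
    """image: H x W x 3 array; returns an H x W label map SP(i, j)."""
    labels = slic(image, n_segments=n_superpixels, compactness=10,
                  start_label=0)
    return labels
```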
Step 3: compute the average color of each superpixel in every segmented remote sensing image as the color mean of that superpixel, and apply K-means clustering to all segmented remote sensing images based on the superpixel color means, obtaining the classes corresponding to different ground objects.
K-means first chooses C centroids in the data set; for every other data point, its distance to the C centroids is computed and the point is assigned to the class of the nearest centroid, after which the centroids of the resulting C classes are recomputed. This process is repeated until the distance between the new and previous centroids falls below a set threshold; the algorithm then converges and the clustering result is obtained. In the method of the invention C = 3.
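A minimal sketch of Step 3, assuming scikit-learn's KMeans and a float RGB image; for brevity it clusters the superpixel color means of a single image, whereas the method of the invention clusters the superpixel color means of all images in the set jointly:

```python
# Sketch of Step 3 (assumed implementation): per-superpixel color means followed
# by K-means clustering with C = 3 classes.
import numpy as np
from sklearn.cluster import KMeans

def cluster_superpixels(image, labels, n_classes=3):
    """image: H x W x 3 float array; labels: H x W superpixel map.
    Returns (superpixel color means, class index of each superpixel)."""
    n_sp = labels.max() + 1
    means = np.array([image[labels == s].mean(axis=0) for s in range(n_sp)])
    classes = KMeans(n_clusters=n_classes, n_init=10).fit_predict(means)
    return means, classes
```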
Step 4: compute the color histogram of each class from the K-means clustering result, compute the inter-class color distance from the color histograms, compute inter-class saliency from the inter-class color distance and spatial weighting information, and thereby obtain the initial saliency map of every image in the set. The detailed procedure is as follows:
First compute the color histogram of each class in the clustering result obtained in Step 3, then compute the inter-class color distance d(c_i, c_j) from the color histograms, where L denotes the total number of distinct colors in the image, f_{i,l} is the frequency with which the l-th of the L colors occurs in class c_i, and f_{j,l} is the frequency with which the l-th of the L colors occurs in class c_j. Inter-class saliency is then computed from d(c_i, c_j) and spatial weighting information, where D(c_i, c_j) is the Euclidean distance between the centroids of classes c_i and c_j, σ² = 0.4, and r(c_j) is the ratio of the number of pixels in class c_j to the total number of pixels in the image. Finally, the saliency value of every pixel is obtained from the class it belongs to in the original remote sensing image, yielding the initial saliency map of each remote sensing image.
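The exact saliency formula is given in the original drawings; the sketch below is one plausible realization consistent with the textual description (inter-class histogram contrast weighted by a Gaussian of the class-centroid distance with σ² = 0.4 and by the class size r(c_j)). The function name and the use of an L1 histogram distance are assumptions.

```python
# Sketch of the inter-class saliency computation of Step 4 under stated assumptions.
import numpy as np

def class_saliency(hists, centroids, sizes, sigma2=0.4):
    """hists: C x L color histograms (rows sum to 1); centroids: C x 2 normalized
    class centroids; sizes: C pixel-count ratios r(cj). Returns saliency per class."""
    C = hists.shape[0]
    sal = np.zeros(C)
    for i in range(C):
        for j in range(C):
            if i == j:
                continue
            d_color = np.abs(hists[i] - hists[j]).sum()          # inter-class color distance
            d_space = np.linalg.norm(centroids[i] - centroids[j])
            w_space = np.exp(-d_space ** 2 / sigma2)              # spatial weighting
            sal[i] += w_space * sizes[j] * d_color
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)    # normalize to [0, 1]
```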
Step 5: apply threshold segmentation with the maximum between-class variance (Otsu) method to the initial saliency map of every remote sensing image to obtain the optimal segmentation threshold of each initial saliency map, so that the initial saliency maps are divided into a target region class and a background region class, represented by the binary image Bw(i, j). The resulting binary image is multiplied with the original remote sensing image, finally yielding the initial target segmentation image ROI(i, j) of every image in the set.
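A minimal sketch of Step 5, assuming Otsu's method from scikit-image:

```python
# Sketch of Step 5 (assumed implementation): Otsu thresholding of the initial
# saliency map and masking of the original image.
from skimage.filters import threshold_otsu

def initial_target_segmentation(saliency, image):
    """saliency: H x W map; image: H x W x 3 original remote sensing image."""
    bw = saliency > threshold_otsu(saliency)     # binary image Bw(i, j)
    roi = image * bw[..., None]                  # initial target segmentation ROI(i, j)
    return bw, roi
```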
Step 6: halve the superpixel number K, perform superpixel segmentation of the initial target segmentation image of every remote sensing image, cluster all segmented initial target segmentation images again with the K-means algorithm, compute the color histogram of each class in the clustering result, compute the inter-class color distance from the color histograms, and compute inter-class saliency once more from the inter-class color distance and spatial weighting information, yielding the final saliency map of every image in the set.
Step 7: apply threshold segmentation with the maximum between-class variance method to the final saliency map of every remote sensing image to obtain the optimal segmentation threshold of each final saliency map, so that the final saliency maps are divided into a target region class and a background region class, represented by a binary image. The resulting binary image is multiplied with the original remote sensing image, yielding the common salient targets of the set of remote sensing images.
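A minimal sketch of the iterative stage (Steps 6 and 7), reusing the helper functions sketched above (superpixel_segment, cluster_superpixels and class_saliency are assumed names, not from the patent); the ROI image is assumed to be float RGB in [0, 1]:

```python
# Sketch of Steps 6-7 under stated assumptions: halve K, re-cluster the initial
# target segmentation image, recompute inter-class saliency, and apply Otsu
# thresholding to extract the common salient target.
import numpy as np
from skimage.filters import threshold_otsu

def final_detection(roi, image, K, n_bins=8):
    labels = superpixel_segment(roi, max(K // 2, 2))     # Step 6: halve the superpixel number
    means, classes = cluster_superpixels(roi, labels)
    pix_class = classes[labels]                          # per-pixel class map
    C = classes.max() + 1
    H, W = pix_class.shape
    yy, xx = np.mgrid[0:H, 0:W] / max(H, W)
    hists, cents, sizes = [], [], []
    for c in range(C):
        m = pix_class == c
        q = (roi[m] * (n_bins - 1)).astype(int)          # quantized RGB colors of class c
        idx = q[:, 0] * n_bins**2 + q[:, 1] * n_bins + q[:, 2]
        h = np.bincount(idx, minlength=n_bins**3).astype(float)
        hists.append(h / (h.sum() + 1e-12))
        cents.append([yy[m].mean(), xx[m].mean()])
        sizes.append(m.mean())
    sal = class_saliency(np.array(hists), np.array(cents), np.array(sizes))
    final_map = sal[pix_class]                           # final saliency map
    target = image * (final_map > threshold_otsu(final_map))[..., None]   # Step 7
    return final_map, target
```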
The effect of the invention can be further illustrated by the following experimental results and analysis:
1. Experimental data
The experimental data are remote sensing images of the Beijing suburbs acquired by the SPOT5 satellite; several 512 × 512 images cut from the full scene are used as experimental data. An example of the experimental data used by the invention is shown in Fig. 2.
2. Comparative experiments and evaluation metrics
The final saliency map and target detection result of the proposed method on the example image are shown in Fig. 3. The proposed method is compared with the traditional FT, ITTI and GBVS methods. The saliency maps and target detection results produced by the different methods are compared subjectively in Fig. 4 and Fig. 5, respectively. In Fig. 4, (a) is the saliency map produced by the FT method, (b) that of the ITTI method, (c) that of the GBVS method, and (d) that of the proposed method. In Fig. 5, (a) is the target detection result of the FT method, (b) that of the ITTI method, (c) that of the GBVS method, and (d) that of the proposed method.
The invention also uses the receiver operating characteristic (ROC) curve to evaluate the above target detection methods objectively. The ROC curve is a two-dimensional plane curve that shows the performance of a binary classifier; its abscissa is the false positive rate (FPR) and its ordinate is the true positive rate (TPR).
FPR is the proportion of the total non-target area of the image occupied by non-target regions that are wrongly marked as target. TPR is the proportion of the total target area occupied by target regions that are correctly marked. By varying the segmentation threshold applied to the saliency map over the gray range [0, 255], a series of binary images Bw(i, j) is obtained, a series of FPR and TPR values is computed, and the ROC curve is drawn.
With Gt(i, j) denoting the ground-truth target region of the image, FPR and TPR are computed as:
FPR = Σ_{i,j} Bw(i, j)(1 − Gt(i, j)) / Σ_{i,j} (1 − Gt(i, j))
TPR = Σ_{i,j} Bw(i, j) Gt(i, j) / Σ_{i,j} Gt(i, j)
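A minimal sketch of the ROC evaluation, assuming an 8-bit saliency map and a Boolean ground-truth mask:

```python
# Sketch of the ROC computation (assumed implementation): sweep the threshold
# over [0, 255], binarize the saliency map and accumulate (FPR, TPR) points
# against the ground-truth mask Gt.
import numpy as np

def roc_curve_points(saliency_u8, gt):
    """saliency_u8: H x W uint8 saliency map; gt: H x W boolean ground truth."""
    fpr, tpr = [], []
    for t in range(256):
        bw = saliency_u8 >= t
        tpr.append((bw & gt).sum() / gt.sum())        # true positive rate
        fpr.append((bw & ~gt).sum() / (~gt).sum())    # false positive rate
    return np.array(fpr), np.array(tpr)
```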
Fig. 6 shows the ground-truth annotation. Fig. 7 shows the ROC curves. In the ROC plot, for the same FPR value a higher TPR value means that the method correctly detects more of the target region. It can be seen from the figure that the performance of the method of the invention is clearly better than that of the FT, ITTI and GBVS methods.
Claims (1)
1. A method for detecting common salient targets in multiple remote sensing images based on iterative clustering, in which, first, the gray-level co-occurrence matrix is computed separately for every image in the set of remote sensing images, and the superpixel number required for each image is calculated from the contrast, energy, entropy and correlation of the gray-level co-occurrence matrix together with the image length and width; second, every image in the set is segmented into superpixels according to the obtained superpixel number, K-means clustering is applied to the superpixel segmentation result to obtain the classes corresponding to different ground objects, and inter-class saliency is computed, yielding the initial saliency map of each image; then target segmentation is applied to all initial saliency maps, superpixel-based K-means clustering is applied to the target segmentation results once more, and inter-class saliency is computed, yielding the final saliency map of each image; finally, threshold segmentation completes the automatic detection of the common salient targets of the remote sensing images; characterized in that the method comprises the following steps:
Step 1: compute the gray-level co-occurrence matrix of every image in the set of remote sensing images, obtain the four parameter values contrast Con, energy Asm, entropy Ent and correlation Corr of the gray-level co-occurrence matrix, compute the texture feature weight w with the formula w = (Asm × Corr) / (Con × Ent), and calculate the superpixel number K from the remote sensing image length M, the width N and the texture feature weight w;
Step 2: perform superpixel segmentation of every image in the set of remote sensing images according to the superpixel number obtained in Step 1, yielding the superpixel-segmented remote sensing images;
Step 3: compute the average color of each superpixel in every superpixel-segmented remote sensing image as the color mean of that superpixel, and apply K-means clustering to all superpixel-segmented remote sensing images based on the superpixel color means;
Step 4: compute the color histogram of each class from the K-means clustering result, compute the inter-class color distance from the color histograms, compute inter-class saliency from the inter-class color distance and spatial weighting information, and thereby obtain the initial saliency map of every image in the set of remote sensing images;
Step 5: apply threshold segmentation with the maximum between-class variance method to the initial saliency map of every remote sensing image, dividing these initial saliency maps into a target region class and a background region class, and thereby obtain the initial target segmentation image of every image in the set;
Step 6: halve the superpixel number K, perform superpixel segmentation of the initial target segmentation image of every remote sensing image, cluster all superpixel-segmented initial target segmentation images again with the K-means algorithm, compute the color histogram of each class in the clustering result, compute the inter-class color distance from the color histograms, compute inter-class saliency once more from the inter-class color distance and spatial weighting information, and thereby obtain the final saliency map of every image in the set;
Step 7: apply threshold segmentation with the maximum between-class variance method to the final saliency map of every image, thereby extracting the common salient targets of the set of remote sensing images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710395719.3A CN107229917B (en) | 2017-05-31 | 2017-05-31 | Method for detecting common salient targets in multiple remote sensing images based on iterative clustering
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710395719.3A CN107229917B (en) | 2017-05-31 | 2017-05-31 | Method for detecting common salient targets in multiple remote sensing images based on iterative clustering
Publications (2)
Publication Number | Publication Date |
---|---|
CN107229917A CN107229917A (en) | 2017-10-03 |
CN107229917B true CN107229917B (en) | 2019-10-15 |
Family
ID=59933930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710395719.3A Active CN107229917B (en) | 2017-05-31 | 2017-05-31 | Method for detecting common salient targets in multiple remote sensing images based on iterative clustering
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107229917B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108052559A (en) * | 2017-12-01 | 2018-05-18 | 国电南瑞科技股份有限公司 | Distribution terminal defect mining analysis method based on big data processing |
CN107992874B (en) * | 2017-12-20 | 2020-01-07 | 武汉大学 | Image salient target region extraction method and system based on iterative sparse representation |
CN107992875B (en) * | 2017-12-25 | 2018-10-26 | 北京航空航天大学 | A kind of well-marked target detection method based on image bandpass filtering |
CN108596832B (en) * | 2018-04-18 | 2022-07-05 | 中国计量大学 | Super-pixel parameter self-adaptive selection method of visual perception saturation strategy |
CN108871342A (en) * | 2018-07-06 | 2018-11-23 | 北京理工大学 | Subaqueous gravity aided inertial navigation based on textural characteristics is adapted to area's choosing method |
CN109086776A (en) * | 2018-07-06 | 2018-12-25 | 航天星图科技(北京)有限公司 | Typical earthquake disaster information extraction algorithm based on the detection of super-pixel region similitude |
CN110070545B (en) * | 2019-03-20 | 2023-05-26 | 重庆邮电大学 | Method for automatically extracting urban built-up area by urban texture feature density |
CN112347823B (en) * | 2019-08-09 | 2024-05-03 | 中国石油天然气股份有限公司 | Deposition phase boundary identification method and device |
CN110570352B (en) * | 2019-08-26 | 2021-11-05 | 腾讯科技(深圳)有限公司 | Image labeling method, device and system and cell labeling method |
CN110827298A (en) * | 2019-11-06 | 2020-02-21 | 齐鲁工业大学 | Method for automatically identifying retina area from eye image |
CN111553222B (en) * | 2020-04-21 | 2021-11-05 | 中国电子科技集团公司第五十四研究所 | Remote sensing ground feature classification post-processing method based on iteration superpixel segmentation |
CN111583279A (en) * | 2020-05-12 | 2020-08-25 | 重庆理工大学 | Super-pixel image segmentation method based on PCBA |
CN112017159B (en) * | 2020-07-28 | 2023-05-05 | 中国科学院西安光学精密机械研究所 | Ground target realism simulation method under remote sensing scene |
CN113658129B (en) * | 2021-08-16 | 2022-12-09 | 中国电子科技集团公司第五十四研究所 | Position extraction method combining visual saliency and line segment strength |
CN114663682B (en) * | 2022-03-18 | 2023-04-07 | 北京理工大学 | Target significance detection method for improving anti-interference performance |
CN115147733B (en) * | 2022-09-05 | 2022-11-25 | 山东东盛澜渔业有限公司 | Artificial intelligence-based marine garbage recognition and recovery method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103020993A (en) * | 2012-11-28 | 2013-04-03 | 杭州电子科技大学 | Visual saliency detection method by fusing dual-channel color contrasts |
CN103208001A (en) * | 2013-02-06 | 2013-07-17 | 华南师范大学 | Remote sensing image processing method combined with shape self-adaption neighborhood and texture feature extraction |
CN103413120A (en) * | 2013-07-25 | 2013-11-27 | 华南农业大学 | Tracking method based on integral and partial recognition of object |
CN103955913A (en) * | 2014-02-18 | 2014-07-30 | 西安电子科技大学 | SAR image segmentation method based on line segment co-occurrence matrix characteristics and regional maps |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9013536B2 (en) * | 2013-03-13 | 2015-04-21 | Futurewei Technologies, Inc. | Augmented video calls on mobile devices |
-
2017
- 2017-05-31 CN CN201710395719.3A patent/CN107229917B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN107229917A (en) | 2017-10-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |