CN111681262A - Method for detecting infrared dim target under complex background based on neighborhood gradient - Google Patents
- Publication number: CN111681262A (application CN202010384657.8A)
- Authority
- CN
- China
- Prior art keywords: window, sub, neighborhood, gradient, image
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Abstract
The invention discloses a method for detecting infrared dim targets under a complex background based on neighborhood gradients, which comprises the following steps: 1) generating an initial search main window and a sub-window template; 2) obtaining a foreground judgment window pixel matrix from the window minimum; 3) constructing a foreground judgment mapping table; 4) calculating neighborhood peak gradient and neighborhood step gradient features; 5) performing SVM classification weight judgment on the window features; 6) performing neighborhood contrast judgment on the window; 7) adaptively adjusting the window step length and traversing the single-frame image to output target information; 8) performing continuous target detection on the sequence images, carrying out motion characteristic judgment and static characteristic judgment respectively, and calculating the target detection information of the latest frame. The method can be used in scenes such as long-range photoelectric reconnaissance, unmanned aerial vehicle detection, coastal defense and vehicle-mounted ground-object target reconnaissance, and addresses the low detection accuracy, high false alarm rate and poor robustness of existing dim and small target detection techniques under complex backgrounds.
Description
Technical Field
The invention relates to the technical field of digital image processing, and in particular to an infrared dim and small target detection method based on neighborhood gradients.
Background
Infrared dim and small target detection is a research hotspot of photoelectric image processing technology and is widely applied in early warning and reconnaissance. Infrared dim targets have small imaging size, weak shape features, inconspicuous texture features and high susceptibility to noise interference, and are particularly prone to false alarms and missed detections against complex backgrounds such as cloud layers, ground objects and sea surfaces. Therefore, to improve the performance of photoelectric detection systems, research on how to detect dim and small targets under complex backgrounds is of great practical significance.
Yanghui et al. propose a background difference detection algorithm fusing a space-time structure tensor in "Detection of infrared weak and small moving targets under the ground-air background", but because it processes images pixel by pixel it is strongly affected by illumination change and noise, and initialization of the background model easily causes a "ghost" phenomenon. Zhang Gao et al. propose small target extraction based on filtering and global threshold segmentation in "Infrared small and weak target detection algorithm research under cloud background", but the filtering operation weakens dim target features to some extent and easily causes missed detections under complex backgrounds. Pimenta et al. propose a target detection method using the local background in a long-distance infrared dim and small target detection method, which suppresses target false alarms well but has low robustness for target extraction in complex scenes.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for detecting infrared dim targets under a complex background based on neighborhood gradient, aiming at solving the problems of effectiveness and robustness of infrared dim target detection under the complex background.
The invention provides a method for detecting infrared dim targets under a complex background based on neighborhood gradients, comprising the following steps:
Step 1, performing sliding-window traversal on the current frame image f(i, j) to generate an initial search main window template W1 and a sub-window template SW1;
Step 2, calculating the minimum pixel value MIN of the sub-window template SW1 matrix SH1(i, j), and obtaining a foreground judgment window pixel matrix WF(i, j);
Step 3, calculating a Mean WF_Mean and a variance WF_Varce of the matrix WF(i, j), and generating a foreground judgment mapping Table(i, j);
Step 4, calculating the neighborhood peak gradient, average gradient and neighborhood step gradient of each sub-block of the foreground judgment window pixel matrix WF(i, j) according to the foreground judgment mapping Table(i, j), and generating a neighborhood peak gradient eigenvector U1 and a neighborhood step gradient eigenvector U2;
Step 5, performing SVM classification weight judgment on the neighborhood peak gradient eigenvector U1 and the neighborhood step gradient eigenvector U2 to obtain a classification judgment weight Φ(U1, U2);
Step 6, for windows satisfying Φ(U1, U2) = 1, calculating the neighborhood contrast ratio ConRatio of the foreground judgment window pixel matrix and comparing it with the image global contrast ratio ConRatioAll; if ConRatio ≥ ConRatioAll, setting the current window target flag bit Flag to 1, otherwise setting Flag to 0;
Step 7, adaptively adjusting the sliding window step length S, calculating the sliding window template of the next round, and completing steps 2 to 6 in turn to fully traverse the current frame image; collecting all windows whose target flag bit Flag is 1 and generating a candidate target window vector Vec_Target1;
Step 8, performing steps 1 to 7 on N consecutive frames of images to obtain a candidate target detection vector sequence [Vec_Target1, Vec_Target2, … Vec_TargetN], where Vec_TargetN denotes the candidate target window vector of the N-th frame image; performing sequence motion characteristic judgment and static characteristic judgment on the sequence respectively, and calculating the dim and small target detection information of the current frame image.
The step 1 comprises the following steps:
Step 1-1, performing sliding-window traversal on the current frame image f(i, j); setting the main window template size to M1 × M1 and constructing an image search main window template W1 centered at (M1/2, M1/2) in the current frame image, where M1 is the width and height of the main window template; the main window template is a window determined by its center position and size;
Step 1-2, obtaining the main window template W1 image matrix H1; constructing a two-dimensional vector V(h(i), l(i)) from the pixel values h(i) and pixel positions l(i) of the matrix H1, and sorting in ascending order with pixel value as the primary key to generate the sorted vector V'(h(i), l(i)), where i takes the values 0, 1, …, M1 × M1 − 1;
Step 1-3, setting the sub-window template size to M2 × M2, where M2 is the width and height of the foreground judgment window, and generating a sub-window template SW1 centered at the sub-maximum pixel position V'(h(i−1), l(i−1)) of the main window template image.
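The window-template construction in steps 1-1 to 1-3 can be sketched as follows. This is a minimal NumPy version; row-major flattening for the pixel positions l(i) and the helper name make_windows are assumptions, not from the patent:

```python
import numpy as np

def make_windows(frame, cx, cy, M1=15, M2=9):
    """Steps 1-1..1-3 sketch: crop an M1 x M1 main window around (cx, cy),
    sort its pixels ascending, and centre an M2 x M2 sub-window on the
    position of the second-largest pixel (V'(h(i-1), l(i-1)) in the text)."""
    half1 = M1 // 2
    H1 = frame[cy - half1:cy + half1 + 1, cx - half1:cx + half1 + 1]  # main window matrix H1
    # two-dimensional vector V(h(i), l(i)): flat positions ordered by pixel value
    order = np.argsort(H1, axis=None, kind="stable")
    sy, sx = np.unravel_index(order[-2], H1.shape)   # sub-maximum pixel position
    scx, scy = cx - half1 + sx, cy - half1 + sy      # sub-window centre in frame coords
    half2 = M2 // 2
    SH1 = frame[scy - half2:scy + half2 + 1, scx - half2:scx + half2 + 1]
    return H1, SH1, (scx, scy)
```

Border handling (windows near the image edge) is omitted here for brevity.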
In step 2, the minimum pixel value MIN of the sub-window template SW1 matrix SH1(i, j) and the foreground judgment window pixel matrix WF(i, j) are calculated by the following formulas:
MIN = min(SH1(i, j)),
WF(i, j) = SH1(i, j) − MIN · E,
where min takes the minimum of the matrix, SH1(i, j) is the sub-window template SW1 matrix, i and j are the horizontal and vertical coordinates respectively, and E is a matrix whose elements are all 1 (so MIN is subtracted from every pixel); the width and height of the matrix E are both M2.
The step 3 comprises the following steps:
Step 3-1, calculating the Mean WF_Mean and the variance WF_Varce of the matrix WF(i, j);
Step 3-2, traversing each pixel of the matrix WF(i, j) and applying the following judgment to the current pixel value WF(i, j) to obtain the foreground judgment mapping Table(i, j):
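A minimal sketch of steps 2 and 3 follows. The patent elides the exact thresholding rule for Table(i, j); the mean-plus-k-times-variance rule and the function name foreground_window below are assumptions for illustration only:

```python
import numpy as np

def foreground_window(SH1, k=0.5):
    """Steps 2-3 sketch: WF = SH1 - MIN (the MIN . E subtraction), then a
    binary foreground map. The thresholding rule WF > mean + k*variance is an
    assumed stand-in for the patent's elided judgment."""
    WF = SH1 - SH1.min()                      # WF(i,j) = SH1(i,j) - MIN * E
    wf_mean = WF.mean()                       # WF_Mean
    wf_var = WF.var()                         # WF_Varce
    table = (WF > wf_mean + k * wf_var).astype(np.uint8)  # Table(i,j), assumed rule
    return WF, wf_mean, wf_var, table
```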
In step 4, the specific method for calculating the neighborhood peak gradient eigenvector U1 is as follows:
Step 4-1, setting the sub-block Wc size to T × T and dividing the foreground judgment window pixel matrix WF(i, j) into k sub-blocks;
Step 4-2, calculating the sub-maximum pixel value SMaxValue_k within each sub-block of the foreground judgment window pixel matrix WF(i, j), and calculating the neighborhood peak gradient characteristic value of each sub-block by the following formula:
Z(k) = SMaxValue0 − SMaxValue_k · Table(i, j)_k,
where SMaxValue0 is the sub-maximum value of the central sub-block Wc0, SMaxValue_k is the sub-maximum pixel value of the k-th sub-block Wc_k surrounding the central sub-block Wc0, Table(i, j)_k is the foreground judgment mapping table weight corresponding to the current k-th sub-block, and Z(k) is the neighborhood peak gradient characteristic value of the k-th sub-block.
Step 4-3, calculating the neighborhood peak gradient eigenvector U1 = {Z(1), Z(2), …, Z(K)}.
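Steps 4-1 to 4-3 can be sketched as below. The Table(i, j)_k weight is taken as 1 for every block, and the block layout (square WF with an odd number of T × T blocks per side, centre block in the middle) is an assumption:

```python
import numpy as np

def second_max(block):
    """Sub-maximum (second-largest) pixel value of a block."""
    return np.sort(block, axis=None)[-2]

def peak_gradient_vector(WF, T=3):
    """Steps 4-1..4-3 sketch: split WF into T x T sub-blocks, take each
    block's second-largest pixel, and form Z(k) = SMaxValue0 - SMaxValue_k
    against the centre block (Table(i,j)_k taken as 1 here)."""
    n = WF.shape[0] // T                       # blocks per side, odd n assumed
    blocks = [WF[r*T:(r+1)*T, c*T:(c+1)*T] for r in range(n) for c in range(n)]
    centre = blocks[len(blocks) // 2]          # central sub-block Wc0
    s0 = second_max(centre)
    return np.array([s0 - second_max(b) for b in blocks if b is not centre],
                    dtype=float)               # U1 = {Z(1), ..., Z(K)}
```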
In step 4, the average gradient MGradValue_k of each sub-block is calculated by combining the horizontal-direction gradient and the vertical-direction gradient of the sub-block Wc_k, where MGradValue_k denotes the average gradient of the k-th sub-block and T is the window size.
In step 4, the specific method for calculating the neighborhood step gradient eigenvector U2 is as follows: the neighborhood step gradient characteristic value of each sub-block is calculated by the formula
Y(k) = MGradValue0 − MGradValue_k · ΣTable(i, j)_k,
where MGradValue0 is the average gradient of the central sub-block Wc0, MGradValue_k is the average gradient of the k-th sub-block Wc_k surrounding the central sub-block Wc0, and Y(k) is the neighborhood step gradient characteristic value of the k-th sub-block; the neighborhood step gradient eigenvector U2 = {Y(1), Y(2), …, Y(k)} is obtained by calculation.
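Since the average-gradient formula itself is elided above, the sketch below uses one common form (mean first-difference gradient magnitude via np.gradient) as an assumed stand-in, and forms Y(k) with the Table weight taken as 1:

```python
import numpy as np

def mean_gradient(block):
    """Average gradient of one sub-block: mean magnitude of the horizontal
    and vertical first differences (assumed form of the elided formula)."""
    gy, gx = np.gradient(block.astype(float))
    return np.sqrt(gx**2 + gy**2).mean()

def step_gradient_vector(WF, T=3):
    """Neighbourhood step gradient sketch: Y(k) = MGradValue0 - MGradValue_k
    for each surrounding block (Table weight taken as 1)."""
    n = WF.shape[0] // T
    blocks = [WF[r*T:(r+1)*T, c*T:(c+1)*T] for r in range(n) for c in range(n)]
    centre = blocks[len(blocks) // 2]          # central sub-block Wc0
    m0 = mean_gradient(centre)
    return np.array([m0 - mean_gradient(b) for b in blocks if b is not centre])
```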
Step 5 comprises the following steps:
Step 5-1, randomly selecting X images (typically X = 500) from the manually constructed positive and negative sample libraries of the search main window template, where positive samples are real target matrix images and negative samples are background matrix images; for each sample, calculating the neighborhood peak gradient eigenvector U1 and the neighborhood step gradient eigenvector U2 according to the method of step 4, and generating the feature vector U = (U1, U2);
Step 5-2, performing model training with libSVM (see, for example, the literature on Spark-based LIBSVM parameter-optimization parallelization), selecting an RBF kernel and performing cross-validation on the positive and negative samples to obtain the optimal parameters (C, γ), where C is the penalty coefficient and γ is the width parameter of the RBF function;
Step 5-3, generating the classification judgment weight Φ(U1, U2) from the optimal parameters (C, γ).
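A toy stand-in for steps 5-1 to 5-3, using scikit-learn's SVC (which wraps libsvm) in place of raw libSVM; the sample features, cluster centres and grid values below are illustrative, not the patent's:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Feature vectors U = (U1, U2) from positive (target) and negative (background)
# samples, RBF kernel, and a small cross-validated grid search for (C, gamma).
rng = np.random.default_rng(0)
pos = rng.normal(loc=3.0, scale=0.5, size=(50, 2))   # target-like features
neg = rng.normal(loc=0.0, scale=0.5, size=(50, 2))   # background-like features
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)

grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10], "gamma": [0.1, 1.0]}, cv=3)
grid.fit(X, y)
phi = grid.best_estimator_                           # plays the role of phi(U1, U2)
pred = phi.predict([[3.0, 3.0], [0.0, 0.0]])
```

In practice the real feature vectors would come from the peak/step gradient computations of step 4 rather than synthetic clusters.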
In step 6, the neighborhood contrast ratio of the foreground judgment window pixel matrix is calculated according to the following formula:
where N is the width and height of the foreground judgment window pixel matrix WF(i, j), WF(i+1, j) is the pixel value at coordinate (i+1, j), and WF(i, j+1) is the pixel value at coordinate (i, j+1);
the image global contrast ratio ConRatioAll is calculated by the following formula:
where W and H are the width and height of the current frame image f(i, j), f(i, j) is the pixel value at coordinate (i, j) in the image, f(i+1, j) is the pixel value at coordinate (i+1, j), and f(i, j+1) is the pixel value at coordinate (i, j+1).
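The contrast formulas themselves are elided above; the sketch below uses an assumed form (mean absolute first difference to the right and downward neighbours), chosen to match the WF(i+1, j) and WF(i, j+1) terms the text names:

```python
import numpy as np

def contrast_ratio(img):
    """Assumed form of the elided contrast formulas: mean absolute first
    difference to the right and downward neighbours, per pixel."""
    img = img.astype(float)
    dx = np.abs(img[:, 1:] - img[:, :-1])   # |f(i, j+1) - f(i, j)|
    dy = np.abs(img[1:, :] - img[:-1, :])   # |f(i+1, j) - f(i, j)|
    return (dx.sum() + dy.sum()) / img.size

def window_passes(WF, frame):
    """Step 6: keep the window only if its local contrast reaches the global one."""
    return contrast_ratio(WF) >= contrast_ratio(frame)
```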
The specific method for adaptively adjusting the sliding window step length in step 7 is as follows:
for the n-th traversal search of the current frame image, the main window template Wn has center position (x_n, y_n), the sub-window template SWn of the n-th traversal has size M2 × M2, and SWn has center position (sx_n, sy_n); the horizontal step length X_SW and the vertical step length Y_SW of the sliding window are adjusted to satisfy the following formulas:
X_SW = |sx_n − x_n + M2/2|,
Y_SW = min(|sy_{n−1} − y_{n−1} + M2/2|, |sy_n − y_n + M2/2|),
where (sx_{n−1}, sy_{n−1}) is the center position of the sub-window template SW_{n−1} at the (n−1)-th traversal and (x_{n−1}, y_{n−1}) is the center position of the main window template W_{n−1}.
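The step-length rule can be written directly from the two formulas above; the function name and tuple interface are illustrative:

```python
def adaptive_steps(main_c, sub_c, prev_main_c, prev_sub_c, M2=9):
    """Step 7 rule:
    X_SW = |sx_n - x_n + M2/2|,
    Y_SW = min(|sy_{n-1} - y_{n-1} + M2/2|, |sy_n - y_n + M2/2|)."""
    xn, yn = main_c
    sxn, syn = sub_c
    xp, yp = prev_main_c
    sxp, syp = prev_sub_c
    x_sw = abs(sxn - xn + M2 // 2)
    y_sw = min(abs(syp - yp + M2 // 2), abs(syn - yn + M2 // 2))
    return x_sw, y_sw
```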
in step 8, the specific step of performing sequence motion characteristic judgment on the candidate target detection vector sequence is as follows:
Step 8-1, calculating the center distance between candidate targets of the N-th frame image and the (N+1)-th frame image; if the center distance is below a threshold DS (typically DS = 20), the candidate target point trace quality score PQ is incremented by 1, otherwise PQ remains unchanged;
step 8-2, judging whether the (N + 2) th frame candidate trace meets the motion trace characteristics;
and 8-3, updating the center position of the candidate target of the (N + 2) th frame according to the motion trace characteristics.
In step 8-2, whether the candidate trace meets the motion trace characteristics is judged by adopting the following formula:
F=(α<amax&&max(Lw1,Lw2,Lw3)<Lwmax)
if the parameter F is 1, the current candidate target point satisfies the motion trace characteristics; α is the target turning angle over three consecutive frame images, a_max is the maximum turning angle parameter, Lw1, Lw2, Lw3 are the relative movement distances among the target points (x_N, y_N), (x_{N+1}, y_{N+1}), (x_{N+2}, y_{N+2}) of three consecutive frame images, and Lw_max is the maximum gate parameter;
if the current candidate target trace meets the motion trace characteristics, the candidate target trace quality score PQ is incremented by 1; otherwise PQ is decremented by 1;
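The gate F of step 8-2 can be sketched as follows; the turning-angle computation via atan2 and the default a_max and Lw_max values are assumptions for illustration, not the patent's parameters:

```python
import math

def motion_trace_ok(p1, p2, p3, a_max=math.radians(60), lw_max=20.0):
    """Step 8-2 gate F = (alpha < a_max && max(Lw1, Lw2, Lw3) < Lw_max).
    alpha is the turning angle across three consecutive frame positions."""
    lw1 = math.dist(p1, p2)
    lw2 = math.dist(p2, p3)
    lw3 = math.dist(p1, p3)
    # turning angle between the two displacement vectors
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    alpha = abs(math.atan2(cross, dot))
    return alpha < a_max and max(lw1, lw2, lw3) < lw_max
```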
In step 8-3, the candidate target center position of the (N+2)-th frame is updated by the following formulas:
x'_{N+2} = λ·x_{N+2} + (1−λ)(x_{N+1} + (x_{N+1} − x_N)·T),
y'_{N+2} = λ·y_{N+2} + (1−λ)(y_{N+1} + (y_{N+1} − y_N)·T),
where λ is the weighting factor parameter, T is the time between two adjacent image frames, and (x'_{N+2}, y'_{N+2}) are the updated candidate target center coordinates of the (N+2)-th frame image; target point traces with quality score greater than the threshold NS are output.
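The step 8-3 update blends the measured position with a constant-velocity prediction from the two previous frames; a direct transcription (λ = 0.8 as in the embodiment) might look like:

```python
def update_centre(pN, pN1, pN2, lam=0.8, T=1):
    """Step 8-3: x'_{N+2} = lam*x_{N+2} + (1-lam)*(x_{N+1} + (x_{N+1}-x_N)*T),
    and likewise for y. pN, pN1, pN2 are the centres at frames N, N+1, N+2."""
    (xN, yN), (x1, y1), (x2, y2) = pN, pN1, pN2
    px = x1 + (x1 - xN) * T                 # constant-velocity prediction at N+2
    py = y1 + (y1 - yN) * T
    return lam * x2 + (1 - lam) * px, lam * y2 + (1 - lam) * py
```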
In step 8, the static characteristic determination method specifically includes:
If the parameter Lws is 1, the current candidate target point satisfies the static characteristic, where (x_N, y_N) and (x_{N+M}, y_{N+M}) are the target trace center positions in the N-th and (N+M)-th frames respectively, and Lw_min is the minimum gate parameter.
Advantageous effects:
The invention discloses a method for detecting infrared dim and small targets under a complex background based on neighborhood gradients, which solves the problems that dim and small targets in a photoelectric detection system are hard to find against a complex background and are extracted with low accuracy. An adaptive sliding-window fast search template is constructed, and the neighborhood peak gradient and neighborhood step gradient features are innovatively computed through a mapping table, addressing the robustness of dim and small target feature description in complex backgrounds; classification weights optimized with libSVM improve the accuracy of the feature description model; neighborhood contrast judgment, motion characteristic estimation and static characteristic judgment on the candidate sequence markedly reduce the false alarm rate of target detection. Verification tests under complex cloud, ground-object and sea-surface backgrounds show a clear dim and small target detection effect, with average single-frame processing time below 30 ms, detection accuracy improved by 10 percent and detection false alarm rate reduced by 8 percent, fully verifying the effectiveness of the method.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Fig. 1 is a flow chart of a method according to the invention.
Fig. 2 is a schematic diagram of determining the motion trace characteristics.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, according to an embodiment of the present invention, a method for detecting infrared dim and small targets under a complex background based on neighborhood gradient includes the following steps:
Step 1, performing sliding-window traversal on the current frame image f(i, j) to generate an initial search main window template W1 and a sub-window template SW1.
Step 2, calculating the sub-window template SW1 matrix SH1(i, j) and obtaining the foreground judgment window pixel matrix WF(i, j), where WF(i, j) = SH1(i, j) − min(SH1(i, j)) · E, min is the matrix minimum operation, and E is a matrix whose elements are all 1.
Step 3, calculating the Mean WF_Mean and the variance WF_Varce of the matrix WF(i, j), and generating the foreground judgment mapping Table(i, j) from the current pixel values WF(i, j).
Further, the foreground judgment mapping Table (i, j) is:
Step 4, calculating the neighborhood peak gradient, average gradient and neighborhood step gradient of each sub-block Wc_k of the foreground judgment window pixel matrix WF(i, j) according to the mapping Table(i, j), and generating the neighborhood peak gradient eigenvector U1 and the neighborhood step gradient eigenvector U2.
Further, the neighborhood peak gradient characteristic value is:
Z(k) = SMaxValue0 − SMaxValue_k · Table(i, j)_k,
where SMaxValue0 is the sub-maximum value of the central sub-block Wc0, SMaxValue_k is the sub-maximum pixel value of the k-th surrounding sub-block Wc_k, Table(i, j)_k is the foreground judgment mapping table weight of the k-th sub-block, and Z(k) is the neighborhood peak gradient characteristic value of the k-th sub-block.
Further, the average gradient MGradValue_k of sub-block Wc_k is computed from the horizontal-direction gradient and the vertical-direction gradient of the sub-block Wc_k.
Further, the neighborhood step gradient characteristic value is:
Y(k) = MGradValue0 − MGradValue_k · ΣTable(i, j)_k,
where MGradValue0 is the average gradient of the central sub-block Wc0, MGradValue_k is the average gradient of the k-th surrounding sub-block Wc_k, Table(i, j)_k is the foreground judgment mapping table weight of the k-th sub-block, and Y(k) is the neighborhood step gradient characteristic value of the k-th sub-block.
Step 5, constructing positive and negative samples for the search template main window, and performing classification weight judgment with libSVM (RBF kernel) on the neighborhood peak gradient eigenvector U1 and the neighborhood step gradient eigenvector U2 of the foreground judgment window pixel matrix to obtain Φ(U1, U2).
Further, the RBF kernel is:
K(u, v) = exp(−γ·||u − v||²)
Step 6, for windows satisfying Φ(U1, U2) = 1, calculating the neighborhood contrast ratio ConRatio of the foreground judgment window pixel matrix and comparing it with the image global contrast ratio ConRatioAll; if ConRatio ≥ ConRatioAll, the current window target flag bit Flag is set to 1, otherwise Flag is set to 0.
Further, the neighborhood contrast ratio and the image global contrast ratio are respectively:
where N is the width and height of WF(i, j), W and H are the width and height of the current frame image f(i, j), and f(i, j) is the pixel value at coordinate (i, j) in the image.
Step 7, adaptively adjusting the sliding window step length S and calculating the next sliding window template to fully traverse the current frame image; steps 2-6 are completed in turn, and a candidate target window vector Vec_Target1 is generated from the windows whose target flag Flag is 1.
Further, the sliding window step length is adaptively adjusted as follows: for the n-th traversal search of the current frame image, the main window template Wn has center position (x_n, y_n), the sub-window template SWn has size 9 × 9, and SWn has center position (sx_n, sy_n); the horizontal step length X_SW and the vertical step length Y_SW of the sliding window are adjusted to satisfy:
X_SW = |sx_n − x_n + 4|,
Y_SW = min(|sy_{n−1} − y_{n−1} + M2/2|, |sy_n − y_n + M2/2|),
where (sx_{n−1}, sy_{n−1}) is the center position of the (n−1)-th sub-window template SW_{n−1}.
Step 8, performing steps 1 to 7 on 3 consecutive frames of images to obtain the candidate target detection vector sequence [Vec_Target1, Vec_Target2, Vec_Target3]; calculating the center distance between candidate targets of Vec_Target1 and Vec_Target2, and if it is below the threshold DS = 10, adding 1 to the candidate target point trace quality score PQ, otherwise keeping PQ unchanged; determining whether the candidate traces in Vec_Target2 satisfy the motion trace characteristics, adding 1 to PQ if so and subtracting 1 otherwise; updating the candidate target center positions of Vec_Target3 according to the motion characteristics, and outputting target traces whose quality score exceeds the threshold NS = 3.
Further, the specific method for judging the motion trace characteristics is as follows:
F=(α<amax&&max(Lw1,Lw2,Lw3)<Lwmax)
If F equals 1, the current candidate target point conforms to the motion trace characteristics; otherwise it does not, as shown in FIG. 2, where α is the target turning angle over three consecutive frames, a_max is the maximum turning angle parameter, Lw1, Lw2, Lw3 are the relative movement distances among the target points (x_N, y_N), (x_{N+1}, y_{N+1}), (x_{N+2}, y_{N+2}) of three consecutive frame images, and the maximum gate parameter Lw_max takes the value 10.
Further, the candidate target center position (x'_{N+2}, y'_{N+2}) of the (N+2)-th frame is updated as:
x'_{N+2} = λ·x_{N+2} + (1−λ)(x_{N+1} + (x_{N+1} − x_N)·T),
y'_{N+2} = λ·y_{N+2} + (1−λ)(y_{N+1} + (y_{N+1} − y_N)·T),
where λ is the weighting factor parameter with value 0.8, and T is the time between two adjacent image frames; in a common search system T typically takes the value 1 or 3.
Further, the static characteristic determination method specifically includes:
If Lws equals 1, the current candidate target point conforms to the static characteristic, where (x_N, y_N) and (x_{N+M}, y_{N+M}) are the target trace center positions of the N-th and (N+3)-th frames, and the minimum gate parameter Lw_min takes the value 6.
The present invention provides a method for detecting infrared dim and small targets under a complex background based on neighborhood gradient; there are many specific methods and approaches for implementing this technical scheme, and the above description is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as within the protection scope of the invention. All components not specified in this embodiment can be realized by the prior art.
Claims (14)
1. A method for detecting infrared dim targets under a complex background based on neighborhood gradient is characterized by comprising the following steps:
step 1, performing sliding-window traversal on a current frame image f(i, j) to generate an initial search main window template W1 and a sub-window template SW1;
step 2, calculating the minimum pixel value MIN of the sub-window template SW1 matrix SH1(i, j) and obtaining a foreground judgment window pixel matrix WF(i, j);
step 3, calculating a Mean WF_Mean and a variance WF_Varce of the matrix WF(i, j) and generating a foreground judgment mapping Table(i, j);
step 4, calculating the neighborhood peak gradient, average gradient and neighborhood step gradient of each sub-block of the foreground judgment window pixel matrix WF(i, j) according to the foreground judgment mapping Table(i, j), and generating a neighborhood peak gradient eigenvector U1 and a neighborhood step gradient eigenvector U2;
step 5, performing SVM classification weight judgment on the neighborhood peak gradient eigenvector U1 and the neighborhood step gradient eigenvector U2 to obtain a classification judgment weight Φ(U1, U2);
step 6, for windows satisfying Φ(U1, U2) = 1, calculating the neighborhood contrast ratio ConRatio of the foreground judgment window pixel matrix and comparing it with the image global contrast ratio ConRatioAll; if ConRatio ≥ ConRatioAll, setting the current window target flag bit Flag to 1, otherwise setting Flag to 0;
step 7, adaptively adjusting the sliding window step length S, calculating the sliding window template of the next round, and completing steps 2 to 6 in turn to fully traverse the current frame image; collecting all windows whose target flag bit Flag is 1 and generating a candidate target window vector Vec_Target1;
step 8, performing steps 1 to 7 on N consecutive frames of images to obtain a candidate target detection vector sequence [Vec_Target1, Vec_Target2, … Vec_TargetN], where Vec_TargetN denotes the candidate target window vector of the N-th frame image; performing sequence motion characteristic judgment and static characteristic judgment on the sequence respectively, and calculating the dim and small target detection information of the current frame image.
2. The method of claim 1, wherein step 1 comprises:
step 1-1, performing sliding-window traversal on the current frame image f(i, j); setting the main window template size to M1 × M1 and constructing an image search main window template W1 centered at (M1/2, M1/2) in the current frame image, where M1 is the width and height of the main window template;
step 1-2, obtaining the main window template W1 image matrix H1; constructing a two-dimensional vector V(h(i), l(i)) from the pixel values h(i) and pixel positions l(i) of the matrix H1, and sorting in ascending order with pixel value as the primary key to generate the sorted vector V'(h(i), l(i)), where i takes the values 0, 1, …, M1 × M1 − 1;
step 1-3, setting the sub-window template size to M2 × M2, where M2 is the width and height of the foreground judgment window, and generating a sub-window template SW1 centered at the sub-maximum pixel position V'(h(i−1), l(i−1)) of the main window template image.
3. The method of claim 2, wherein in step 2, the minimum pixel value MIN of the sub-window template SW1 matrix SH1(i, j) and the foreground judgment window pixel matrix WF(i, j) are calculated by the following formulas:
MIN = min(SH1(i, j)),
WF(i, j) = SH1(i, j) − MIN · E,
4. The method of claim 3, wherein step 3 comprises:
step 3-1, calculating the Mean WF_Mean and the variance WF_Varce of the matrix WF(i, j);
step 3-2, traversing each pixel of the matrix WF(i, j) and applying the following judgment to the current pixel value WF(i, j) to obtain the foreground judgment mapping Table(i, j):
5. The method of claim 4, wherein in step 4, the specific method for calculating the neighborhood peak gradient feature vector U1 is as follows:
step 4-1, setting the size of the sub-blocks Wc to T × T and dividing the foreground decision window pixel matrix WF(i, j) into k sub-blocks;
step 4-2, calculating the second-largest pixel value SMaxValue_k within each sub-block of the foreground decision window pixel matrix WF(i, j), and calculating the neighborhood peak gradient feature value of each sub-block by the following formula:
Z(k) = SMaxValue_0 - SMaxValue_k · Table(i, j)_k,
wherein SMaxValue_0 is the second-largest pixel value of the central sub-block Wc0, SMaxValue_k is the second-largest pixel value of the k-th sub-block Wck surrounding the central sub-block Wc0, Table(i, j)_k is the foreground decision mapping table weight corresponding to the current k-th sub-block, and Z(k) is the neighborhood peak gradient feature value of the k-th sub-block;
step 4-3, calculating the neighborhood peak gradient feature vector U1 = {Z(1), Z(2), ..., Z(K)}.
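A sketch of claim 5, assuming WF is square with side a multiple of T and treating the per-block Table(i, j)_k term as one scalar weight per block (an assumption about its shape; the claim only names it):

```python
import numpy as np

def second_largest(block):
    """Second-largest pixel value of a block (the claim's 'sub-maximum')."""
    return np.sort(block, axis=None)[-2]

def peak_gradient_vector(WF, weights, T=3):
    """Claim 5 sketch: split WF into T x T sub-blocks, take the centre
    block's second-largest value SMaxValue_0, and form
    Z(k) = SMaxValue_0 - SMaxValue_k * w_k for each surrounding block."""
    n = WF.shape[0] // T
    blocks = [WF[r*T:(r+1)*T, c*T:(c+1)*T]
              for r in range(n) for c in range(n)]
    centre = (n * n) // 2          # index of the centre sub-block Wc0
    sm0 = second_largest(blocks[centre])
    return np.array([sm0 - second_largest(b) * w
                     for k, (b, w) in enumerate(zip(blocks, weights))
                     if k != centre])
```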
6. The method of claim 5, wherein in step 4, the average gradient MGradValue_k of each sub-block is calculated by the following formula:
7. The method of claim 6, wherein in step 4, the specific method for calculating the neighborhood step gradient feature vector U2 is as follows: the neighborhood step gradient feature value of the central sub-block is calculated by the following formula:
Y(k) = MGradValue_0 - MGradValue_k · ΣTable(i, j)_k,
wherein MGradValue_0 is the average gradient of the central sub-block Wc0, MGradValue_k is the average gradient of the k-th sub-block Wck surrounding the central sub-block Wc0, and Y(k) is the neighborhood step gradient feature value of the k-th sub-block; the neighborhood step gradient feature vector U2 = {Y(1), Y(2), ..., Y(k)} is obtained by calculation.
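A sketch of claim 7. The average-gradient formula of claim 6 is not reproduced in this extract, so `mean_gradient` below (mean absolute forward difference in both axes) is an assumed stand-in for MGradValue_k; the Y(k) combination itself follows the claimed formula.

```python
import numpy as np

def mean_gradient(block):
    """Assumed form of the per-block average gradient MGradValue_k
    (the exact claim-6 formula is elided in this extract)."""
    gx = np.abs(np.diff(block, axis=1)).mean()
    gy = np.abs(np.diff(block, axis=0)).mean()
    return (gx + gy) / 2.0

def step_gradient_vector(WF, table, T=3):
    """Claim 7 sketch: Y(k) = MGradValue_0 - MGradValue_k * sum(Table_k)."""
    n = WF.shape[0] // T
    centre = (n * n) // 2
    blocks = [(WF[r*T:(r+1)*T, c*T:(c+1)*T],
               table[r*T:(r+1)*T, c*T:(c+1)*T])
              for r in range(n) for c in range(n)]
    g0 = mean_gradient(blocks[centre][0])
    return np.array([g0 - mean_gradient(b) * t.sum()
                     for k, (b, t) in enumerate(blocks) if k != centre])
```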
8. The method of claim 7, wherein step 5 comprises:
step 5-1, randomly drawing X images from the positive and negative sample libraries of the main window template, wherein a positive sample is a real-target matrix image and a negative sample is a background matrix image; calculating for each sample the neighborhood peak gradient feature vector U1 and the neighborhood step gradient feature vector U2 according to the method of step 4, and generating the feature vector U = (U1, U2);
step 5-2, performing model training with libSVM, selecting an RBF kernel, and performing cross-validation on the positive and negative samples to obtain the optimal parameters (C, γ), wherein C is the penalty coefficient and γ is the width parameter of the RBF function;
step 5-3, generating a classification decision weight φ(U1, U2) from the optimal parameters (C, γ).
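Step 5 can be sketched with scikit-learn's `SVC` (which wraps libSVM); the patent uses libSVM directly, and the feature data below is synthetic, so this shows only the RBF cross-validation search over (C, γ), not the patented features.

```python
# Step 5 sketch: cross-validated RBF-SVM training on synthetic stand-ins
# for the concatenated feature vectors U = (U1, U2).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (40, 16)),   # positive (target) samples
               rng.normal(0.0, 0.5, (40, 16))])  # negative (background) samples
y = np.array([1] * 40 + [0] * 40)

# Grid search over the penalty C and RBF width gamma, as in step 5-2.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X, y)
phi = grid.best_estimator_  # plays the role of the decision weight phi(U1, U2)
```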
9. The method of claim 8, wherein in step 6, the neighborhood contrast ConRatio of the foreground decision window pixel matrix is calculated by the following formula:
wherein N is the width and height of the foreground decision window pixel matrix WF(i, j), WF(i+1, j) is the pixel value at coordinate (i+1, j), and WF(i, j+1) is the pixel value at coordinate (i, j+1);
the image global contrast ConRatioAll is calculated by the following formula:
wherein W and H are respectively the width and height of the current frame image f(i, j), f(i, j) is the pixel value at coordinate (i, j) in the image, f(i+1, j) is the pixel value at coordinate (i+1, j), and f(i, j+1) is the pixel value at coordinate (i, j+1).
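The ConRatio formulas themselves are elided in this extract; the claim's wording only names the neighbour differences WF(i+1, j) - WF(i, j) and WF(i, j+1) - WF(i, j). The mean-absolute-difference measure below is therefore an assumed realisation, not the claimed formula.

```python
import numpy as np

def contrast(img):
    """Assumed realisation of the claim-9 contrast measures: mean
    absolute difference to the lower and right neighbours. The same
    function would serve for ConRatio (on WF) and ConRatioAll (on f)."""
    dh = np.abs(img[1:, :] - img[:-1, :]).mean()
    dv = np.abs(img[:, 1:] - img[:, :-1]).mean()
    return (dh + dv) / 2.0
```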
10. The method as claimed in claim 9, wherein the step of adaptively adjusting the sliding-window step size in step 7 is specifically:
for the n-th traversal search of the current frame image, the center position of the main window template Wn is (x_n, y_n), the sub-window template SW_n of the n-th traversal has size M2 × M2 and center position (sx_n, sy_n), and the horizontal step X_SW and the vertical step Y_SW of the sliding window are adjusted to satisfy the following formulas:
X_SW = |sx_n - x_n + M2/2|,
Y_SW = min(|sy_{n-1} - y_{n-1} + M2/2|, |sy_n - y_n + M2/2|),
wherein (sx_{n-1}, sy_{n-1}) is the center position of the sub-window template SW_{n-1} at the (n-1)-th traversal, and (x_{n-1}, y_{n-1}) is the center position of the main window template W_{n-1}.
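The claim-10 step adaptation translates directly into code; the function name and argument order here are illustrative:

```python
def adaptive_step(sx_n, sy_n, x_n, y_n, sy_prev, y_prev, M2=8):
    """Claim 10: adapt the sliding-window step to the offset between the
    sub-window centre (sx_n, sy_n) and the main-window centre (x_n, y_n);
    sy_prev / y_prev are the corresponding values from traversal n-1."""
    X_SW = abs(sx_n - x_n + M2 // 2)
    Y_SW = min(abs(sy_prev - y_prev + M2 // 2),
               abs(sy_n - y_n + M2 // 2))
    return X_SW, Y_SW
```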
11. The method as claimed in claim 10, wherein in step 8, the step of determining the sequence motion characteristics of the candidate target detection vector sequence comprises:
step 8-1, calculating the center distance between candidate targets of the N-th frame image and the (N+1)-th frame image; if the center distance is below the threshold DS, adding 1 to the trace quality score PQ of the candidate target point, otherwise keeping PQ unchanged;
step 8-2, judging whether the (N + 2) th frame candidate trace meets the motion trace characteristics;
and 8-3, updating the center position of the candidate target of the (N + 2) th frame according to the motion trace characteristics.
12. The method as claimed in claim 11, wherein in step 8-2, the following formula is used to determine whether the candidate traces satisfy the motion trace characteristics:
F = (α < a_max && max(Lw1, Lw2, Lw3) < Lw_max)
wherein a parameter F of 1 indicates that the current candidate target point satisfies the motion trace characteristic, and otherwise that it does not; α is the target turning angle over three consecutive frame images, a_max is the maximum turning angle parameter, Lw1, Lw2 and Lw3 are respectively the relative movement distances of the target points (x_N, y_N), (x_{N+1}, y_{N+1}) and (x_{N+2}, y_{N+2}) of the three consecutive frame images, and Lw_max is the maximum gate parameter;
if the current candidate target trace satisfies the motion trace characteristic, the candidate target trace quality score PQ is increased by 1; otherwise PQ is decreased by 1.
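A sketch of the claim-12 test. The claim does not say how the turning angle α is measured, so taking it as the angle between the two frame-to-frame motion vectors is an assumption, as are the default a_max and Lw_max values.

```python
import math

def trace_check(p0, p1, p2, a_max=math.radians(60), lw_max=20.0):
    """Claim 12 sketch: F = (alpha < a_max) and (max(Lw1, Lw2, Lw3) < Lw_max)
    for candidate centres p0, p1, p2 in three consecutive frames."""
    lw1 = math.dist(p0, p1)
    lw2 = math.dist(p1, p2)
    lw3 = math.dist(p0, p2)
    # Assumed definition of alpha: angle between the two motion vectors.
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    alpha = (math.acos(max(-1.0, min(1.0, dot / (lw1 * lw2))))
             if lw1 * lw2 else 0.0)
    return alpha < a_max and max(lw1, lw2, lw3) < lw_max
```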
13. The method as claimed in claim 12, wherein in step 8-3, the candidate target center position of the (N+2)-th frame is updated by the following formulas:
x'_{N+2} = λ·x_{N+2} + (1 - λ)(x_{N+1} + (x_{N+1} - x_N)·T),
y'_{N+2} = λ·y_{N+2} + (1 - λ)(y_{N+1} + (y_{N+1} - y_N)·T),
wherein λ is the weighting factor parameter, T is the time between two adjacent image frames, and (x'_{N+2}, y'_{N+2}) are the updated candidate target center coordinates of the (N+2)-th frame image; target traces with a quality score greater than the threshold NS are output.
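The claim-13 update blends the measured (N+2)-frame centre with a constant-velocity prediction from frames N and N+1; a direct sketch (default λ is illustrative):

```python
def update_centre(pN, pN1, pN2, lam=0.7, T=1.0):
    """Claim 13: x'_{N+2} = lam*x_{N+2} + (1-lam)*(x_{N+1} +
    (x_{N+1} - x_N)*T), and likewise for y."""
    x_pred = pN1[0] + (pN1[0] - pN[0]) * T  # constant-velocity prediction
    y_pred = pN1[1] + (pN1[1] - pN[1]) * T
    return (lam * pN2[0] + (1 - lam) * x_pred,
            lam * pN2[1] + (1 - lam) * y_pred)
```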
14. The method as claimed in claim 13, wherein in step 8, the stationary characteristic is determined as follows:
a parameter Lws of 1 indicates that the current candidate target point satisfies the stationary characteristic, wherein (x_N, y_N) and (x_{N+M}, y_{N+M}) are respectively the target trace center positions in the N-th frame and the (N+M)-th frame, and Lw_min is the minimum gate parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010384657.8A CN111681262B (en) | 2020-05-08 | 2020-05-08 | Method for detecting infrared dim target under complex background based on neighborhood gradient |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111681262A true CN111681262A (en) | 2020-09-18 |
CN111681262B CN111681262B (en) | 2021-09-03 |
Family
ID=72452572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010384657.8A Active CN111681262B (en) | 2020-05-08 | 2020-05-08 | Method for detecting infrared dim target under complex background based on neighborhood gradient |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111681262B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1325463B1 (en) * | 2000-10-10 | 2008-12-17 | Lockheed Martin | Balanced object tracker in an image sequence |
CN103336965A (en) * | 2013-07-18 | 2013-10-02 | 江西省电力公司检修分公司 | Prospect and feature extraction method based on outline differences and principal direction histogram of block |
CN103472445A (en) * | 2013-09-18 | 2013-12-25 | 电子科技大学 | Detecting tracking integrated method for multi-target scene |
CN104951775A (en) * | 2015-07-15 | 2015-09-30 | 攀钢集团攀枝花钢钒有限公司 | Video technology based secure and smart recognition method for railway crossing protection zone |
CN105260749A (en) * | 2015-11-02 | 2016-01-20 | 中国电子科技集团公司第二十八研究所 | Real-time target detection method based on oriented gradient two-value mode and soft cascade SVM |
CN106251362A (en) * | 2016-07-15 | 2016-12-21 | 中国电子科技集团公司第二十八研究所 | A kind of sliding window method for tracking target based on fast correlation neighborhood characteristics point and system |
CN108549891A (en) * | 2018-03-23 | 2018-09-18 | 河海大学 | Multi-scale diffusion well-marked target detection method based on background Yu target priori |
CN108764163A (en) * | 2018-05-30 | 2018-11-06 | 合肥工业大学 | CFAR detection methods based on gray scale correlation properties under target-rich environment |
Non-Patent Citations (1)
Title |
---|
WANG, BOYANG: "Research on Infrared Small Target Detection Technology Based on the Multi-directional Annular Gradient Method", China Master's Theses Full-text Database (Information Science and Technology) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112489034A (en) * | 2020-12-14 | 2021-03-12 | 广西科技大学 | Modeling method based on time domain information characteristic space background |
CN112967305A (en) * | 2021-03-24 | 2021-06-15 | 南京莱斯电子设备有限公司 | Image cloud background detection method under complex sky scene |
CN112967305B (en) * | 2021-03-24 | 2023-10-13 | 南京莱斯电子设备有限公司 | Image cloud background detection method under complex sky scene |
CN115238753A (en) * | 2022-09-21 | 2022-10-25 | 西南交通大学 | Self-adaptive SHM data cleaning method based on local outlier factor |
CN115238753B (en) * | 2022-09-21 | 2022-12-06 | 西南交通大学 | Self-adaptive SHM data cleaning method based on local outlier factor |
Also Published As
Publication number | Publication date |
---|---|
CN111681262B (en) | 2021-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110427839B (en) | Video target detection method based on multi-layer feature fusion | |
Braham et al. | Semantic background subtraction | |
CN111062273B (en) | Method for tracing, detecting and alarming remaining articles | |
CN111681262B (en) | Method for detecting infrared dim target under complex background based on neighborhood gradient | |
CN105654516B (en) | Satellite image based on target conspicuousness is to ground weak moving target detection method | |
CN104835145B (en) | Foreground detection method based on adaptive Codebook background models | |
CN111260738A (en) | Multi-scale target tracking method based on relevant filtering and self-adaptive feature fusion | |
CN109919026A (en) | A kind of unmanned surface vehicle local paths planning method | |
Patil et al. | Motion saliency based generative adversarial network for underwater moving object segmentation | |
CN111666871A (en) | Improved YOLO and SIFT combined multi-small-target detection and tracking method for unmanned aerial vehicle | |
CN109886079A (en) | A kind of moving vehicles detection and tracking method | |
Niu et al. | A moving objects detection algorithm based on improved background subtraction | |
CN110633727A (en) | Deep neural network ship target fine-grained identification method based on selective search | |
CN112862845A (en) | Lane line reconstruction method and device based on confidence evaluation | |
CN110827262B (en) | Weak and small target detection method based on continuous limited frame infrared image | |
CN107122732B (en) | High-robustness rapid license plate positioning method in monitoring scene | |
CN112613565B (en) | Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating | |
CN112053385B (en) | Remote sensing video shielding target tracking method based on deep reinforcement learning | |
CN113516713A (en) | Unmanned aerial vehicle self-adaptive target tracking method based on pseudo twin network | |
CN111428573B (en) | Infrared weak and small target detection false alarm suppression method under complex background | |
Hodne et al. | Detecting and suppressing marine snow for underwater visual slam | |
CN116229359A (en) | Smoke identification method based on improved classical optical flow method model | |
CN111275733A (en) | Method for realizing rapid tracking processing of multiple ships based on deep learning target detection technology | |
Zhou et al. | Dynamic background subtraction using spatial-color binary patterns | |
Cheng et al. | A novel improved ViBe algorithm to accelerate the ghost suppression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||