Fast and Robust Infrared Small Target Detection Using Weighted Local Difference Variance Measure
Figure 1. The proposed algorithm framework.
Figure 2. (a) The double-layer window. (b) The tri-layer window. (c) The new tri-layer filtering window. (d) Gaussian distribution characteristics of an IR small target. (e) Situations that the algorithm needs to handle.
Figure 3. SMs of different algorithms. (a1–a9) Nine representative IR images. (b1–b9) LCM. (c1–c9) MPCM. (d1–d9) RLCM. (e1–e9) TLLCM. (f1–f9) VAR_DIFF. (g1–g9) ADMD. (h1–h9) WSLCM. (i1–i9) Proposed method. The corresponding overbarred panels (a̅1–a̅9, …, i̅1–i̅9) are the 3D gray distribution maps of (a1–a9), …, (i1–i9), respectively.
Figure 4. The ROC curves of different detection methods. (a–i) The experimental results of datasets 1–9.
Figure 5. (a) Representative images of raw IR dataset 3. (b) Gaussian white noise. (c) Poisson noise. (d) Rayleigh noise. (e) Multiplicative noise. (f) Uniform noise. (a1–f1) The 3D gray distribution maps of (a–f).
Figure 6. The number of missed images for different algorithms under the influence of noise.
Figure 7. The number of false-alarm images for different algorithms under the influence of noise.
Abstract
1. Introduction
- A new tri-layer filtering window is proposed. The target area is divided according to its distribution characteristics and size, so the window adapts to targets of different scales and shortens detection time.
- The window intensity level (WIL) is proposed. Each window layer uses the mean of the two largest subblock averages, instead of the single largest subblock average, to capture the target more reliably and suppress edge noise.
- The local difference variance measure (LDVM) is proposed. Based on the idea of local fluctuation, it further enhances the target area and eliminates high-brightness background.
- A detection framework based on the WLDVM is proposed. Experiments on multiple sets of IR datasets show that the proposed algorithm achieves the best detection performance while consuming less time.
2. Proposed Algorithm
2.1. Gaussian Filtering Pre-Processing
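As a concrete illustration of this pre-processing step, the snippet below applies isotropic Gaussian smoothing to the raw IR frame before any local-contrast computation. The use of `scipy.ndimage.gaussian_filter` and the value of `sigma` are illustrative assumptions, not the paper's exact kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_preprocess(ir_image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Smooth the raw IR frame so pixel-level noise is attenuated while the
    compact, Gaussian-shaped small target is preserved.

    sigma is an assumed value; the paper may use a fixed convolution kernel.
    """
    return gaussian_filter(ir_image.astype(np.float64), sigma=sigma)
```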
2.2. Construction of the New Tri-Layer Filtering Window
2.3. Calculation of the Window Intensity Level (WIL)
1. For the inner layer:
2. For the middle and outer layers (see the illustrative sketch after this list):
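As a rough illustration of the rule stated in the contributions (each layer uses the mean of its two largest subblock averages), the sketch below computes layer intensity levels over a square patch. The 3 × 3 subblock layout, the `cell` parameter, and treating the inner layer as a single center subblock are assumptions made for illustration, not the paper's exact geometry.

```python
import numpy as np

def layer_intensity_level(subblock_means: np.ndarray) -> float:
    """Window intensity level (WIL) of one layer: the mean of the two largest
    subblock averages, which captures the target while suppressing isolated
    bright edge pixels better than a single maximum would."""
    two_largest = np.sort(np.asarray(subblock_means, dtype=np.float64))[-2:]
    return float(two_largest.mean())

def window_intensity_levels(patch: np.ndarray, cell: int) -> tuple:
    """Split a square patch (assumed 3*cell x 3*cell) into a 3x3 grid of
    subblocks and return the intensity level of the center cell and of the
    surrounding ring; the paper's tri-layer geometry may differ."""
    means = np.array([
        patch[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell].mean()
        for r in range(3) for c in range(3)
    ])
    inner = float(means[4])               # center subblock (inner layer)
    ring = np.delete(means, 4)            # eight surrounding subblocks
    return inner, layer_intensity_level(ring)
```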
2.4. Local Difference Variance Measure (LDVM)
2.5. Weighting Function
2.6. Weighted Local Difference Variance Measure (WLDVM)
2.7. Threshold Operation
- For a pixel in the real target area: since the target area usually presents a compact two-dimensional Gaussian shape, both its LDVM and its weight will be very large. Hence, the resulting WLDVM value will be large.
- For a pixel in the pure background area: since the pure background is usually continuous and evenly distributed, its LDVM and its weight will both be close to zero. Therefore, its WLDVM will also be close to zero.
- For a pixel at the edge of the background: its LDVM may be greater than 0 but is much smaller than that of the true target; in addition, its weight may be greater than 1, but the enhancement it produces differs little from the local background estimation, so the weight remains smaller than the true target's. Hence, its WLDVM is much smaller than that of the true target.
- For a pixel in the PNHB (pixel-sized noise with high brightness) area: its local fluctuation is weaker than that of the true target, so its LDVM will be smaller than the true target's; in addition, its weight will also be smaller than the true target's. Hence, its WLDVM is much smaller than that of the true target. (A sketch combining the weight and the LDVM, followed by the threshold operation, is given after this list.)
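The qualitative analysis above can be condensed into a short sketch that combines the weight and the LDVM into the final WLDVM map and then segments it. The multiplicative combination and the mean-plus-k-standard-deviations threshold are common choices in this family of detectors and are assumptions here, not the paper's exact formulas; `k = 4.0` is likewise an illustrative value.

```python
import numpy as np

def wldvm_map(ldvm: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Weighted local difference variance measure: large only where both the
    local difference variance (target-like fluctuation) and the weight
    (compact, Gaussian-shaped response) are large, as argued above."""
    return ldvm * weight

def segment_targets(saliency: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Adaptive threshold on the saliency map.  T = mean + k * std is the
    usual rule in local-contrast detectors; k = 4.0 is an assumed value."""
    threshold = saliency.mean() + k * saliency.std()
    return saliency > threshold
```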
3. Experimental Results
- LCM had a TPR greater than 0.9 and less than 1 in dataset 4, and performed poorly in other datasets;
- MPCM had a TPR greater than 0.9 and less than 1 in dataset 5, and performed poorly in other datasets;
- RLCM achieved the highest TPR in dataset 2, and performed poorly in other datasets;
- TLLCM achieved the highest TPR in dataset 2 and a TPR greater than 0.9 and less than 1 in dataset 8, while performing poorly in other datasets;
- VAR_DIFF achieved the highest TPR in datasets 2 and 4, a TPR greater than 0.8 and less than 1 in datasets 1, 7, 8, and 9, and performed poorly in the remaining datasets;
- ADMD achieved the highest TPR in dataset 7, while performing poorly in other datasets;
- WSLCM achieved the highest TPR in datasets 2, 4, 6, 7, and 8, and the TPR was greater than 0.8 and less than 1 in datasets 1, 3, 5, and 9;
- The proposed algorithm achieved the highest TPR in all nine datasets (a sketch of how TPR and FPR are typically computed follows this list).
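For reference, here is a hedged sketch of the bookkeeping behind such TPR comparisons and the ROC curves in Figure 4. The pixel-level definitions below are a simplifying assumption; the paper may instead count whole targets and normalize false alarms differently.

```python
import numpy as np

def tpr_fpr(detection_mask: np.ndarray, truth_mask: np.ndarray) -> tuple:
    """True-positive rate and false-positive rate for one frame.
    TPR: fraction of ground-truth target pixels that are detected.
    FPR: fraction of background pixels wrongly flagged.
    Both inputs are boolean masks of the same shape."""
    det = detection_mask.astype(bool)
    gt = truth_mask.astype(bool)
    tpr = det[gt].sum() / max(int(gt.sum()), 1)
    fpr = det[~gt].sum() / max(int((~gt).sum()), 1)
    return float(tpr), float(fpr)
```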
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kastberger, G.; Stachl, R. Infrared imaging technology and biological applications. Behav. Res. Methods Instrum. Comput. 2003, 35, 429–439.
- Jong, A. IRST and its perspective. Proc. SPIE Int. Soc. Opt. Eng. 1995, 2552, 206–213.
- Fang, B.; Chen, T. Study on the Key Techniques of the Imaging Infrared Guidance for AAM. Infrared Technol. 2003, 25, 45–48.
- Malanowski, M.; Kulpa, K. Detection of moving targets with continuous-wave noise radar: Theory and measurements. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3502–3509.
- Chen, C.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2013, 52, 574–581.
- Yao, S.; Chang, Y.; Qin, X. A Coarse-to-Fine Method for Infrared Small Target Detection. IEEE Geosci. Remote Sens. Lett. 2019, 16, 256–260.
- Zhang, P.; Wang, X.; Wang, X.; Fei, C.; Guo, Z. Infrared small target detection based on spatial-temporal enhancement using quaternion discrete cosine transform. IEEE Access 2019, 7, 54712–54723.
- Nie, J.; Qu, S.; Wei, Y.; Zhang, L.; Deng, L. An Infrared Small Target Detection Method Based on Multiscale Local Homogeneity Measure. Infrared Phys. Technol. 2018, 90, 186–194.
- Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; Hauptmann, A. Infrared patch-image model for small target detection in a single image. IEEE Trans. Image Process. 2013, 22, 4996–5009.
- Qin, Y.; Li, B. Effective infrared small target detection utilizing a novel local contrast method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1890–1894.
- Shao, X.; Fan, H.; Lu, G.; Xu, J. An improved infrared dim and small target detection algorithm based on the contrast mechanism of human visual system. Infrared Phys. Technol. 2012, 55, 403–408.
- Rivest, J.; Fortin, R. Detection of dim targets in digital infrared imagery by morphological image processing. Opt. Eng. 1996, 35, 1886–1893.
- Bai, X.; Zhou, F. Analysis of new top-hat transformation and the application for infrared dim small target detection. Pattern Recognit. 2010, 43, 2145–2156.
- Hou, W.; Lei, Z.; Yu, Q.; Liu, X. Small target detection using main directional suppression high pass filter. Optik 2014, 125, 3017–3022.
- Xin, Y.; Zhou, J.; Chen, Y. Dual multi-scale filter with SSS and GW for infrared small target detection. Infrared Phys. Technol. 2017, 81, 97–108.
- Dai, Y.; Wu, Y. Reweighted infrared patch-tensor model with both nonlocal and local priors for single-frame small target detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3752–3767.
- He, Y.; Li, M.; Zhang, J.; An, Q. Small infrared target detection based on low-rank and sparse representation. Infrared Phys. Technol. 2015, 68, 98–109.
- Zheng, C.; Li, H. Small infrared target detection based on low-rank and sparse matrix decomposition. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Zurich, Switzerland, 2013; Volume 239, pp. 214–218.
- Dai, Y.; Wu, Y.; Song, Y. Infrared small target and background separation via column-wise weighted robust principal component analysis. Infrared Phys. Technol. 2016, 77, 421–430.
- Zhang, Y.; Zheng, Y.; Li, X. Multi-Scale Strengthened Directional Difference Algorithm Based on the Human Vision System. Sensors 2022, 22, 10009.
- Wang, H.; Zhou, L.; Wang, L. Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 8509–8518.
- Xu, X.; Sun, Y.; Ding, L.; Yang, F. A Novel Infrared Small Target Detection Algorithm Based on Deep Learning. In Proceedings of the 2020 4th International Conference on Advances in Image Processing, Chengdu, China, 13–15 November 2020; pp. 8–14.
- Lin, L.; Wang, S.; Tang, Z. Using deep learning to detect small targets in infrared oversampling images. J. Syst. Eng. Electron. 2018, 29, 947–952.
- Ryu, J.; Kim, S. Small infrared target detection by data-driven proposal and deep learning-based classification. In Proceedings of the Infrared Technology and Applications XLIV, Orlando, FL, USA, 16–19 April 2018; SPIE: Bellingham, WA, USA, 2018; Volume 10624, pp. 134–143.
- Han, J.; Ma, Y.; Zhou, B.; Fan, F.; Liang, K.; Fang, Y. A robust infrared small target detection algorithm based on human visual system. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2168–2172.
- Han, J.; Liang, K.; Zhou, B.; Zhu, X.; Zhao, J.; Zhao, L. Infrared small target detection utilizing the multiscale relative local contrast measure. IEEE Geosci. Remote Sens. Lett. 2018, 15, 612–616.
- Han, J.; Yu, Y.; Liang, K.; Zhang, H. Infrared small-target detection under complex background based on subblock-level ratio-difference joint local contrast measure. Opt. Eng. 2018, 57, 103105.
- Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226.
- Han, J.; Moradi, S.; Faramarzi, I.; Liu, C.; Zhang, H.; Zhao, Q. A Local Contrast Method for Infrared Small-Target Detection Utilizing a Tri-Layer Window. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1822–1826.
- Moradi, S.; Moallem, P.; Sabahi, M.F. Fast and robust small infrared target detection using absolute directional mean difference algorithm. Signal Process. 2020, 117, 107727.
- Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Entropy-based window selection for detecting dim and small infrared targets. Pattern Recognit. 2017, 61, 66–77.
- Nasiri, M.; Chehresa, S. Infrared small target enhancement based on variance difference. Infrared Phys. Technol. 2017, 82, 107–119.
- Liu, J.; He, Z.; Chen, Z.; Shao, L. Tiny and dim infrared target detection based on weighted local contrast. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1780–1784.
- Lv, P.; Sun, S.; Lin, C.; Liu, G. A method for weak target detection based on human visual contrast mechanism. IEEE Geosci. Remote Sens. Lett. 2019, 16, 261–265.
- Han, J.; Moradi, S.; Faramarzi, I.; Zhang, H.; Zhao, Q.; Zhang, X.; Li, N. Infrared small target detection based on the weighted strengthened local contrast measure. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1670–1674.
- Moradi, S.; Moallem, P.; Sabahi, M. Scale-space point spread function based framework to boost infrared target detection algorithms. Infrared Phys. Technol. 2016, 77, 27–34.
- Guan, X.; Peng, Z.; Huang, S.; Chen, Y. Gaussian scale-space enhanced local contrast measure for small infrared target detection. IEEE Geosci. Remote Sens. Lett. 2019, 17, 327–331.
- Du, P.; Hamdulla, A. Infrared moving small-target detection using spatial–temporal local difference measure. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1817–1821.
- Wu, W.; Hamdulla, A. Space-time-based Detection of Infrared Small Moving Target. J. Appl. Sci. Eng. 2020, 23, 497–507.
- Rawat, S.; Verma, K.; Kumar, Y. Review on recent development in infrared small target detection algorithms. Procedia Comput. Sci. 2020, 167, 2496–2505.
- Shahraki, H.; Aalaei, S.; Moradi, S. Infrared small target detection based on the dynamic particle swarm optimization. Infrared Phys. Technol. 2021, 117, 103837.
- Hui, B.; Song, Z.; Fan, H.; Zhong, P.; Hu, W.; Zhang, X.; Zhang, Y. A dataset for infrared image dim-small aircraft target detection and tracking under ground/air background. Sci. Data Bank 2019, 5, 12.
- Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Asymmetric contextual modulation for infrared small target detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 950–959.
- Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Infrared small-target detection using multiscale gray difference weighted image entropy. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 60–72.
| Dataset | Number of Images | Image Size | Target Size | Target Number | Dataset Type | Target Detail | Background Detail |
|---|---|---|---|---|---|---|---|
| Dataset 1 [38] | 170 | | | 1 | Real sequence | Point target, incomplete occlusion | Complex clouds, complex background |
| Dataset 2 [39] | 429 | | | 1 | Simulated sequence | Point target, low contrast | Heavy noise, dim background |
| Dataset 3 [5] | 30 | | | 1 | Real sequence | Fast-moving, low contrast | Heavy noise, dense clouds |
| Dataset 4 [40] | 39 | | | 1 | Real sequence | Aircraft target, fast-moving | Changing background, thin clouds |
| Dataset 5 [41] | 50 | | | 1 | Simulated sequence | Point target, incomplete occlusion | Remaining almost the same, thin clouds |
| Dataset 6 [41] | 50 | | | 1 | Simulated sequence | Point target, fast-moving | Heavy noise, complex background |
| Dataset 7 [41] | 50 | | | 1 | Simulated sequence | Point target, low contrast | Multiple buildings, heavy noise |
| Dataset 8 [42] | 100 | | | 1 | Simulated sequence | Point target, continuously moving | Heavy noise, land background |
| Dataset 9 [43] | 152 | Variety | | 1 | Non-sequential | Variety | Variety |
| Dataset | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 68 | 7 | 4 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 130 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 30 | 3 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | 50 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | 50 | 11 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 7 | 50 | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | 100 | 100 | 57 | 14 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
| 9 | 125 | 20 | 6 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 37 | 118 | 170 |
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 40 | 294 | 429 |
| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 8 | 24 | 30 |
| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 27 | 39 |
| 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 9 | 24 | 50 |
| 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 27 | 50 |
| 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 29 | 50 |
| 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 54 | 92 | 100 |
| 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | 32 | 83 | 152 |
| Metric | Dataset | LCM | MPCM | RLCM | TLLCM | VAR_DIFF | ADMD | WSLCM | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| SCRG | 1 | 0.6001 | 2.0229 | 0.7273 | 2.2439 | 28.3396 | 25.9264 | 26.8277 | 44.0452 |
| | 2 | 0.3256 | 2.2193 | 0.6574 | 1.6246 | 14.1093 | 11.7270 | 25.3549 | 27.8967 |
| | 3 | 0.6322 | 1.0671 | 0.7903 | 1.0808 | 0.9182 | 16.6085 | 30.5386 | 105.4981 |
| | 4 | 0.2055 | 0.2958 | 0.2443 | 0.8030 | 6.9673 | 0.3418 | 3.8525 | 18.2806 |
| | 5 | 0.8549 | 0.6388 | 0.7918 | 0.8555 | 4.5399 | 23.1745 | 21.2844 | 51.1140 |
| | 6 | 0.7983 | 1.1333 | 0.9134 | 1.2898 | 5.3268 | 21.8810 | 52.8131 | 38.8965 |
| | 7 | 0.3171 | 1.6963 | 0.4042 | 0.7671 | 1.4541 | 3.2991 | 15.9780 | 9.5035 |
| | 8 | 0.4021 | 3.4164 | 0.6887 | 1.5680 | 2.5136 | 10.5827 | 30.0565 | 39.5189 |
| | 9 | 0.3886 | 1.8286 | 0.7166 | 1.9683 | 24.2053 | 12.2716 | 21.7523 | 23.6554 |
| | Mean | 0.5027 | 1.5909 | 0.6593 | 1.3557 | 9.8193 | 13.9792 | 25.3842 | 39.8232 |
| BSF | 1 | 0.3185 | 1.6783 | 0.8350 | 0.6472 | 115.0330 | 12.6716 | 372.7078 | 1144.7466 |
| | 2 | 0.9805 | 3.6617 | 2.0815 | 1.7074 | 212.7321 | 17.2603 | 758.0712 | 3894.0449 |
| | 3 | 2.1748 | 6.4061 | 2.3868 | 1.8011 | 38.5715 | 23.4905 | 37.8834 | 35.9936 |
| | 4 | 2.2214 | 11.2676 | 5.7276 | 3.3599 | 100.8143 | 28.1395 | 3704.5536 | 572.1345 |
| | 5 | 0.5851 | 2.7340 | 1.3340 | 0.8581 | 30.9868 | 21.4587 | 125.5700 | 151.6036 |
| | 6 | 1.0318 | 4.0402 | 1.3912 | 1.1269 | 22.3591 | 105.0573 | 165.3070 | 89.8998 |
| | 7 | 0.4319 | 1.8672 | 0.6975 | 0.5215 | 8.7144 | 14.0430 | 144.6582 | 18.4970 |
| | 8 | 1.4764 | 6.7578 | 2.5375 | 2.3804 | 29.9505 | 22.9731 | 241.0814 | 38.1197 |
| | 9 | 1.4103 | 8.2744 | 3.5525 | 2.5970 | 974.0334 | 117.4275 | 2401.4376 | 2206.5812 |
| | Mean | 1.1812 | 5.1875 | 2.2826 | 1.6666 | 170.3550 | 40.2802 | 883.4745 | 905.7357 |
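For readers unfamiliar with the two metrics tabulated above, the sketch below gives the signal-to-clutter ratio gain (SCRG) and background suppression factor (BSF) in their commonly used forms. Computing the background statistics over the whole image rather than over a local neighbourhood around the target is a simplifying assumption; the paper's exact windowing may differ.

```python
import numpy as np

def scr(image: np.ndarray, target_mask: np.ndarray) -> float:
    """Signal-to-clutter ratio: |target mean - background mean| / background std."""
    tgt = target_mask.astype(bool)
    bg = image[~tgt]
    return abs(image[tgt].mean() - bg.mean()) / (bg.std() + 1e-12)

def scrg_bsf(raw: np.ndarray, saliency: np.ndarray, target_mask: np.ndarray) -> tuple:
    """SCRG = SCR_out / SCR_in and BSF = sigma_in / sigma_out, where 'in' is
    the raw IR image and 'out' is the saliency map (SM)."""
    scrg = scr(saliency, target_mask) / (scr(raw, target_mask) + 1e-12)
    bg = ~target_mask.astype(bool)
    bsf = raw[bg].std() / (saliency[bg].std() + 1e-12)
    return float(scrg), float(bsf)
```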
| Item | Configuration |
|---|---|
| Operating System | Windows 10 21H1, x64 |
| MATLAB version | MATLAB R2020a |
| CPU | Intel Core i7-10875H @ 2.30 GHz |
| Memory | 16.0 GB |
Method | LCM | MPCM | RLCM | TLLCM | VAR_DIFF | ADMD | WSLCM | Proposed |
---|---|---|---|---|---|---|---|---|
Time (s) | 0.0274 | 0.0312 | 1.1384 | 0.3216 | 0.0068 | 0.0167 | 1.4780 | 0.0153 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).