Lightweight CFARNets for Landmine Detection in Ultrawideband SAR
Figures
Figure 1. Demonstration of the sliding window mode in CA-CFAR.
Figure 2. A systematic view of the CA operation. (a) Convolution-based implementation; (b) mean pooling-based implementation.
Figure 3. A systematic view of the CFAR filter.
Figure 4. CFAR blocks. (a) Single-branch CFAR block; (b) Inception CFAR block I; (c) Inception CFAR block II.
Figure 5. Proposed CFAR block-based network architectures. A 48 × 48 single-channel SAR image is taken as an example.
Figure 6. Two-stage target detection framework.
Figure 7. Multi-crop classification.
Figure 8. Collected SAR images with landmines. The image sizes are: (a) 3751 × 5002; (b) 5001 × 5001; (c) 3166 × 2643; (d) 5001 × 5001.
Figure 9. Examples of image patches: (a–d) local parts of the minefield.
Figure 10. Processing flow of the experiment.
Figure 11. Landmine detection results of CFAR-A-CFARNet. (a–f) Local parts of the minefield. The rectangles indicate the detection results: green rectangle, detected target; red rectangle, false alarm; yellow rectangle, missed target.
Figure 12. Speed–accuracy curve of the detectors.
Figure 13. Detection metrics with different receptive fields.
Figure 14. Detection metrics with different probabilities of false alarm. (a) CFAR; (b) CFAR-A-CFARNet; (c) CFAR-B-CFARNet; (d) CFAR-C-CFARNet.
Figure 15. Examples of image patches. (a) Landmine; (b) clutter.
Figure 16. Feature maps of the landmine image. (a) Stage 1; (b) stage 2; (c) stage 3; (d) stage 4.
Figure 17. Feature maps of the clutter image. (a) Stage 1; (b) stage 2; (c) stage 3; (d) stage 4.
Abstract
1. Introduction
1.1. Traditional Detection Methods
1.2. Deep Learning-Based Methods
1.3. Contribution of This Work
- (1) We find that cell averaging CFAR (CA-CFAR) can be implemented via mean pooling, which not only accelerates computation but also allows CA-CFAR to serve as an interpretable network module. Based on this finding, we propose the CFAR filter, a new class of filter that captures the divergence of SAR images.
- (2) We propose CFAR blocks, built from CFAR filters and other nonlinear filters, which can replace standard convolutional layers. We also propose lightweight CFARNets based on these CFAR blocks, which have low complexity and few parameters.
- (3) We propose a two-stage landmine detection method based on CFARNets. Since CFARNets are built on the CFAR filter, which has a definite physical meaning, they can efficiently exploit SAR characteristics and are interpretable.
- (4) We carry out experiments on landmine data. The proposed CFARNet-based detection method performs comparably to YOLO detectors while offering higher inference speed.
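The mean-pooling view of CA-CFAR in contribution (1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the reference/guard window sizes (9 and 3) are assumptions, and the window means are computed with an integral image, which is equivalent to mean pooling with stride 1.

```python
import numpy as np

def box_mean(img, k):
    """Mean over a k x k window at every pixel (edge-padded), via an integral image."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    # integral image with a leading row/column of zeros
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)
    h, w = img.shape
    s = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
         - ii[k:k + h, :w] + ii[:h, :w])
    return s / (k * k)

def cfar_filter(img, ref=9, guard=3):
    """CA-CFAR statistic per pixel: the cell value minus the clutter mean
    estimated from the reference ring (reference window minus guard window).
    Both window sums come from mean pooling, so the whole filter is two
    pooling operations and elementwise arithmetic."""
    n_ref, n_guard = ref * ref, guard * guard
    ring_sum = box_mean(img, ref) * n_ref - box_mean(img, guard) * n_guard
    clutter_mean = ring_sum / (n_ref - n_guard)
    return img - clutter_mean
```

Because every step is a pooling or elementwise operation, the filter runs on a GPU like any other network layer, which is what makes it usable as a network module.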
2. CFAR Filter Based on CA-CFAR
2.1. Review of CA-CFAR
2.2. CFAR Filter
3. CFAR Blocks and Deep CFAR Networks
3.1. CFAR Blocks
- (1) Single-branch CFAR (S-CFAR) block
- (2) Multiple-branch CFAR block
3.2. Deep CFAR Networks
4. Two-Stage Target Detection Based on Deep CFAR Networks
4.1. Two-Stage Detection Framework
4.2. CFAR-Guided Region Proposals
4.3. Multi-Crop Classification
- (1) Eager mode. A region proposal is judged a target if any of the three crops is classified as a target, i.e., the maximum target probability exceeds 0.5.
- (2) Steady mode. A region proposal is judged a target if the mean target probability over the three crops exceeds 0.5.
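The two fusion rules above can be sketched in a few lines. The function name and signature are illustrative, not from the paper:

```python
def multi_crop_decision(probs, mode="steady", thresh=0.5):
    """Fuse per-crop target probabilities for one region proposal.
    probs: target probabilities of the crops (e.g., three crops per proposal).
    Eager mode fires if ANY crop exceeds the threshold (max fusion);
    steady mode requires the AVERAGE to exceed it (mean fusion)."""
    if mode == "eager":
        score = max(probs)
    elif mode == "steady":
        score = sum(probs) / len(probs)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return score > thresh
```

Eager mode trades a higher recall for more false alarms; steady mode does the opposite, since one confident crop is no longer enough.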
5. Experimental Results
5.1. Dataset
5.2. Experimental Setup
- (1) Randomly crop a rectangular region whose aspect ratio is sampled uniformly from [3/4, 4/3] and whose area is sampled from [80%, 100%] of the original, then resize the crop to a 48 × 48 square image.
- (2) Flip horizontally with probability 0.5.
- (3) Scale the brightness by a coefficient drawn uniformly from [0.6, 1.4].
- (4) Normalize the gray image by subtracting 0.5.
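The four augmentation steps can be sketched as a single function. This is an assumed re-implementation for illustration, not the authors' training code; the nearest-neighbour resize stands in for whatever interpolation the original pipeline used:

```python
import numpy as np

def augment(img, rng):
    """Training-time augmentation for a 2-D grayscale patch with values in [0, 1]."""
    h, w = img.shape
    # (1) random crop: aspect ratio in [3/4, 4/3], area in [80%, 100%]
    ratio = rng.uniform(3 / 4, 4 / 3)
    area = rng.uniform(0.8, 1.0) * h * w
    ch = min(h, int(round(np.sqrt(area / ratio))))
    cw = min(w, int(round(np.sqrt(area * ratio))))
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    crop = img[top:top + ch, left:left + cw]
    # resize the crop to 48 x 48 (nearest neighbour for brevity)
    rows = (np.arange(48) * ch / 48).astype(int)
    cols = (np.arange(48) * cw / 48).astype(int)
    out = crop[np.ix_(rows, cols)]
    # (2) horizontal flip with probability 0.5
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # (3) brightness scaling in [0.6, 1.4]
    out = out * rng.uniform(0.6, 1.4)
    # (4) zero-center the gray values
    return out - 0.5
```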
5.3. Detection Results
- (1) The proposed two-stage detection method
- (2) Comparison with other detectors
- (3) Influence of the receptive field
- (4) Influence of the CFAR parameter
5.4. Feature Analysis
6. Discussion
- (1) Using only the nonlinearity of the 1 × 1 convolution layer is insufficient to build a good network model; other filters are essential.
- (2) By combining CFAR filters with 1 × 1 convolution layers, the image's multi-dimensional divergence features are extracted layer by layer, yielding a high-performance network for SAR landmine detection.
- (3) Compared with other real-time state-of-the-art detectors, the proposed CFARNets achieve comparable performance in terms of F1 score with a significant reduction in parameters and FLOPs.
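Point (2) above, the pairing of a parameter-free CFAR filter with a learnable 1 × 1 convolution, can be sketched as one stage. This is an illustrative approximation of an S-CFAR-style block, not the paper's exact architecture; the window sizes and the `mix` weight matrix are assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def cfar_block(x, mix, ref=9, guard=3):
    """One S-CFAR-style stage: a parameter-free CFAR filter applied per
    channel, followed by a learnable 1x1 convolution (channel mixing)
    and ReLU.  x: (C, H, W) feature maps; mix: (C_out, C) 1x1 weights."""
    pad, off = ref // 2, (ref - guard) // 2
    filtered = np.empty_like(x)
    for c in range(x.shape[0]):
        p = np.pad(x[c], pad, mode="edge")
        win = sliding_window_view(p, (ref, ref))            # (H, W, ref, ref)
        # clutter mean from the reference ring: full window minus guard window
        ring = win.sum((-1, -2)) - win[..., off:off + guard,
                                       off:off + guard].sum((-1, -2))
        clutter = ring / (ref * ref - guard * guard)
        filtered[c] = x[c] - clutter                        # divergence feature
    y = np.tensordot(mix, filtered, axes=([1], [0]))        # 1x1 convolution
    return np.maximum(y, 0.0)                               # ReLU
```

Stacking such stages extracts divergence features layer by layer, while the only trainable parameters are the 1 × 1 mixing weights, which is where the parameter savings over standard convolutions come from.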
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix A.1. TinyResNet-18
Layer Name | Output Size | Parameter Setting |
---|---|---|
conv1 | 48 × 48 | 3 × 3, 64, stride 1 |
conv2_x | 24 × 24 | 3 × 3 max pool, stride 2 |
conv3_x | 12 × 12 | |
conv4_x | 6 × 6 | |
conv5_x | 3 × 3 | |
classifier | 1 × 1 | average pool, 2-d fc, softmax |
Appendix A.2. A-ConvNets48
Layer Name | Output Size | Parameter Setting |
---|---|---|
conv1 | 48 × 48 | 5 × 5, 16 |
maxpool1 | 24 × 24 | 2 × 2 |
conv2 | 24 × 24 | 5 × 5, 32 |
maxpool2 | 12 × 12 | 2 × 2 |
conv3 | 12 × 12 | 6 × 6, 64 |
maxpool3 | 6 × 6 | 2 × 2 |
conv4 | 3 × 3 | 4 × 4, 128 |
conv5 | 1 × 1 | 3 × 3, 2 |
Appendix A.3. Conv1x1Net
Stages | A-CFARNet | B-CFARNet | C-CFARNet |
---|---|---|---|
stage 1 | S-CFAR | IN-CFAR-I | IN-CFAR-II |
stage 2 | S-CFAR | IN-CFAR-I | IN-CFAR-II |
stage 3 | S-CFAR | IN-CFAR-I | IN-CFAR-II |
stage 4 | S-CFAR | IN-CFAR-I | IN-CFAR-II |
Dataset | Train | Test |
---|---|---|
NO. of images | 55 | 6 |
NO. of landmines | 297 | 58 |
Method | Recall (%) | Precision (%) | F1 (%) |
---|---|---|---|
Two-parameter CA-CFAR | 98.28 | 53.27 | 69.09 |
CFAR-Conv1x1Net | 51.72 | 62.50 | 56.60 |
CFAR-A-ConvNets48 | 98.28 | 71.25 | 82.61 |
CFAR-TinyResNet-18 | 96.55 | 72.73 | 82.96 |
CFAR-A-CFARNet | 94.83 | 75.34 | 83.97 |
CFAR-B-CFARNet | 93.10 | 75.00 | 83.08 |
CFAR-C-CFARNet | 91.38 | 81.54 | 86.18 |
Method | Recall (%) | Precision (%) | F1 (%) |
---|---|---|---|
CFAR-Conv1x1Net | 63.79 | 56.06 | 59.68 |
CFAR-A-ConvNets48 | 98.28 | 65.52 | 78.62 |
CFAR-TinyResNet-18 | 98.28 | 70.37 | 82.01 |
CFAR-A-CFARNet | 96.55 | 72.73 | 82.96 |
CFAR-B-CFARNet | 96.55 | 70.89 | 81.75 |
CFAR-C-CFARNet | 91.38 | 76.81 | 83.46 |
Method | Recall (%) | Precision (%) | F1 (%) |
---|---|---|---|
CFAR-Conv1x1Net | 44.83 | 60.47 | 51.49 |
CFAR-A-ConvNets48 | 98.28 | 73.08 | 83.82 |
CFAR-TinyResNet-18 | 96.55 | 76.71 | 85.50 |
CFAR-A-CFARNet | 93.10 | 83.08 | 87.80 |
CFAR-B-CFARNet | 96.55 | 75.68 | 84.85 |
CFAR-C-CFARNet | 89.66 | 82.54 | 85.95 |
Method | Recall (%) | Precision (%) | F1 (%) | Latency (ms) |
---|---|---|---|---|
Faster R-CNN | 96.55 | 81.16 | 88.19 | 165.6 |
YOLOv3 | 98.28 | 77.03 | 86.36 | 24.3 |
YOLOX-S | 94.83 | 68.75 | 79.71 | 23.5 |
YOLOX-Tiny | 68.97 | 78.43 | 73.39 | 22.2 |
RTMDet-S | 93.10 | 75.00 | 83.08 | 25.2 |
RTMDet-Tiny | 91.38 | 69.74 | 79.10 | 23.4 |
Model | #Params (M) | FLOPs (G) |
---|---|---|
Conv1x1Net | 0.162 | 0.003 |
A-ConvNets48 | 1.371 | 0.023 |
TinyResNet-18 | 11.681 | 0.315 |
A-CFARNet | 0.162 | 0.004 |
B-CFARNet | 0.143 | 0.002 |
C-CFARNet | 0.143 | 0.002 |
Faster R-CNN | 32.963 | 758 |
YOLOv3 | 61.524 | 48.534 |
YOLOX-S | 8.938 | 8.524 |
YOLOX-Tiny | 5.033 | 4.845 |
RTMDet-S | 8.856 | 9.440 |
RTMDet-Tiny | 4.873 | 5.136 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Y.; Song, Y.; Jin, T. Lightweight CFARNets for Landmine Detection in Ultrawideband SAR. Remote Sens. 2023, 15, 4411. https://doi.org/10.3390/rs15184411