Target Classification in Synthetic Aperture Radar Images Using Quantized Wavelet Scattering Networks
Figure 1. Structure of a wavelet scattering network which computes the windowed scattering transform. Reprinted with permission from ref. [27]. Copyright © 2013 IEEE.
Figure 2. WSN paths with J = 4, L = 2, and M = 2.
Figure 3. Location of the q-layers in an M = 2 WSN where L = 1.
Figure 4. Example of a quantization scheme for a WSN with J = 5 and L = 1 at the input of each filter for all possible s-layers in the network: (a) filter scales ψ_λ and ϕ_J; (b) log₂ subsampling rate d_f; (c) cumulative log₂ subsampling rate r_j = Σ d_f; (d) color key for filter scales.
Figure 5. Classes in the MSTAR dataset: (a) 2S1; (b) BRDM-2; (c) BTR-60; (d) D7; (e) SLICY; (f) T62; (g) ZIL-131; (h) ZSU-23-4.
Figure 6. Histogram of MSTAR dataset wavelet coefficients separated by class for each s-layer in a WSN for Q = 1, J = 5, and L = 8: (Top) M = 1; (Middle) M = 2; (Bottom) fitted inverse Gaussian PDF.
Figure 7. Accuracy of the networks implementing RNG-based quantization scales for 10 different initial seeds.
Figure 8. Accuracy of the quantizer scales for SNR = ∞.
Figure 9. Accuracy of the quantizer scales for infinite, 50, 20, 10, and 2 dB SNR.
Abstract
1. Introduction
2. Wavelet Scattering Networks Fundamentals
3. Quantization of a Wavelet Scattering Network
3.1. Sizes of Filter Outputs
3.2. Number of Filter Outputs per s-Layer
3.3. Quantization Scales
3.3.1. Uniform Scale
3.3.2. Log Scale
3.3.3. K-Means Scale
3.3.4. Probability Distribution Scale
3.3.5. Quantile Scale
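Sections 3.3.1–3.3.5 above enumerate five candidate quantizer scales. Since the section bodies are not reproduced here, the following sketch is only an illustration (with hypothetical function names) of how three of these scales — uniform, log, and quantile — might be constructed from a pool of wavelet scattering coefficients:

```python
import numpy as np

def uniform_scale(x, n_levels):
    """Uniform scale: n_levels equally spaced reconstruction points
    spanning the range of the coefficients."""
    return np.linspace(x.min(), x.max(), n_levels)

def log_scale(x, n_levels, eps=1e-12):
    """Log scale: levels equally spaced in log amplitude, matching the
    heavy concentration of scattering coefficients near zero."""
    lo = max(x.min(), eps)
    return np.exp(np.linspace(np.log(lo), np.log(x.max()), n_levels))

def quantile_scale(x, n_levels):
    """Quantile scale: levels placed at equally spaced empirical
    quantiles, so each level represents the same coefficient mass."""
    return np.quantile(x, np.linspace(0.0, 1.0, n_levels))

def quantize(x, levels):
    """Map each coefficient to its nearest reconstruction level."""
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    return levels[idx]
```

The k-means and probability-distribution scales listed above would replace the level-placement rule (e.g., cluster centroids, or quantiles of a fitted inverse Gaussian as in Figure 6) while keeping the same nearest-level mapping.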
4. Quantized Wavelet Scattering Network Results
4.1. Description of the MSTAR Dataset and Augmentations
4.2. Evaluation Metrics
5. Results and Discussion
5.1. Effects of RNG Seeding
5.2. Noiseless and Noisy Datasets
5.3. Comparison with the SVM and ResNet18
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Musman, S.; Kerr, D.; Bachmann, C. Automatic recognition of ISAR ship images. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1392–1404.
- Aydogan, D.B.; Hannula, M.; Arola, T.; Dastidar, P.; Hyttinen, J. 2D texture based classification, segmentation and 3D orientation estimation of tissues using DT-CWT feature extraction methods. Data Knowl. Eng. 2009, 68, 1383–1397.
- Jawahir, W.N.; Yussof, H.W.; Burkhardt, H. Relational features for texture classification. In Proceedings of the International Conference on Signal Processing, Image Processing and Pattern Recognition (SIP), Jeju Island, Korea, 28 November–2 December 2012; pp. 438–447.
- Srinivas, U.; Monga, V.; Raj, R.G. SAR ATR using discriminative graphical models. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 591–606.
- Amoon, M.; Rezai-Rad, G.A. Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features. IET Comput. Vis. 2013, 8, 77–85.
- McKay, J.; Monga, V.; Raj, R.G. Robust sonar ATR through Bayesian pose-corrected sparse classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5563–5576.
- Olson, C.F.; Huttenlocher, D.P. Automatic target recognition by matching oriented edge pixels. IEEE Trans. Image Process. 1997, 6, 103–113.
- Bhatnagar, V.; Shaw, A.; Williams, R.W. Improved automatic target recognition using singular value decomposition. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seattle, WA, USA, 12–15 May 1998; pp. 2717–2720.
- Suvorova, S.; Schroeder, J. Automated target recognition using the Karhunen–Loeve transform with invariance. Digit. Signal Process. 2002, 12, 295–306.
- Jansen, R.W.; Sletten, M.A.; Ainsworth, T.L.; Raj, R.G. Multi-channel synthetic aperture radar based classification of maritime scenes. IEEE Access 2020, 8, 127440–127449.
- Hall, D.L.; Ridder, T.D.; Narayanan, R.M. Abnormal gait detection and classification using micro-Doppler radar signatures. In Proceedings of the SPIE Conference on Radar Sensor Technology XXIII, Baltimore, MD, USA, 15–17 April 2019.
- Rodenbeck, C.T.; Beun, J.; Raj, R.G.; Lipps, R.D. Vibrometry and sound reproduction of acoustic sources on moving platforms using millimeter wave pulse-Doppler radar. IEEE Access 2020, 8, 27676–27686.
- Coleman, G.B.; Andrews, H.C. Image segmentation by clustering. Proc. IEEE 1979, 67, 773–785.
- Haralick, R.M.; Shapiro, L.G. Image segmentation techniques. CVGIP 1985, 29, 100–132.
- Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
- Profeta, A.; Rodriguez, A.; Clouse, H.S. Convolutional neural networks for synthetic aperture radar classification. In Proceedings of the SPIE Conference on Algorithms for Synthetic Aperture Radar Imagery XXIII, Baltimore, MD, USA, 21 April 2016.
- Soldin, R.J.; MacDonald, D.N.; Reisman, M.; Konz, L.R.; Rouse, R.; Overman, T.L. HySARNet: A hybrid machine learning approach to Synthetic Aperture Radar automatic target recognition. In Proceedings of the SPIE Conference on Automatic Target Recognition XXIX, Baltimore, MD, USA, 15–18 April 2019.
- Shao, J.; Qu, C.; Li, J. A performance analysis of convolutional neural network models in SAR target recognition. In Proceedings of the 2017 IEEE Conference on SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017.
- Morgan, D.A.E. Deep convolutional neural networks for ATR from SAR imagery. In Proceedings of the SPIE Conference on Algorithms for Synthetic Aperture Radar Imagery XXII, Baltimore, MD, USA, 23 April 2015.
- Cha, M.; Majumdar, A.; Kung, H.T.; Barber, J. Improving SAR automatic target recognition using simulated images under deep residual refinements. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2606–2610.
- Fox, M.R.; Narayanan, R.M. Application and performance of convolutional neural networks to SAR. In Proceedings of the SPIE Conference on Radar Sensor Technology XXII, Orlando, FL, USA, 16–18 April 2018.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Lin, D.D.; Talathi, S.S.; Annapureddy, V.S. Fixed point quantization of deep convolutional networks. In Proceedings of the 33rd International Conference on Machine Learning (ICML’16), New York, NY, USA, 20–22 June 2016; pp. 2849–2858.
- Jacob, B.; Kligys, S.; Chen, B.; Zhu, M.; Tang, M.; Howard, A.; Adam, H.; Kalenichenko, D. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2704–2713.
- Bruna, J.; Mallat, S. Invariant scattering convolution networks. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1872–1886.
- Mallat, S. Group invariant scattering. Commun. Pure Appl. Math. 2012, 65, 1331–1398.
- Oyallon, E.; Mallat, S.; Sifre, L. Generic deep networks with wavelet scattering. arXiv 2014, arXiv:1312.5940.
- Soro, B.; Lee, C. A wavelet scattering feature extraction approach for deep neural network based indoor fingerprinting localization. Sensors 2019, 19, 1790.
- Szu, H.H. Why adaptive wavelet transform? In Proceedings of the SPIE Conference on Visual Information Processing II, Orlando, FL, USA, 14–16 April 1993; pp. 280–292.
- Xiao, Q.; Ge, G.; Wang, J. The neural network adaptive filter model based on wavelet transform. In Proceedings of the IEEE 2009 Ninth International Conference on Hybrid Intelligent Systems, Shenyang, China, 12–14 August 2009; pp. 529–534.
- Xiong, H.; Zhang, T.; Moon, Y.S. A translation- and scale-invariant adaptive wavelet transform. IEEE Trans. Image Process. 2000, 9, 2100–2108.
- Nadella, S.; Singh, A.; Omkar, S.N. Aerial scene understanding using deep wavelet scattering network and conditional random field. In Proceedings of the 4th European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 205–214.
- Singh, A.; Kingsbury, N. Dual-tree wavelet scattering network with parametric log transformation for object classification. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2622–2626.
- Wu, J.; Jiang, L.; Han, X.; Senhadji, L.; Shu, H. Performance evaluation of wavelet scattering network in image texture classification in various color spaces. arXiv 2014, arXiv:1407.6423.
- Li, B.H.; Zhang, J.; Zheng, W.S. HEp-2 cells staining patterns classification via wavelet scattering network and random forest. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 406–410.
- Khan, A.A.; Dhawan, A.; Akhlaghi, N.; Majdi, J.A.; Sikdar, S. Application of wavelet scattering networks in classification of ultrasound image sequences. In Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), Washington, DC, USA, 6–9 September 2017.
- Shi, X.; Zhou, F.; Yang, S.; Zhang, Z.; Su, T. Automatic target recognition for synthetic aperture radar images based on super-resolution generative adversarial network and deep convolutional neural network. Remote Sens. 2019, 11, 135.
- Rodriguez, R.; Dokladalova, E.; Dokladal, P. Rotation invariant CNN using scattering transform for image classification. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 654–658.
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–356.
- Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. Lect. Notes Comput. Sci. 2010, 6314, 778–792.
- Popa, C.A. Complex-valued convolutional neural networks for real-valued image classification. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 816–822.
- Fox, M.R.; Raj, R.G.; Narayanan, R.M. Quantized wavelet scattering networks for signal classification. In Proceedings of the SPIE Conference on Radar Sensor Technology XXIII, Baltimore, MD, USA, 15–18 April 2019.
- Andén, J.; Sifre, L.; Mallat, S.; Kapoko, M.; Lostanlen, V.; Oyallon, E. ScatNet. Available online: http://www.di.ens.fr/data/software/scatnet (accessed on 28 April 2020).
- Frazier, M.; Jawerth, B.; Weiss, G. Littlewood–Paley Theory and the Study of Function Spaces, 1st ed.; American Mathematical Society: Providence, RI, USA, 1991; pp. 42–49.
- Meyer, Y. Wavelets and Operators: Volume 1, 1st ed.; Cambridge University Press: Cambridge, UK, 1995; pp. 18–65.
- Fox, M.R. Quantization and Adaptivity of Wavelet Scattering Networks for Classification Purposes. Master’s Thesis, The Pennsylvania State University, University Park, PA, USA, May 2020.
- Arthur, D.; Vassilvitskii, S. K-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’07), New Orleans, LA, USA, 7–9 January 2007; pp. 1027–1035.
- Michael, J.R.; Schucany, W.R.; Haas, R.W. Generating random variates using transformations with multiple roots. Am. Stat. 1976, 30, 88–90.
- Marsaglia, G.; Tsang, W.W. A simple method for generating gamma variables. ACM Trans. Math. Softw. 2000, 26, 363–372.
- Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. The balanced accuracy and its posterior distribution. In Proceedings of the IEEE 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 3121–3124.
M | J | L | Number of Paths
---|---|---|---
0 | – | – | 1
1 | 3 | 1 | 4
1 | 3 | 2 | 7
1 | 3 | 8 | 25
1 | 5 | 1 | 6
1 | 5 | 2 | 11
1 | 5 | 8 | 41
2 | 3 | 1 | 7
2 | 3 | 2 | 19
2 | 5 | 1 | 16
2 | 5 | 2 | 51
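The path counts in the table are consistent with the closed form Σ_{m=0}^{M} L^m · C(J, m): each order-m path picks m strictly decreasing scales out of J and one of L orientations at each stage. This is an inference from the tabulated values rather than a formula quoted from the paper; a short Python check (the function name is illustrative):

```python
from math import comb

def num_paths(M, J, L):
    """Number of scattering paths of order <= M for a WSN with J scales
    and L orientations, assuming the scales along a path must be
    strictly decreasing while orientations are unconstrained."""
    return sum(L**m * comb(J, m) for m in range(M + 1))
```

For example, `num_paths(2, 5, 2)` reproduces the 51 paths in the last row of the table.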
Parameter | Value
---|---
σϕ | 0.8
σψ | 0.8
S | 0.5
ξ | 2.356
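These parameter values (σψ = σϕ = 0.8, slant S = 0.5, and center frequency ξ = 2.356 ≈ 3π/4) appear to match the standard Morlet filter-bank defaults used in scattering implementations such as ScatNet. As an illustration only (the helper below is hypothetical, not the authors' code), a 2D Morlet filter with these settings can be built as follows, with β chosen so the filter has exactly zero mean, as a wavelet requires:

```python
import numpy as np

def morlet_2d(n, sigma=0.8, xi=2.356, slant=0.5):
    """2D Morlet wavelet on an n x n grid: a complex carrier e^{i xi x}
    under an elliptical Gaussian envelope with aspect ratio `slant`,
    corrected by beta so that the filter sums to zero."""
    x, y = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                       indexing="ij")
    envelope = np.exp(-(x**2 + (y / slant)**2) / (2 * sigma**2))
    carrier = np.exp(1j * xi * x)
    beta = np.sum(envelope * carrier) / np.sum(envelope)  # zero-mean correction
    psi = envelope * (carrier - beta)
    return psi / np.linalg.norm(psi)  # unit energy
```

Rotated and dilated copies of this mother wavelet would then populate the ψ_λ filter bank of Figure 4.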
SNR (dB) | WSN-SVM | SVM | ResNet18
---|---|---|---
∞ | 0.974 ± 0.0071 | 0.973 ± 0.011 | 0.818 ± 0.0098
50 | 0.973 ± 0.0053 | 0.972 ± 0.011 | 0.819 ± 0.011
20 | 0.871 ± 0.013 | 0.835 ± 0.021 | 0.799 ± 0.016
10 | 0.681 ± 0.020 | 0.659 ± 0.024 | 0.774 ± 0.027
2 | 0.557 ± 0.015 | 0.571 ± 0.023 | 0.754 ± 0.024
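The reference list cites Brodersen et al. on balanced accuracy, the mean of the per-class recalls, which is robust to class imbalance; the evaluation metric in Section 4.2 is plausibly of this form. A minimal sketch (hypothetical helper, not the authors' evaluation code):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes):
    """Balanced accuracy: average the recall of each class that is
    present in y_true, so rare classes count as much as common ones."""
    recalls = []
    for c in range(n_classes):
        mask = (y_true == c)
        if mask.any():
            recalls.append(np.mean(y_pred[mask] == c))
    return float(np.mean(recalls))
```

With balanced test folds this reduces to ordinary accuracy, which is why the two metrics are often interchangeable on the evenly sampled MSTAR splits.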
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Raj, R.G.; Fox, M.R.; Narayanan, R.M. Target Classification in Synthetic Aperture Radar Images Using Quantized Wavelet Scattering Networks. Sensors 2021, 21, 4981. https://doi.org/10.3390/s21154981