Collaborative Representation Using Non-Negative Samples for Image Classification
Figure 1. The pipeline of NCRC. In the first step, l2 normalization is performed on all samples. Next, the collaborative representation coefficients are calculated by representing the test sample collaboratively via l2 regularization over all training samples. In the third step, the ReLU function filters the collaborative coefficients, mapping the negative ones to zero. Afterwards, the newly mapped coefficients are used to represent the test sample. In the last step, nearest subspace classification is performed to classify each test sample.
Figure 2. Coefficients of CRC (left) and NCRC (right) for an image from the AR database. The horizontal axis is the index of a sample in the training set; the vertical axis is the coefficient value. On the left, samples with negative (<0) coefficients are enclosed in a dotted red rectangle, showing that many CRC coefficients are negative; on the right, no negative coefficients remain.
Figure 3. Image samples from the AR face database.
Figure 4. Image samples from the LFW face database.
Figure 5. Image samples from the MUCT face database.
Figure 6. Image samples from the PolyU palmprint database.
Figure 7. Accuracy vs. the number of training samples on the AR database.
Figure 8. Accuracy vs. the number of training samples on the LFW database.
Figure 9. Accuracy vs. the number of training samples on the MUCT database.
Figure 10. Comparison of classification times on different databases: MUCT (left); PolyU (right).
Abstract
1. Introduction
- We propose a novel image classification algorithm that uses non-negative samples and builds on the collaborative representation-based classifier (CRC).
- The proposed method enhances the sparsity of CRC by introducing the Rectified Linear Unit (ReLU) function, which maps negative coefficients to zero and thereby improves the recognition rate (the mapping is restated below).
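Since Equation (5) is not reproduced in this excerpt, a minimal LaTeX restatement of the elementwise ReLU mapping is given here; the notation ρ_j for the j-th collaborative representation coefficient is ours.

```latex
% ReLU applied elementwise to the collaborative representation
% coefficients (presumed form of Equation (5)):
f(\rho_j) = \max\left( 0, \rho_j \right), \qquad j = 1, \ldots, n
```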
2. CRC and Non-Negative CRC
2.1. Collaborative Representation-Based Classifier
Algorithm 1 Collaborative representation-based classifier.
1: Normalize each training sample and the test sample to have unit l2 norm.
2: Calculate the collaborative representation coefficient vector using Equation (1).
3: Calculate the residuals between the test sample and the representation of each class, and obtain the identity of the test sample using Equation (3).
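Since Equations (1)–(3) are not reproduced in this excerpt, the following LaTeX restates the standard regularized least-squares coding and nearest-subspace rule of CRC (after Zhang et al., cited in the References); the symbols X (training matrix), y (test sample), λ (regularization parameter), and X_i (samples of class i) are our notation and are assumed to match the paper's equations.

```latex
% Collaborative coding of the test sample y over all training samples X
% (regularized least squares; presumed form of Equation (1)):
\hat{\rho} = \left( X^{\top} X + \lambda I \right)^{-1} X^{\top} y
% Class-wise residuals and the CRC decision rule, where X_i and
% \hat{\rho}_i are the samples and coefficients of class i
% (presumed form of Equation (3)):
\operatorname{identity}(y) = \arg\min_{i} \,
    \frac{\lVert y - X_i \hat{\rho}_i \rVert_2}{\lVert \hat{\rho}_i \rVert_2}
```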
2.2. Non-Negative Collaborative Representation Classifier
Algorithm 2 Non-negative collaborative representation-based classifier.
1: Normalize each training sample and the test sample to have unit l2 norm.
2: Calculate the collaborative representation coefficient vector using Equation (1).
3: Use the ReLU function described in Equation (5) to map the collaborative representation coefficients to non-negative values.
4: Represent the test sample using the non-negative representation described in Equation (6).
5: Calculate the residuals between the test sample and the representation of each class using Equation (7).
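To make Algorithm 2 concrete, below is a minimal NumPy sketch of the five steps. It assumes the standard regularized least-squares coding of CRC for Equation (1) and a plain class-wise residual for Equation (7), since those equations are not reproduced in this excerpt; the function and variable names (`ncrc_classify`, `X`, `y`, `labels`, `lam`) are ours, not the authors'.

```python
import numpy as np

def ncrc_classify(X, y, labels, lam=0.01):
    """Sketch of NCRC: code, ReLU-filter, then nearest-subspace classify.

    X      : (d, n) array, l2-normalized training samples as columns (step 1).
    y      : (d,) array, l2-normalized test sample (step 1).
    labels : (n,) array, class label of each training column.
    lam    : l2 regularization parameter lambda.
    """
    n = X.shape[1]
    # Step 2: collaborative coding via regularized least squares
    # (assumed closed form of Equation (1)).
    rho = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    # Step 3: ReLU maps negative coefficients to zero (Equation (5)).
    rho = np.maximum(rho, 0.0)
    # Steps 4-5: represent y class by class with the non-negative
    # coefficients and pick the class with the smallest residual.
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - X[:, mask] @ rho[mask])
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```

With l2-normalized columns, `X.T @ X` is the Gram matrix of the training set, so the linear solve stays well conditioned for moderate values of `lam`.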
3. Experiments
3.1. Dataset Description
3.2. Experiments on the AR Face Database
3.3. Experiments on the LFW Face Database
3.4. Experiments on the MUCT Face Database
3.5. Experiments on the PolyU Palmprint Database
3.6. Comparison of Classification Time
3.7. Discussion
- For face recognition, NCRC achieves better results than SRC and CRC as the number of training samples increases, with the highest improvement reaching 17.3% on the LFW database. Furthermore, NCRC (AR: 93.06%; LFW: 55.32%; MUCT: 77.78%) is more accurate than S*CRC (AR: 82.50%; LFW: 50.25%; MUCT: 76.85%) and ProCRC (AR: 91.81%; LFW: 51.66%; MUCT: 74.32%), which are refined classifiers built on SRC (AR: 82.50%; LFW: 38.02%; MUCT: 77.07%) and CRC (AR: 91.94%; LFW: 52.00%; MUCT: 75.86%).
- For palmprint recognition, NCRC (95.04%) also shows a competitive average recognition rate, reaching its highest improvement of 1.5% over state-of-the-art sparse representation methods such as SRC (95.03%), CRC (94.91%), S*CRC (94.92%), and ProCRC (93.54%). This indicates that the proposed method is effective not only in face recognition but also in other image classification tasks.
- Besides the recognition rate, NCRC (MUCT: 12.1 ms) consumed less classification time than SRC (MUCT: 50.9 ms), CRC (MUCT: 16.0 ms), and S*CRC (MUCT: 113 ms), implying its efficiency in image classification. Although ProCRC ran faster than NCRC, the proposed method outperformed all classifiers on average in terms of recognition rate.
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
SRC | Sparse representation-based classifier
CRC | Collaborative representation-based classifier
ProCRC | Probabilistic collaborative representation-based classifier
NCRC | Non-negative collaborative representation-based classifier
KNN | K-nearest neighbor classifier
SVM | Support vector machine
ReLU | Rectified linear unit
References
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation: Lake Tahoe, NV, USA, 2012; pp. 1097–1105. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Wright, J.; Ma, Y.; Mairal, J.; Sapiro, G.; Huang, T.S.; Yan, S. Sparse representation for computer vision and pattern recognition. Proc. IEEE 2010, 98, 1031–1044. [Google Scholar] [CrossRef]
- Huang, K.; Aviyente, S. Sparse representation for signal classification. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation: Vancouver, BC, Canada, 2007; pp. 609–616. [Google Scholar]
- Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
- Xu, Y.; Zhang, D.; Yang, J.; Yang, J.Y. A two-phase test sample sparse representation method for use with face recognition. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 1255–1262. [Google Scholar]
- Fan, Z.; Ni, M.; Zhu, Q.; Liu, E. Weighted sparse representation for face recognition. Neurocomputing 2015, 151, 304–309. [Google Scholar] [CrossRef]
- Chang, L.; Yang, J.; Li, S.; Xu, H.; Liu, K.; Huang, C. Face recognition based on stacked convolutional autoencoder and sparse representation. In Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–4. [Google Scholar] [CrossRef]
- Shu, T.; Zhang, B.; Tang, Y. Novel noninvasive brain disease detection system using a facial image sensor. Sensors 2017, 17, 2843. [Google Scholar] [CrossRef] [PubMed]
- Zhao, S.; Zhang, B.; Chen, C.P. Joint deep convolutional feature representation for hyperspectral palmprint recognition. Inf. Sci. 2019, 489, 167–181. [Google Scholar] [CrossRef]
- Jin, W.; Gong, F.; Zeng, X.; Fu, R. Classification of clouds in satellite imagery using adaptive fuzzy sparse representation. Sensors 2016, 16, 2153. [Google Scholar] [CrossRef] [PubMed]
- Shi, L.; Song, X.; Zhang, T.; Zhu, Y. Histogram-based CRC for 3D-aided pose-invariant face recognition. Sensors 2019, 19, 759. [Google Scholar] [CrossRef] [PubMed]
- Timofte, R.; Van Gool, L. Weighted collaborative representation and classification of images. In Proceedings of the IEEE 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 1606–1610. [Google Scholar]
- Akhtar, N.; Shafait, F.; Mian, A. Efficient classification with sparsity augmented collaborative representation. Pattern Recognit. 2017, 65, 136–145. [Google Scholar] [CrossRef]
- Zeng, S.; Gou, J.; Yang, X. Improving sparsity of coefficients for robust sparse and collaborative representation-based image classification. Neural Comput. Appl. 2018, 30, 2965–2978. [Google Scholar] [CrossRef]
- Deng, W.; Hu, J.; Guo, J. In defense of sparsity based face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 25–27 June 2013; pp. 399–406. [Google Scholar]
- Cai, S.; Zhang, L.; Zuo, W.; Feng, X. A probabilistic collaborative representation based approach for pattern classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2950–2959. [Google Scholar]
- Dong, X.; Zhang, H.; Zhu, L.; Wan, W.; Wang, Z.; Wang, Q.; Guo, P.; Ji, H.; Sun, J. Weighted locality collaborative representation based on sparse subspace. J. Vis. Commun. Image Represent. 2019, 58, 187–194. [Google Scholar] [CrossRef]
- Zeng, S.; Yang, X.; Gou, J. Multiplication fusion of sparse and collaborative representation for robust face recognition. Multimed. Tools Appl. 2017, 76, 20889–20907. [Google Scholar] [CrossRef]
- Zheng, C.; Wang, N. Collaborative representation with k-nearest classes for classification. Pattern Recognit. Lett. 2019, 117, 30–36. [Google Scholar] [CrossRef]
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
- Zeng, S.; Zhang, B.; Zhang, Y.; Gou, J. Collaboratively weighting deep and classic representation via l2 regularization for image classification. In Proceedings of the Asian Conference on Machine Learning, Beijing, China, 14–16 November 2018; pp. 502–517. [Google Scholar]
- Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the IEEE 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478. [Google Scholar]
- Zhu, P.; Zhang, L.; Hu, Q.; Shiu, S.C. Multi-scale patch based collaborative representation for face recognition with margin distribution optimization. In European Conference on Computer Vision; Springer: Berlin, Germany, 2012; pp. 822–835. [Google Scholar]
- Zhu, P.; Zuo, W.; Zhang, L.; Shiu, S.C.K.; Zhang, D. Image set-based collaborative representation for face recognition. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1120–1132. [Google Scholar]
- Song, X.; Chen, Y.; Feng, Z.H.; Hu, G.; Zhang, T.; Wu, X.J. Collaborative representation based face classification exploiting block weighted LBP and analysis dictionary learning. Pattern Recognit. 2019, 88, 127–138. [Google Scholar] [CrossRef]
- Martinez, A.M.; Benavente, R. The AR face database. CVC Technical Report 24; June 1998; Available online: http://www.cat.uab.cat/Public/Publications/1998/MaB1998/CVCReport24.pdf (accessed on 6 June 2019).
- Learned-Miller, E.; Huang, G.B.; RoyChowdhury, A.; Li, H.; Hua, G. Labeled faces in the wild: A survey. In Advances in Face Detection and Facial Image Analysis; Springer: Berlin, Germany, 2016; pp. 189–248. [Google Scholar]
- Milborrow, S.; Morkel, J.; Nicolls, F. The MUCT landmarked face database. Pattern Recognit. Assoc. S. Afr. 2010, 179–184. [Google Scholar]
- PolyU. PolyU Multispectral Palmprint Database; The Hong Kong Polytechnic University: Hong Kong, China, 2003. [Google Scholar]
- Aggarwal, C.C. Data Mining: The Textbook; Springer: Berlin, Germany, 2015. [Google Scholar]
Database | Classes | Samples | Image Size | Dimension
---|---|---|---|---
AR | 120 | 3120 | | 2-D
LFW | 86 | 1251 | | 2-D
MUCT | 276 | 3755 | | 2-D
PolyU Palmprint | 386 | 7752 | | 2-D
Samples | SRC | CRC | S*CRC | ProCRC | SVM | KNN | NCRC |
---|---|---|---|---|---|---|---|
4 | 63.94 | 66.59 | 68.40 | 67.72 | 56.47 | 45.57 | 67.73 |
8 | 62.36 | 71.06 | 70.28 | 70.19 | 53.33 | 50.83 | 72.78 |
12 | 72.44 | 75.18 | 74.34 | 77.62 | 45.95 | 53.45 | 77.20 |
16 | 86.83 | 91.83 | 85.91 | 90.08 | 51.25 | 64.50 | 92.67 |
20 | 82.50 | 91.94 | 82.50 | 91.81 | 54.86 | 65.00 | 93.06 |
λ | 0.001 | 0.01 | 0.1 | 0.2 | 0.3 | 0.4
---|---|---|---|---|---|---
Accuracy | 92.50 | 93.06 | 91.94 | 91.67 | 90.56 | 90.42
Samples | SRC | CRC | S*CRC | ProCRC | SVM | KNN | NCRC |
---|---|---|---|---|---|---|---|
5 | 24.32 | 27.37 | 26.02 | 27.93 | 21.83 | 15.80 | 27.54 |
10 | 30.41 | 35.24 | 34.88 | 35.78 | 23.20 | 19.02 | 37.15 |
15 | 32.36 | 43.36 | 41.53 | 41.53 | 25.85 | 23.32 | 44.62 |
20 | 34.03 | 47.61 | 45.33 | 46.07 | 28.04 | 27.17 | 50.64 |
25 | 35.63 | 49.35 | 46.98 | 51.66 | 26.94 | 27.51 | 52.44 |
30 | 36.78 | 49.58 | 49.58 | 51.66 | 28.14 | 26.91 | 53.20 |
35 | 38.02 | 52.00 | 50.25 | 51.66 | 28.12 | 27.29 | 55.32 |
λ | 0.001 | 0.01 | 0.1 | 0.2 | 0.3 | 0.4
---|---|---|---|---|---|---
Accuracy | 42.1 | 55.07 | 55.32 | 51.33 | 48.25 | 46.67
Samples | SRC | CRC | S*CRC | ProCRC | SVM | KNN | NCRC |
---|---|---|---|---|---|---|---|
1 | 26.58 | 29.26 | 30.30 | 29.26 | 20.06 | 5.92 | 29.72 |
2 | 36.84 | 41.21 | 42.00 | 41.21 | 21.91 | 10.90 | 43.65 |
3 | 42.77 | 45.75 | 46.67 | 45.75 | 20.53 | 12.70 | 46.84 |
4 | 54.55 | 55.60 | 58.66 | 55.60 | 20.11 | 15.50 | 58.28 |
5 | 67.58 | 68.76 | 70.99 | 66.95 | 20.59 | 18.65 | 71.16 |
6 | 74.23 | 73.89 | 74.46 | 72.84 | 24.54 | 24.58 | 75.99 |
7 | 77.07 | 75.86 | 76.85 | 74.32 | 25.56 | 27.87 | 77.78 |
λ | 0.001 | 0.01 | 0.1 | 0.2 | 0.3 | 0.4
---|---|---|---|---|---|---
Accuracy | 74.49 | 77.78 | 74.60 | 71.80 | 69.12 | 67.2
SRC | CRC | S*CRC | ProCRC | SVM | KNN | NCRC |
---|---|---|---|---|---|---|
95.03 | 94.91 | 94.92 | 93.54 | 86.91 | 57.99 | 95.04 |
λ | 0.001 | 0.01 | 0.1 | 0.2 | 0.3 | 0.4
---|---|---|---|---|---|---
Accuracy | 94.9 | 95.04 | 93.4 | 92.28 | 91.26 | 97
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).