
Unified feature extraction framework based on contrastive learning

Published: 22 December 2022

Abstract

Feature extraction is an efficient approach for alleviating the curse of dimensionality in high-dimensional data. Contrastive learning (CL), a popular self-supervised learning method, has recently attracted considerable attention. In this study, based on a new perspective of CL, we propose a unified framework that is suitable for both unsupervised and supervised feature extraction. In the framework, two CL graphs are first constructed to uniquely define the positive and negative pairs. Subsequently, the projection matrix is determined by minimizing the contrastive loss function. By defining positive and negative pairs appropriately, the framework unifies unsupervised and supervised feature extraction. We propose three specific methods under this framework: unsupervised CL, supervised CL without local preservation, and supervised CL with local preservation. Finally, numerical experiments on six real datasets demonstrate the superior performance of the proposed framework compared with existing methods.
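The pipeline sketched in the abstract (build two CL graphs that fix the positive and negative pairs, then solve for a projection matrix that minimizes a contrastive loss) can be illustrated with a small linear example. The following NumPy sketch is a hypothetical illustration only: the k-nearest-neighbour rule for unsupervised positive pairs, the shared-label rule for supervised positive pairs, the trace-ratio eigen-solver, and all function and parameter names are assumptions made for exposition, not the authors' exact formulation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def contrastive_graph_projection(X, y=None, k=5, dim=2, reg=1e-3):
    # Hypothetical sketch: positive pairs come from a k-NN graph (unsupervised)
    # or from shared labels (supervised); every remaining pair is treated as negative.
    n, d = X.shape
    if y is None:
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)
        A_pos = np.zeros((n, n))
        for i, neighbours in enumerate(idx[:, 1:]):  # drop the self-neighbour
            A_pos[i, neighbours] = 1.0
        A_pos = np.maximum(A_pos, A_pos.T)           # symmetrise the positive graph
    else:
        A_pos = (y[:, None] == y[None, :]).astype(float)
        np.fill_diagonal(A_pos, 0.0)
    A_neg = 1.0 - A_pos                              # complementary negative graph
    np.fill_diagonal(A_neg, 0.0)

    # Graph Laplacians: their quadratic forms sum pairwise squared distances.
    L_pos = np.diag(A_pos.sum(axis=1)) - A_pos
    L_neg = np.diag(A_neg.sum(axis=1)) - A_neg
    S_pos = X.T @ L_pos @ X + reg * np.eye(d)        # compactness of positive pairs
    S_neg = X.T @ L_neg @ X                          # spread of negative pairs

    # Pull positive pairs together and push negative pairs apart by keeping the
    # directions with the largest negative-to-positive scatter ratio.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_pos, S_neg))
    order = np.argsort(-eigvals.real)
    P = eigvecs[:, order[:dim]].real                 # d x dim projection matrix
    return X @ P, P

# Tiny usage example on synthetic data: unsupervised, then supervised.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = rng.integers(0, 3, size=60)
Z_unsup, _ = contrastive_graph_projection(X, k=5, dim=2)
Z_sup, _ = contrastive_graph_projection(X, y=y, dim=2)
print(Z_unsup.shape, Z_sup.shape)                    # (60, 2) (60, 2)

In this linear, graph-based reading, the eigen-solver plays the role of minimizing a contrastive loss in closed form; the paper's actual loss and optimisation procedure may differ.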


Highlights

A unified feature extraction framework based on contrastive learning is proposed.
The framework introduces a novel approach to defining positive and negative pairs.
The framework can be interpreted as maximizing the mutual information of positive pairs (see the note below).
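The mutual-information reading in the last highlight is usually formalised through the InfoNCE bound from the CL literature. The relationship below is standard background rather than the paper's own derivation; here $z_i$ and $z_i^{+}$ denote a projected positive pair, $\tau$ a temperature, $\mathrm{sim}(\cdot,\cdot)$ a similarity score, and $N$ the number of candidates (one positive plus $N-1$ negatives):

$$
\mathcal{L}_{\mathrm{InfoNCE}}
= -\,\mathbb{E}\!\left[\log
\frac{\exp\!\big(\mathrm{sim}(z_i, z_i^{+})/\tau\big)}
     {\sum_{j=1}^{N}\exp\!\big(\mathrm{sim}(z_i, z_j)/\tau\big)}\right],
\qquad
I\big(z_i; z_i^{+}\big) \;\ge\; \log N - \mathcal{L}_{\mathrm{InfoNCE}}.
$$

Minimizing the contrastive loss therefore tightens a lower bound on the mutual information carried by positive pairs.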

Published In

Knowledge-Based Systems, Volume 258, Issue C, December 2022, 1196 pages

Publisher

Elsevier Science Publishers B.V., Netherlands

Author Tags

1. Feature extraction
2. Dimension reduction
3. Self-supervised learning
4. Contrastive learning
