
Robust low-rank representation with adaptive graph regularization from clean data

Published in: Applied Intelligence

Abstract

The goal of subspace clustering (SC) methods is to group data samples, which are assumed to be drawn from a union of subspaces, into their underlying subspaces. The self-expressive assumption, under which each sample can be represented in terms of the other samples, has proven effective for the SC problem. Low-rank representation (LRR) is a well-known, recently proposed self-expressive method. However, LRR captures only the global structure of the data and neglects its local structure. Moreover, although real data are often corrupted, the LRR model uses the noisy data rather than the clean data as the dictionary, so the learned similarity matrix may be unreliable. To address these problems, in this paper we propose a novel subspace clustering method called robust low-rank representation with adaptive graph regularization from clean data (RLRR-AGR). In the RLRR-AGR model, a graph regularization term is integrated into the LRR framework, so that graph construction and the subsequent optimization are carried out in a unified framework and the intrinsic non-linear geometric information in the data can be captured. More importantly, the graph is adaptively updated from the clean data, obtained by removing noise from the raw data, rather than from the raw data themselves; this improves clustering performance. An efficient algorithm based on the augmented Lagrangian method is also presented to solve the RLRR-AGR model. Experimental results on several data sets show that the proposed RLRR-AGR model outperforms many state-of-the-art clustering approaches.
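To make the self-expressive idea concrete, the sketch below is not the authors' RLRR-AGR method but a minimal NumPy implementation of the classic LRR baseline it extends: min ||Z||_* + λ||E||_{2,1} s.t. X = XZ + E, solved by an inexact augmented Lagrangian loop of the same general kind the paper uses, followed by the usual affinity construction W = (|Z| + |Zᵀ|)/2. All function names (`svt`, `l21_shrink`, `lrr`) and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau * l2,1 norm
    (models sample-specific corruption, as in the LRR error term)."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale

def lrr(X, lam=0.5, mu=1.0, rho=1.1, n_iter=200, tol=1e-6):
    """Inexact ALM for min ||Z||_* + lam*||E||_{2,1} s.t. X = XZ + E.
    An auxiliary variable J = Z keeps the nuclear-norm step a plain SVT."""
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))   # Lagrange multipliers
    XtX = X.T @ X
    I = np.eye(n)
    for _ in range(n_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)                          # nuclear-norm prox
        Z = np.linalg.solve(XtX + I,                            # least-squares step
                            XtX - X.T @ E + J + (X.T @ Y1 - Y2) / mu)
        E = l21_shrink(X - X @ Z + Y1 / mu, lam / mu)           # l2,1 prox
        R1 = X - X @ Z - E                                      # constraint residuals
        R2 = Z - J
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(mu * rho, 1e6)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E

# Toy usage: clean samples from two (nearly) independent 2-D subspaces of R^20.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X2 = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X = np.hstack([X1, X2])
X /= np.linalg.norm(X, axis=0)                 # unit-norm columns

Z, E = lrr(X)
W = (np.abs(Z) + np.abs(Z).T) / 2.0            # symmetric affinity for spectral clustering
within = W[:15, :15].mean() + W[15:, 15:].mean()
across = 2.0 * W[:15, 15:].mean()
```

On clean data from independent subspaces the recovered `Z` is close to block-diagonal, so within-subspace affinities dominate cross-subspace ones (`within > across`); the paper's contribution is, in part, to keep this property when the dictionary itself is corrupted.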



Funding

This research was supported by the National Natural Science Foundation of China (NSFC) (Nos. 61976005, 61772277); the Anhui Natural Science Foundation (No. 1908085MF183); the Safety-Critical Software Key Laboratory Research Program (Grant No. NJ2018014); the Training Program for Young and Middle-aged Top Talents of Anhui Polytechnic University (No. 201812); the State Key Laboratory for Novel Software Technology (Nanjing University) Research Program (No. KFKT2019B23); and the Major Project of Natural Science Research in Colleges and Universities of Anhui Province (No. KJ2019ZD15).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Gui-Fu Lu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Lu, GF., Wang, Y. & Tang, G. Robust low-rank representation with adaptive graph regularization from clean data. Appl Intell 52, 5830–5840 (2022). https://doi.org/10.1007/s10489-021-02749-w
