
Supervised Feature Selection by Robust Sparse Reduced-Rank Regression

  • Conference paper
Advanced Data Mining and Applications (ADMA 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10086)


Abstract

Feature selection, which keeps discriminative features and removes noisy and irrelevant ones from high-dimensional data, has become a vitally important technique in machine learning, since noisy and irrelevant features can degrade the performance of classification and regression. Feature selection is also widely used in real applications because its results are interpretable. Motivated by the success of sparse learning in machine learning and of reduced-rank regression in statistics, in this paper we propose a novel supervised feature selection method that couples a reduced-rank regression model with a sparsity-inducing regularizer. In contrast to state-of-the-art feature selection methods, the proposed method: (1) is built on an \(\ell _{2,p}\)-norm loss function and an \(\ell _{2,p}\)-norm regularizer, integrating subspace learning and feature selection into a unified framework; (2) selects discriminative features flexibly, since the \(\ell _{2,p}\)-norm can control the degree of sparsity and is robust to outlier samples; and (3) is both interpretable and stable, because it embeds subspace learning (which yields stable models) into the feature selection framework (which yields interpretable results). Experimental results on eight multi-output data sets demonstrate the effectiveness of the proposed model compared with state-of-the-art regression methods.
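
The abstract does not state the objective explicitly, but its ingredients (a reduced-rank coefficient matrix, an \(\ell _{2,p}\)-norm loss and an \(\ell _{2,p}\)-norm regularizer) suggest a formulation of the following shape. This is a sketch of one plausible reading, not the paper's exact notation:

\[ \min_{A,B}\ \|Y - XAB\|_{2,p}^{p} + \lambda\,\|AB\|_{2,p}^{p}, \qquad \|W\|_{2,p} = \Bigl(\sum_{i=1}^{d} \|w^{i}\|_{2}^{p}\Bigr)^{1/p}, \]

where \(X \in \mathbb{R}^{n \times d}\) holds the samples, \(Y \in \mathbb{R}^{n \times c}\) the multi-output targets, \(W = AB\) with \(A \in \mathbb{R}^{d \times r}\) and \(B \in \mathbb{R}^{r \times c}\) constrains the coefficients to rank \(r\) (the subspace-learning part), and \(w^{i}\) is the \(i\)-th row of \(W\). Rows of \(W\) driven to zero correspond to discarded features, and \(p \in (0, 2]\) tunes both the degree of sparsity and the robustness of the loss to outlier samples, matching claims (1) and (2) of the abstract.

Objectives of this family are commonly minimized by iteratively reweighted least squares (IRLS). The sketch below is illustrative only: it solves the simpler full-rank problem \(\min_{W} \|Y - XW\|_{2,p}^{p} + \lambda\|W\|_{2,p}^{p}\), omits the factorization \(W = AB\), and is not the paper's algorithm.

import numpy as np

def irls_l2p_regression(X, Y, p=1.0, lam=0.5, n_iter=50, eps=1e-8):
    """IRLS sketch for min_W ||Y - XW||_{2,p}^p + lam * ||W||_{2,p}^p.

    Illustrative only: full-rank, without the reduced-rank
    factorization W = AB used in the paper.
    """
    # Start from the ordinary least-squares solution.
    W = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(n_iter):
        R = Y - X @ W
        # Row weights induced by the l2,p-norm: (p/2) * ||row||_2^(p-2).
        d1 = (p / 2.0) * (np.linalg.norm(R, axis=1) + eps) ** (p - 2.0)
        d2 = (p / 2.0) * (np.linalg.norm(W, axis=1) + eps) ** (p - 2.0)
        # Weighted ridge system: (X^T D1 X + lam * D2) W = X^T D1 Y.
        H = X.T @ (d1[:, None] * X) + lam * np.diag(d2)
        W = np.linalg.solve(H, X.T @ (d1[:, None] * Y))
    return W

# Toy check: only the first 5 of 20 features influence the 3 targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
W_true = np.zeros((20, 3))
W_true[:5] = rng.standard_normal((5, 3))
Y = X @ W_true + 0.01 * rng.standard_normal((100, 3))
W = irls_l2p_regression(X, Y, p=1.0, lam=0.5)
print(np.argsort(-np.linalg.norm(W, axis=1))[:5])  # top-5 ranked features

Ranking features by the row norms \(\|w^{i}\|_{2}\) of the learned coefficient matrix and keeping the largest is the standard way such models are used for selection; smaller \(p\) gives sparser rows at the cost of a harder, non-convex problem.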


Notes

  1. http://archive.ics.uci.edu/ml/.

  2. http://www.csie.ntu.edu.tw/~cjlin/libsvm/.


Acknowledgment

This work was supported in part by the China “1000-Plan” National Distinguished Professorship; the National Natural Science Foundation of China (Grant Nos. 61263035, 61363009, 61573270 and 61672177); the China 973 Program (Grant No. 2013CB329404); the China Key Research Program (Grant No. 2016YFB1000905); the Guangxi Natural Science Foundation (Grant No. 2015GXNSFCB139011); the China Postdoctoral Science Foundation (Grant No. 2015M570837); the Innovation Project of Guangxi Graduate Education (Grant No. YCSZ2016046); the Guangxi High Institutions’ Program of Introducing 100 High-Level Overseas Talents; the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing; the Guangxi “Bagui” Teams for Innovation and Research; and the project “Application and Research of Big Data Fusion in Inter-City Traffic Integration of the Xijiang River-Pearl River Economic Belt”.

Author information


Corresponding author

Correspondence to Xiaofeng Zhu.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Hu, R., Zhu, X., He, W., Zhang, J., Zhang, S. (2016). Supervised Feature Selection by Robust Sparse Reduced-Rank Regression. In: Li, J., Li, X., Wang, S., Li, J., Sheng, Q. (eds) Advanced Data Mining and Applications. ADMA 2016. Lecture Notes in Computer Science, vol 10086. Springer, Cham. https://doi.org/10.1007/978-3-319-49586-6_50

  • DOI: https://doi.org/10.1007/978-3-319-49586-6_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-49585-9

  • Online ISBN: 978-3-319-49586-6

  • eBook Packages: Computer Science, Computer Science (R0)
