
Analysis of Encoder Representations as Features Using Sparse Autoencoders in Gradient Boosting and Ensemble Tree Models

  • Conference paper
Advances in Artificial Intelligence – IBERAMIA 2018 (IBERAMIA 2018)

Abstract

The performance of learning algorithms depends on factors such as the training strategy, the parameter-tuning approach, and the complexity of the data; in this setting, the extracted features play a fundamental role. Since not all features carry useful information, some can add noise and thus degrade the performance of the algorithms. To address this issue, a variety of techniques such as feature extraction, feature engineering, and feature selection have been developed, most of which fall into the unsupervised learning category. This study explores the generation of such features using a set of k encoder layers, which produce a low-dimensional feature set F. The encoder layers were trained as a two-layer sparse autoencoder, where PCA was used to estimate an appropriate number of hidden units in the first layer. A set of four algorithms from the gradient boosting and ensemble tree families was then trained on the generated features. Finally, the performance obtained with the encoder features was compared against that obtained with the original features. The results show that the reduced features achieve equal or better results, and that the improvement is greater on highly imbalanced data sets.
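The pipeline described in the abstract can be illustrated end to end. The snippet below is a minimal sketch in Python, assuming scikit-learn and Keras; the synthetic data set, the 95% explained-variance threshold for sizing the first encoder layer, the halved bottleneck width, the L1 activity penalty used to induce sparsity, and the single GradientBoostingClassifier are all illustrative assumptions standing in for the paper's exact configuration, which the abstract does not specify.

```python
# Minimal sketch of the described pipeline (assumptions noted above):
# PCA sizes the first hidden layer, a two-layer sparse autoencoder produces
# the reduced feature set F, and a boosting model is trained on F vs. X.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Toy, imbalanced data standing in for a benchmark set (hypothetical stand-in).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Step 1: estimate the first hidden-layer width with PCA, here as the number
# of components needed to explain 95% of the variance (an assumed threshold).
pca = PCA(n_components=0.95).fit(X_train)
h1 = pca.n_components_        # width of the first encoder layer
h2 = max(2, h1 // 2)          # bottleneck width; halving is an assumption

# Step 2: two-layer sparse autoencoder; sparsity via an L1 activity penalty.
inp = keras.Input(shape=(X.shape[1],))
e1 = layers.Dense(h1, activation="relu",
                  activity_regularizer=regularizers.l1(1e-4))(inp)
e2 = layers.Dense(h2, activation="relu",
                  activity_regularizer=regularizers.l1(1e-4))(e1)
d1 = layers.Dense(h1, activation="relu")(e2)
out = layers.Dense(X.shape[1], activation="linear")(d1)
autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=50, batch_size=64, verbose=0)

# Step 3: keep only the encoder and generate the reduced feature set F.
encoder = keras.Model(inp, e2)
F_train = encoder.predict(X_train, verbose=0)
F_test = encoder.predict(X_test, verbose=0)

# Step 4: train the same model on original vs. encoder features and compare.
for name, (Xtr, Xte) in {"original": (X_train, X_test),
                         "encoder": (F_train, F_test)}.items():
    clf = GradientBoostingClassifier(random_state=0).fit(Xtr, y_train)
    print(name, "F1:", round(f1_score(y_test, clf.predict(Xte)), 3))
```

On imbalanced data like this, comparing F1 rather than accuracy between the original and encoder features mirrors the kind of comparison the abstract describes.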



Author information

Correspondence to Luis Aguilar or L. Antonio Aguilar.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Aguilar, L., Aguilar, L.A. (2018). Analysis of Encoder Representations as Features Using Sparse Autoencoders in Gradient Boosting and Ensemble Tree Models. In: Simari, G.R., Fermé, E., Gutiérrez Segura, F., Rodríguez Melquiades, J.A. (eds) Advances in Artificial Intelligence – IBERAMIA 2018. IBERAMIA 2018. Lecture Notes in Computer Science, vol. 11238. Springer, Cham. https://doi.org/10.1007/978-3-030-03928-8_13

  • DOI: https://doi.org/10.1007/978-3-030-03928-8_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03927-1

  • Online ISBN: 978-3-030-03928-8

  • eBook Packages: Computer Science, Computer Science (R0)
