DOI: 10.1145/3505688.3505707

Kernel-Based Autoencoders for Large-Scale Representation Learning

Published: 09 April 2022

Abstract

A primary challenge in kernel-based representation learning comes from massive data volumes and an excess of noisy features. To address this challenge, this paper investigates a deep stacked autoencoder framework, named improved kernelized pseudoinverse learning autoencoders (IKPILAE), which extracts representation information from each building block. IKPILAE consists of two core modules. The first module extracts random features from large-scale training data using an approximate kernel method. The second module is a standard pseudoinverse learning algorithm. To reduce the tendency of neural networks to overfit, a weight-decay regularization term is added to the loss function so that a more generalized representation is learned. Through numerical experiments on a benchmark dataset, we demonstrate that IKPILAE outperforms state-of-the-art methods in kernel-based representation learning.
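As a reading aid, here is a minimal Python sketch of the two-module pipeline the abstract describes, assuming random Fourier features (Rahimi and Recht, 2007) as the approximate kernel method and a ridge-regularized closed-form solve as the pseudoinverse learning step with weight decay. The function names, hyperparameters, and greedy stacking scheme are illustrative assumptions, not the authors' exact IKPILAE formulation.

```python
import numpy as np

def random_fourier_features(X, n_components=256, gamma=0.05, seed=0):
    """Module 1 (assumed): approximate an RBF kernel feature map with
    random Fourier features, z(x) = sqrt(2/D) * cos(x @ W + b)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_components))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

def pseudoinverse_weights(H, T, weight_decay=1e-3):
    """Module 2 (assumed): closed-form weights minimizing
    ||H @ W - T||^2 + weight_decay * ||W||^2, i.e. a ridge-regularized
    pseudoinverse solve; weight_decay = 0 recovers pinv(H) @ T."""
    D = H.shape[1]
    return np.linalg.solve(H.T @ H + weight_decay * np.eye(D), H.T @ T)

def stacked_representation(X, depth=2, n_components=256, gamma=0.05):
    """Greedy layer-wise stacking: each building block encodes its input
    with random kernel features, fits a decoder in closed form to check
    reconstruction, and passes the hidden code to the next block."""
    rep = X
    for layer in range(depth):
        H = random_fourier_features(rep, n_components, gamma, seed=layer)
        W_dec = pseudoinverse_weights(H, rep)      # decoder, no iterative training
        mse = np.mean((H @ W_dec - rep) ** 2)      # reconstruction diagnostic
        print(f"layer {layer}: reconstruction MSE = {mse:.4f}")
        rep = H                                    # hidden code feeds the next block
    return rep

if __name__ == "__main__":
    X = np.random.default_rng(42).normal(size=(1000, 20))
    Z = stacked_representation(X)
    print("learned representation shape:", Z.shape)
```

In this sketch, the weight_decay term plays the role of the regularizer mentioned in the abstract: it replaces the plain pseudoinverse solution with the better-conditioned ridge solution, which is what keeps the closed-form solve stable on large, noisy random feature maps.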


Cited By

• (2024) Exploring the landscape of compressed DeepFakes: Generation, dataset and detection. Neurocomputing, Article 129116. https://doi.org/10.1016/j.neucom.2024.129116. Online publication date: Dec 2024.

Published In

ICRAI '21: Proceedings of the 7th International Conference on Robotics and Artificial Intelligence
November 2021, 135 pages
ISBN: 9781450385855
DOI: 10.1145/3505688

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. Autoencoder
2. Kernel approximation
3. Pseudoinverse learning algorithm
4. Representation learning

Qualifiers

• Research-article
• Research
• Refereed limited

Funding Sources

• The National Key Research and Development Program of China

Conference

ICRAI 2021


Article Metrics

• Downloads (last 12 months): 10
• Downloads (last 6 weeks): 0

Reflects downloads up to 10 Dec 2024

