On-line learning for very large data sets: Research Articles

Published: 01 March 2005

Abstract

The design of very large learning systems presents many unsolved challenges. Consider, for instance, a system that ‘watches’ television for a few weeks and learns to enumerate the objects present in the images it sees. Most current learning algorithms do not scale well enough to handle such massive quantities of data. Experience suggests that stochastic learning algorithms are best suited to such tasks. This is at first surprising because stochastic learning algorithms optimize the training error rather slowly. Our paper reconsiders convergence speed in terms of how fast a learning algorithm optimizes the testing error. This reformulation shows the superiority of well-designed stochastic learning algorithms. Copyright © 2005 John Wiley & Sons, Ltd.
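The argument above rests on the constant per-update cost of stochastic methods: each step touches a single example, so the cost per update is independent of the data set size, and a fixed time budget lets a stochastic algorithm see far more data than a batch method. A minimal sketch (not from the paper; the function name, the learning-rate schedule, and the synthetic data are all illustrative) of stochastic gradient descent on least-squares regression:

```python
import random

def sgd_least_squares(data, lr0=0.1, epochs=1, seed=0):
    """Fit y ~ w*x + b by stochastic gradient descent.

    `data` is a list of (x, y) pairs; each update uses ONE example,
    so the per-step cost does not grow with len(data).
    """
    rng = random.Random(seed)
    w, b, t = 0.0, 0.0, 0
    for _ in range(epochs):
        rng.shuffle(data)                 # visit examples in random order
        for x, y in data:
            lr = lr0 / (1.0 + 0.01 * t)   # slowly decaying step size
            err = (w * x + b) - y         # residual on one example
            w -= lr * err * x             # gradient of 0.5*err^2 w.r.t. w
            b -= lr * err                 # ... and w.r.t. b
            t += 1
    return w, b

# Synthetic data drawn from y = 2x + 1 with small Gaussian noise.
rng = random.Random(42)
xs = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
data = [(x, 2.0 * x + 1.0 + 0.01 * rng.gauss(0.0, 1.0)) for x in xs]
w, b = sgd_least_squares(data, epochs=3)
print(w, b)
```

After a few passes the estimates land close to the true slope 2 and intercept 1, even though no single update uses more than one example. A full-batch method would compute the gradient over all 2000 points per step: more progress per update on the training error, but far fewer updates per unit of time on very large data sets, which is the trade-off the paper analyses.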


    Published In

    Applied Stochastic Models in Business and Industry, Volume 21, Issue 2: Statistical Learning
    March 2005, 126 pages
    ISSN: 1524-1904
    EISSN: 1526-4025

    Publisher

    John Wiley and Sons Ltd.

    United Kingdom

    Author Tags

    1. convergence speed
    2. learning
    3. online learning
    4. stochastic optimization


    Cited By

    • (2020) Tight nonparametric convergence rates for stochastic gradient descent under the noiseless linear model. Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 2576–2586. DOI: 10.5555/3495724.3495941. Online publication date: 6-Dec-2020.
    • (2018) Variance Amplification of Accelerated First-Order Algorithms for Strongly Convex Quadratic Optimization Problems. 2018 IEEE Conference on Decision and Control (CDC), pp. 5753–5758. DOI: 10.1109/CDC.2018.8619183. Online publication date: 17-Dec-2018.
    • (2017) Sharp minima can generalize for deep nets. Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1019–1028. DOI: 10.5555/3305381.3305487. Online publication date: 6-Aug-2017.
    • (2017) An Online Causal Inference Framework for Modeling and Designing Systems Involving User Preferences. Journal of Electrical and Computer Engineering, vol. 2017. DOI: 10.1155/2017/1048385. Online publication date: 22-Jun-2017.
    • (2017) A double incremental aggregated gradient method with linear convergence rate for large-scale optimization. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4696–4700. DOI: 10.1109/ICASSP.2017.7953047. Online publication date: 5-Mar-2017.
    • (2017) An incremental quasi-Newton method with a local superlinear convergence rate. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4039–4043. DOI: 10.1109/ICASSP.2017.7952915. Online publication date: 5-Mar-2017.
    • (2017) A Diagonal-Augmented quasi-Newton method with application to factorization machines. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2671–2675. DOI: 10.1109/ICASSP.2017.7952641. Online publication date: 5-Mar-2017.
    • (2017) Highly efficient hierarchical online nonlinear regression using second order methods. Signal Processing, 137:C, pp. 22–32. DOI: 10.1016/j.sigpro.2017.01.029. Online publication date: 1-Aug-2017.
    • (2016) Performance-portable autotuning of OpenCL kernels for convolutional layers of deep neural networks. Proceedings of the Workshop on Machine Learning in High Performance Computing Environments, pp. 9–18. DOI: 10.5555/3018874.3018876. Online publication date: 13-Nov-2016.
    • (2016) Expected tensor decomposition with stochastic gradient descent. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 1919–1925. DOI: 10.5555/3016100.3016167. Online publication date: 12-Feb-2016.
