
Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7552)

Included in the following conference series: International Conference on Artificial Neural Networks (ICANN)


Abstract

In this paper we define a kernel function that is the dual-space equivalent of infinitely large sparse threshold unit networks. We first explain how to couple a kernel function to an infinite recurrent neural network, and then apply this construction to sparse threshold unit networks. We validate the resulting kernel with a theoretical analysis and an illustrative signal processing task.
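The dual-space construction the abstract alludes to can be made concrete with a small numerical sketch. The Python snippet below follows the recurrent-kernel recursion of Hermans and Schrauwen (Recurrent kernel machines: computing with infinite echo state networks, Neural Computation 24(1), 2011), specialized to sign (symmetric threshold) activations, for which the expectation over i.i.d. Gaussian weights has the closed form E[sign(z) sign(z')] = (2/π) arcsin(ρ). All names here (recurrent_sign_kernel, sigma_w2, sigma_v2) are illustrative rather than taken from the paper, and the sparsity that is the paper's actual subject is not modeled; this is a sketch of the general idea, not the authors' kernel.

```python
import numpy as np

def recurrent_sign_kernel(u, v, sigma_w2=1.0, sigma_v2=1.0):
    """Dual-space recursion for an infinite random recurrent network of
    sign units driven by input sequences u and v (arrays of shape (T, d)).

    sigma_w2 / sigma_v2 are the (hypothetical) variances of the i.i.d.
    Gaussian recurrent and input weights. Returns the kernel between the
    two network states after the final time step.
    """
    k = 0.0  # state kernel k_0; sign states have self-kernel identically 1
    for u_t, v_t in zip(u, v):
        # Correlation of the pre-activations induced by the two inputs.
        cov = sigma_w2 * k + sigma_v2 * np.dot(u_t, v_t)
        var_u = sigma_w2 + sigma_v2 * np.dot(u_t, u_t)
        var_v = sigma_w2 + sigma_v2 * np.dot(v_t, v_t)
        rho = np.clip(cov / np.sqrt(var_u * var_v), -1.0, 1.0)
        # Closed-form Gaussian expectation for sign nonlinearities.
        k = (2.0 / np.pi) * np.arcsin(rho)
    return k

# Toy usage: kernel between two random scalar input sequences.
rng = np.random.default_rng(0)
u, v = rng.standard_normal((10, 1)), rng.standard_normal((10, 1))
print(recurrent_sign_kernel(u, v))
```

The appeal of the dual view is visible even in this toy version: one time step of the infinitely large network collapses to updating a single scalar correlation, so a kernel machine can emulate the network without ever instantiating it.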





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hermans, M., Schrauwen, B. (2012). Infinite Sparse Threshold Unit Networks. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds) Artificial Neural Networks and Machine Learning – ICANN 2012. ICANN 2012. Lecture Notes in Computer Science, vol 7552. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33269-2_77


  • DOI: https://doi.org/10.1007/978-3-642-33269-2_77

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33268-5

  • Online ISBN: 978-3-642-33269-2

  • eBook Packages: Computer Science, Computer Science (R0)
