DOI: 10.1007/978-3-030-86380-7_35
Article

Training Many-to-Many Recurrent Neural Networks with Target Propagation

Published: 14 September 2021

Abstract

Deep neural networks trained with back-propagation have been the driving force behind progress in fields such as computer vision and natural language processing. However, back-propagation has often been criticized for its biological implausibility. More biologically plausible alternatives, such as target propagation and feedback alignment, have been proposed, but most of these learning algorithms were originally designed and tested for feedforward networks, and their ability to train recurrent networks and arbitrary computation graphs is neither fully studied nor well understood. In this paper, we propose a learning procedure based on target propagation for training multi-output recurrent networks. This opens the door to extending such biologically plausible methods into general learning algorithms for arbitrary computation graphs.
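To make the idea concrete, below is a minimal sketch of how difference target propagation could be adapted to a many-to-many vanilla RNN: each hidden state receives a local target that combines the target propagated back from the next step with the error of its own per-step output, and all weight updates are local. Every name here (f, g, W, U, V, the step sizes) is an illustrative assumption, not the paper's actual implementation, and the training of the inverse mapping g is omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_h, n_out, T = 4, 8, 3, 5

    # Forward, readout, and (approximate) inverse weights -- all illustrative.
    W = rng.normal(0, 0.1, (n_h, n_h))    # hidden-to-hidden
    U = rng.normal(0, 0.1, (n_h, n_in))   # input-to-hidden
    V = rng.normal(0, 0.1, (n_out, n_h))  # per-step readout (many-to-many)
    Wg = rng.normal(0, 0.1, (n_h, n_h))   # inverse mapping: h_t -> h_{t-1}
    Ug = rng.normal(0, 0.1, (n_h, n_in))

    def f(h_prev, x):  # forward recurrence
        return np.tanh(W @ h_prev + U @ x)

    def g(h, x):       # learned inverse of f (its training is omitted here)
        return np.tanh(Wg @ h + Ug @ x)

    xs = rng.normal(0, 1, (T, n_in))
    ys = rng.normal(0, 1, (T, n_out))  # one target output per time step

    # Forward pass, storing hidden states h_0 .. h_T.
    hs = [np.zeros(n_h)]
    for t in range(T):
        hs.append(f(hs[t], xs[t]))

    # Local hidden targets, computed backwards without back-propagation.
    step = 0.1
    targets = [None] * (T + 1)
    targets[T] = hs[T] - step * V.T @ (V @ hs[T] - ys[T - 1])
    for t in range(T - 1, 0, -1):
        # Difference-correction term propagates the target one step back ...
        prop = g(targets[t + 1], xs[t]) + hs[t] - g(hs[t + 1], xs[t])
        # ... and each step also pulls toward its own output target.
        targets[t] = prop - step * V.T @ (V @ hs[t] - ys[t - 1])

    # Purely local weight updates: move each h_t toward its target.
    lr = 0.01
    for t in range(1, T + 1):
        pre = W @ hs[t - 1] + U @ xs[t - 1]
        delta = (targets[t] - hs[t]) * (1 - np.tanh(pre) ** 2)
        W += lr * np.outer(delta, hs[t - 1])
        U += lr * np.outer(delta, xs[t - 1])
        V += lr * np.outer(ys[t - 1] - V @ hs[t], hs[t])

In a full implementation the inverse g would be trained alongside f with a reconstruction loss and the updates applied over many sequences; this fragment only illustrates the locality of the credit assignment.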




        Published In

        Artificial Neural Networks and Machine Learning – ICANN 2021: 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, September 14–17, 2021, Proceedings, Part IV
        Sep 2021
        715 pages
ISBN: 978-3-030-86379-1
DOI: 10.1007/978-3-030-86380-7

        Publisher

        Springer-Verlag

        Berlin, Heidelberg


        Author Tags

        1. Artificial neural networks
        2. Recurrent neural networks
        3. Biologically plausible learning
        4. Target propagation
        5. Backpropagation

