DOI: 10.1145/3407197.3407201
ICONS Conference Proceedings · Short Paper

Effective Pruning of Binary Activation Neural Networks

Published: 28 July 2020

Abstract

Deep learning networks have become a vital tool for image and data processing in deployed and edge applications. Resource constraints, particularly low power budgets, have motivated methods and devices for efficient on-edge inference. Two promising methods are reduced-precision communication networks (e.g., binary activation spiking neural networks) and weight pruning. In this paper, we provide a preliminary exploration of combining these two methods, specifically in-training weight pruning of Whetstone networks, to achieve deep networks with both sparse weights and binary activations.
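
To make the combination concrete, below is a minimal toy sketch, not the authors' implementation, of the two ingredients together: a hidden activation that is gradually sharpened toward a binary step during training, and periodic magnitude-based weight pruning applied in-training. The temperature-sharpened sigmoid is an assumed stand-in for Whetstone's sharpened bounded activation, and the prune-20%-of-live-weights-every-1000-steps schedule is likewise an illustrative choice, not one taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a minimal task that requires a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; the masks carry the pruning state (1 = alive, 0 = pruned).
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
m1, m2 = np.ones_like(W1), np.ones_like(W2)

def soft_step(z, k):
    # Sigmoid with sharpness k; as k grows it approaches a binary step.
    # (An assumed stand-in for Whetstone's sharpened bounded activation.)
    return 1.0 / (1.0 + np.exp(-k * z))

lr, steps = 0.5, 4000
for t in range(steps):
    k = 1.0 + 20.0 * t / steps  # anneal the hidden activation toward binary
    # Forward pass through the masked (pruned) weights.
    h = soft_step(X @ (W1 * m1) + b1, k)
    out = soft_step(h @ (W2 * m2) + b2, 1.0)
    # Backprop of squared error; gradients flow through the still-soft step.
    d2 = (out - y) * out * (1.0 - out)
    d1 = (d2 @ (W2 * m2).T) * k * h * (1.0 - h)
    W2 -= lr * (h.T @ d2) * m2; b2 -= lr * d2.sum(0)
    W1 -= lr * (X.T @ d1) * m1; b1 -= lr * d1.sum(0)
    # In-training pruning: every 1000 steps, drop the smallest 20% of
    # surviving weights by magnitude (an illustrative schedule).
    if t > 0 and t % 1000 == 0:
        for W, m in ((W1, m1), (W2, m2)):
            live = np.abs(W[m == 1.0])
            m[np.abs(W) < np.quantile(live, 0.2)] = 0.0
            W *= m  # zero pruned weights; masked updates keep them zero

# Deployment-style inference: hard binary activations, sparse weights.
h_bin = (X @ (W1 * m1) + b1 > 0).astype(float)
pred = (h_bin @ (W2 * m2) + b2 > 0).astype(float)
print("W1 sparsity:", 1.0 - m1.mean(), "predictions:", pred.ravel())

Annealing the sharpness keeps gradients informative early in training while converging to the hard binary activations needed at inference, and pruning during training, rather than once at the end, lets the surviving weights adapt after each pruning event.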



Published In

ICONS 2020: International Conference on Neuromorphic Systems 2020
July 2020
186 pages
ISBN:9781450388511
DOI:10.1145/3407197
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Binary Networks
  2. Deep Learning
  3. Pruning
  4. Whetstone

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

ICONS 2020

Acceptance Rates

Overall Acceptance Rate 13 of 22 submissions, 59%

