
SADD: A Novel Systolic Array Accelerator with Dynamic Dataflow for Sparse GEMM in Deep Learning

  • Conference paper
  • In: Network and Parallel Computing (NPC 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13615)

Abstract

Nowadays, deep learning is prevalent in many fields. The primary workload in deep learning is General Matrix-matrix Multiplication (GEMM). The TPU is the state-of-the-art GEMM accelerator, but it does not support sparsity. In this paper, we design and implement SADD, a systolic array accelerator that supports sparsity and dynamic dataflow. First, we propose the Group-Structure-Maintained Compression (GSMC). Then, based on the GSMC, we propose the Sparsity-supported Weight Stationary Dataflow (SWS) and the Sparsity-supported Input Stationary Dataflow (SIS) to exploit sparsity on systolic arrays. Finally, by combining the SIS and SWS, we propose the Sparsity-supported Dynamic Dataflow (SDD), which changes the dataflow according to the computing environment. The experimental results show that the SDD in the SADD performs efficiently in any computing environment. When running AlexNet, the performance of the SADD is \(2 \times\) better than that of the TPU. In addition, the SADD brings only a small additional hardware overhead.

This work is supported in part by the National Key R&D Project No. 2021YFB0300300, the NSFC (62172430, 61872374), and the NSF of Hunan Province (2021JJ10052, 2022JJ10064).

B. Wang and S. Ma contributed equally to this research.
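To make the sparsity argument in the abstract concrete, the following is a minimal, illustrative NumPy sketch of a group-structured sparse weight format and a GEMM that skips the stored zeros. It is not the paper's GSMC format or the SDD dataflow; the group size, per-row layout, and function names are assumptions chosen only to show why compressing weights while keeping a regular group structure lets a regular, systolic-style compute schedule skip zero operands.

```python
import numpy as np

def compress_groups(weights, group_size=4):
    """Compress a weight matrix group by group along each row, keeping only
    the non-zero entries plus their positions inside each group.

    This is NOT the paper's GSMC format; it only illustrates the general idea
    of a compression scheme that preserves group structure so operands can
    still be streamed in a regular pattern.
    """
    rows, cols = weights.shape
    assert cols % group_size == 0, "illustrative sketch assumes whole groups"
    compressed = []
    for r in range(rows):
        row_groups = []
        for g in range(0, cols, group_size):
            group = weights[r, g:g + group_size]
            nz_idx = np.nonzero(group)[0]            # positions within the group
            row_groups.append((nz_idx, group[nz_idx]))
        compressed.append(row_groups)
    return compressed

def sparse_gemm(compressed, x, group_size=4):
    """Multiply the compressed weights with a dense input matrix x,
    skipping the zero entries removed during compression."""
    rows = len(compressed)
    out = np.zeros((rows, x.shape[1]))
    for r, row_groups in enumerate(compressed):
        for g, (nz_idx, nz_val) in enumerate(row_groups):
            base = g * group_size                    # column offset of this group
            for idx, val in zip(nz_idx, nz_val):
                out[r] += val * x[base + idx]        # only non-zero weights contribute
    return out

# Usage: a small weight matrix with many zeros
W = np.array([[0.0, 2.0, 0.0, 1.0],
              [3.0, 0.0, 0.0, 0.0]])
X = np.random.rand(4, 3)
C = compress_groups(W, group_size=4)
assert np.allclose(sparse_gemm(C, X, group_size=4), W @ X)
```

In a hardware dataflow such as the SWS or SIS described in the abstract, the same skipping would happen while operands are streamed through the systolic array rather than in a software loop.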



Author information

Corresponding author

Correspondence to Sheng Ma.


Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper


Cite this paper

Wang, B., Ma, S., Liu, Z., Huang, L., Yuan, Y., Dai, Y. (2022). SADD: A Novel Systolic Array Accelerator with Dynamic Dataflow for Sparse GEMM in Deep Learning. In: Liu, S., Wei, X. (eds) Network and Parallel Computing. NPC 2022. Lecture Notes in Computer Science, vol 13615. Springer, Cham. https://doi.org/10.1007/978-3-031-21395-3_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-21395-3_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21394-6

  • Online ISBN: 978-3-031-21395-3

  • eBook Packages: Computer Science, Computer Science (R0)
