Abstract
Deep learning is now prevalent in many fields, and its primary workload is General Matrix-Matrix Multiplication (GEMM). The TPU is a state-of-the-art GEMM accelerator, but it does not support sparsity. In this paper, we design and implement SADD, a systolic array accelerator that supports sparsity and dynamic dataflow. First, we propose Group-Structure-Maintained Compression (GSMC). Then, building on GSMC, we propose the Sparsity-supported Weight Stationary (SWS) and Sparsity-supported Input Stationary (SIS) dataflows to exploit sparsity on systolic arrays. Finally, by combining SIS and SWS, we propose the Sparsity-supported Dynamic Dataflow (SDD), which switches dataflow according to the computing environment. Experimental results show that SDD in SADD performs efficiently in any computing environment. When running AlexNet, SADD achieves \(2 \times\) the performance of the TPU while incurring only a small additional hardware overhead.
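The page itself gives no implementation details, so the following Python sketch is only a rough illustration of the two ideas named in the abstract: a group-structure-maintained compression that compacts nonzeros within fixed-size column groups (so each group keeps its position and stays aligned to a systolic-array column), and a dynamic choice between weight-stationary and input-stationary dataflows. The function names, the group width, and the selection heuristic are all assumptions made for illustration, not the authors' actual design.

```python
import numpy as np

# Hypothetical sketch of Group-Structure-Maintained Compression (GSMC):
# nonzeros are compacted within fixed-size groups of columns, so each
# group retains its place in the matrix. The group width and the
# (value, offset) encoding are assumptions, not the paper's format.

GROUP = 4  # assumed group width (e.g., matching a block of PE columns)

def gsmc_compress(w: np.ndarray, group: int = GROUP):
    """Compress each row group-by-group as (value, in-group offset) pairs."""
    rows, cols = w.shape
    assert cols % group == 0, "columns must divide evenly into groups"
    compressed = []
    for r in range(rows):
        row = []
        for g in range(0, cols, group):
            block = w[r, g:g + group]
            nz = [(float(v), off) for off, v in enumerate(block) if v != 0]
            row.append(nz)  # one list per group preserves the group structure
        compressed.append(row)
    return compressed

def choose_dataflow(m: int, k: int, n: int) -> str:
    """Toy stand-in for SDD's dynamic choice: keep the larger operand stationary.

    If the weight matrix (k x n) dominates, use weight-stationary (SWS);
    if the input matrix (m x k) dominates, use input-stationary (SIS).
    The paper's real selection policy may differ.
    """
    return "SWS" if k * n >= m * k else "SIS"

if __name__ == "__main__":
    w = np.array([[0, 2, 0, 0, 1, 0, 0, 3],
                  [4, 0, 0, 5, 0, 0, 6, 0]], dtype=float)
    print(gsmc_compress(w))
    print(choose_dataflow(m=128, k=256, n=64))  # -> "SWS" for this shape
```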
This work is supported in part by the National Key R&D Project No. 2021YFB0300300, the NSFC (62172430, 61872374), and the NSF of Hunan Province (2021JJ10052, 2022JJ10064).
B. Wang and S. Ma contributed equally to this research.
Copyright information
© 2022 IFIP International Federation for Information Processing
Cite this paper
Wang, B., Ma, S., Liu, Z., Huang, L., Yuan, Y., Dai, Y. (2022). SADD: A Novel Systolic Array Accelerator with Dynamic Dataflow for Sparse GEMM in Deep Learning. In: Liu, S., Wei, X. (eds) Network and Parallel Computing. NPC 2022. Lecture Notes in Computer Science, vol 13615. Springer, Cham. https://doi.org/10.1007/978-3-031-21395-3_4
Print ISBN: 978-3-031-21394-6
Online ISBN: 978-3-031-21395-3