Abstract
Sparse matrix-multivector multiplication (SpMM) is a key kernel in deep learning models and scientific computing applications. However, achieving high performance for SpMM is challenging due to the irregular distribution of non-zero elements and the resulting irregular memory access patterns. Several sparse matrix reordering algorithms have therefore been developed to improve data locality for SpMM, but existing reordering approaches do not consider block sparsity during the reordering process. In this paper, we present a novel sparse matrix reordering algorithm that takes block sparsity into account to enhance data locality for SpMM on Tensor Cores. To alleviate the main bottleneck of reordering, the substantial computation required to measure similarity between rows, we develop an efficient GPU implementation that adapts dynamic parallelism and synchronization schemes. Experimental results on a large number of sparse matrices demonstrate the effectiveness of our reordering algorithm and the benefits of leveraging Tensor Cores for SpMM. Our approach achieves significant performance improvements over various state-of-the-art SpMM implementations.
E. Lee and Y. Han—These authors contributed equally to this work.
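The core idea of block-sparsity-aware reordering, measuring how similar the block-column patterns of two rows are and grouping similar rows so their non-zeros cluster into dense tiles suitable for Tensor Cores, can be illustrated with a minimal CPU-side sketch. The Jaccard similarity over block columns, the greedy nearest-neighbor ordering, and the block_width parameter below are illustrative assumptions for exposition; they are not the exact algorithm or the GPU implementation with dynamic parallelism described in the paper.

```cpp
// Minimal CPU sketch of block-sparsity-aware row reordering (illustration only).
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <set>
#include <vector>

// CSR matrix: col[row_ptr[i] .. row_ptr[i+1]) holds the column indices of row i.
struct CSR {
    int n_rows;
    std::vector<int> row_ptr;
    std::vector<int> col;
};

// Set of block columns (column index / block_width) touched by a row.
static std::set<int> block_cols(const CSR& A, int row, int block_width) {
    std::set<int> s;
    for (int k = A.row_ptr[row]; k < A.row_ptr[row + 1]; ++k)
        s.insert(A.col[k] / block_width);
    return s;
}

// Jaccard similarity between two rows measured on block columns, so rows that
// would fill the same dense tiles are considered similar.
static double block_similarity(const std::set<int>& a, const std::set<int>& b) {
    if (a.empty() && b.empty()) return 0.0;
    std::vector<int> inter;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(inter));
    double uni = static_cast<double>(a.size() + b.size() - inter.size());
    return static_cast<double>(inter.size()) / uni;
}

// Greedy reordering: repeatedly pick the unplaced row most similar to the last
// placed row, so consecutive rows share block columns and form denser tiles.
std::vector<int> reorder(const CSR& A, int block_width) {
    std::vector<std::set<int>> sig(A.n_rows);
    for (int i = 0; i < A.n_rows; ++i) sig[i] = block_cols(A, i, block_width);

    std::vector<int> order;
    std::vector<bool> used(A.n_rows, false);
    int cur = 0;                 // start from row 0 for simplicity
    used[0] = true;
    order.push_back(0);
    for (int step = 1; step < A.n_rows; ++step) {
        int best = -1;
        double best_sim = -1.0;
        for (int j = 0; j < A.n_rows; ++j) {
            if (used[j]) continue;
            double s = block_similarity(sig[cur], sig[j]);
            if (s > best_sim) { best_sim = s; best = j; }
        }
        used[best] = true;
        order.push_back(best);
        cur = best;
    }
    return order;
}

int main() {
    // Toy 4x8 matrix: rows 0 and 2 share block columns, rows 1 and 3 share others.
    CSR A{4, {0, 2, 4, 6, 8}, {0, 1, 6, 7, 0, 2, 5, 7}};
    for (int r : reorder(A, /*block_width=*/4)) std::printf("%d ", r);  // prints: 0 2 1 3
    std::printf("\n");
    return 0;
}
```

The all-pairs similarity loop above is exactly the kind of computation the paper identifies as the reordering bottleneck and offloads to the GPU; the sketch keeps it serial only for clarity.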
References
Ahrens, P., Boman, E.G.: On optimal partitioning for sparse matrices in variable block row format. arXiv preprint arXiv:2005.12414 (2020)
Buluç, A., Fineman, J.T., Frigo, M., Gilbert, J.R., Leiserson, C.E.: Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks. In: Proceedings of the Twenty-First Annual Symposium on Parallelism in Algorithms and Architectures, pp. 233–244 (2009)
Choi, J.W., Singh, A., Vuduc, R.W.: Model-driven autotuning of sparse matrix-vector multiply on GPUs. ACM SIGPLAN Notices 45(5), 115–126 (2010)
Davis, T.A., Hu, Y.: The University of Florida sparse matrix collection. ACM Trans. Math. Softw. (TOMS) 38(1), 1–25 (2011)
Gale, T., Zaharia, M., Young, C., Elsen, E.: Sparse GPU kernels for deep learning. In: SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–14. IEEE (2020)
Hoefler, T., Alistarh, D., Ben-Nun, T., Dryden, N., Peste, A.: Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res. 22(1), 10882–11005 (2021)
Hong, C., Sukumaran-Rajam, A., Nisa, I., Singh, K., Sadayappan, P.: Adaptive sparse tiling for sparse matrix multiplication. In: Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming, pp. 300–314 (2019)
Huang, G., Dai, G., Wang, Y., Yang, H.: GE-SpMM: general-purpose sparse matrix-matrix multiplication on GPUs for graph neural networks. In: SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–12. IEEE (2020)
Jiang, P., Hong, C., Agrawal, G.: A novel data transformation and execution strategy for accelerating sparse matrix multiplication on GPUs. In: Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 376–388 (2020)
Jouppi, N.P., et al.: In-datacenter performance analysis of a tensor processing unit. In: Proceedings of the 44th Annual International Symposium on Computer Architecture, pp. 1–12 (2017)
Labini, P.S., Bernaschi, M., Nutt, W., Silvestri, F., Vella, F.: Blocking sparse matrices to leverage dense-specific multiplication. In: 2022 IEEE/ACM Workshop on Irregular Applications: Architectures and Algorithms (IA3), pp. 19–24. IEEE (2022)
Langr, D., Tvrdik, P.: Evaluation criteria for sparse matrix storage formats. IEEE Trans. Parallel Distrib. Syst. 27(2), 428–440 (2015)
Lee, E., Han, Y., Moon, G.E.: Accelerated block-sparsity-aware matrix reordering for leveraging tensor cores in sparse matrix-multivector multiplication. In: 30th International European Conference on Parallel and Distributed Computing. Zenodo (2024). https://doi.org/10.5281/zenodo.11579181
NVIDIA: NVIDIA A100 Tensor Core GPU architecture. Technical report (2020). https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf
NVIDIA: Accelerating matrix multiplication with block sparse format and NVIDIA Tensor Cores. Technical report (2021). https://developer.nvidia.com/blog/accelerating-matrix-multiplication-with-block-sparse-format-and-nvidia-tensor-cores/
NVIDIA: The API reference guide for cuBLAS, the CUDA Basic Linear Algebra Subroutine library. Technical report (2024). https://docs.nvidia.com/cuda/cublas/
NVIDIA: The API reference guide for cuSPARSE, the CUDA sparse matrix library. Technical report (2024). https://docs.nvidia.com/cuda/cusparse/index.html
NVIDIA: CUDA C++ Programming Guide. Technical report (2024). https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf
Saad, Y.: Finding exact and approximate block structures for ILU preconditioning. SIAM J. Sci. Comput. 24(4), 1107–1123 (2003)
Wang, Y., Feng, B., Wang, Z., Huang, G., Ding, Y.: TC-GNN: bridging sparse GNN computation and dense tensor cores on GPUs. In: 2023 USENIX Annual Technical Conference (USENIX ATC 2023), pp. 149–164 (2023)
Acknowledgements and Artifact Availability
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-RS-2024-00359076 and NRF-2021R1G1A1092597). The code is available in the Zenodo repository [13].
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Lee, E., Han, Y., Moon, G.E. (2024). Accelerated Block-Sparsity-Aware Matrix Reordering for Leveraging Tensor Cores in Sparse Matrix-Multivector Multiplication. In: Carretero, J., Shende, S., Garcia-Blas, J., Brandic, I., Olcoz, K., Schreiber, M. (eds) Euro-Par 2024: Parallel Processing. Euro-Par 2024. Lecture Notes in Computer Science, vol 14803. Springer, Cham. https://doi.org/10.1007/978-3-031-69583-4_1
DOI: https://doi.org/10.1007/978-3-031-69583-4_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-69582-7
Online ISBN: 978-3-031-69583-4
eBook Packages: Computer Science, Computer Science (R0)