Accelerated Block-Sparsity-Aware Matrix Reordering for Leveraging Tensor Cores in Sparse Matrix-Multivector Multiplication

  • Conference paper
Euro-Par 2024: Parallel Processing (Euro-Par 2024)

Abstract

Sparse Matrix-Multivector (SpMM) multiplication is a key kernel in deep learning models and scientific computing applications. However, achieving high performance for SpMM is challenging due to the irregular distribution of non-zero elements and the resulting irregular memory access patterns. Several sparse matrix reordering algorithms have therefore been developed to improve data locality for SpMM. However, existing approaches for reordering sparse matrices have not considered block sparsity during the reordering process. In this paper, we present a novel sparse matrix reordering algorithm that considers block sparsity to enhance data locality for SpMM on Tensor Cores. To alleviate the main bottleneck of reordering, the substantial computation required to measure similarity between rows, we develop an efficient GPU implementation that adopts dynamic parallelism and synchronization schemes. Experimental results on a large number of sparse matrices demonstrate the effectiveness of our reordering algorithm and the benefits of leveraging Tensor Cores for SpMM. Our approach achieves significant performance improvements over various state-of-the-art SpMM implementations.
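
To ground the abstract's core idea, here is a minimal, hypothetical CPU sketch of similarity-based row reordering in Python. It is not the authors' algorithm: the choice of Jaccard similarity over per-row column-index sets and the greedy chaining pass are illustrative assumptions only. The sketch also makes the stated bottleneck concrete: the pairwise row-similarity comparisons are quadratic in the number of rows, which is precisely the computation the paper offloads to the GPU via dynamic parallelism.

def jaccard(a, b):
    """Jaccard similarity of two rows' column-index sets (illustrative choice)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def greedy_reorder(rows):
    """Greedily chain rows so that consecutive rows share column patterns.

    rows: list of sets; rows[i] holds the column indices of the non-zeros
    in row i. Returns a permutation (list of original row indices).
    Placing similar rows adjacently tends to densify the fixed-size 2D
    blocks later handed to Tensor Cores. Note the O(n^2) similarity
    evaluations: this is the reordering bottleneck the paper targets.
    """
    remaining = set(range(len(rows)))
    cur = remaining.pop()          # arbitrary starting row
    order = [cur]
    while remaining:
        # Pick the unplaced row most similar to the last placed one.
        nxt = max(remaining, key=lambda r: jaccard(rows[cur], rows[r]))
        remaining.remove(nxt)
        order.append(nxt)
        cur = nxt
    return order

if __name__ == "__main__":
    # Toy 6x8 sparse matrix given as per-row column-index sets.
    rows = [{0, 1, 2}, {5, 6}, {0, 2, 3}, {5, 7}, {1, 2}, {6, 7}]
    print("row permutation:", greedy_reorder(rows))

After the permutation is applied, rows with overlapping column patterns sit next to each other, so fixed-size row blocks cut from the reordered matrix contain denser tiles; denser tiles are what make dense block multiplication on Tensor Cores profitable.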

E. Lee and Y. Han contributed equally to this work.

References

  1. Ahrens, P., Boman, E.G.: On optimal partitioning for sparse matrices in variable block row format. arXiv preprint arXiv:2005.12414 (2020)

  2. Buluç, A., Fineman, J.T., Frigo, M., Gilbert, J.R., Leiserson, C.E.: Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks. In: Proceedings of the Twenty-First Annual Symposium on Parallelism in Algorithms and Architectures, pp. 233–244 (2009)

  3. Choi, J.W., Singh, A., Vuduc, R.W.: Model-driven autotuning of sparse matrix-vector multiply on GPUs. ACM SIGPLAN Notices 45(5), 115–126 (2010)

  4. Davis, T.A., Hu, Y.: The University of Florida Sparse Matrix Collection. ACM Trans. Math. Softw. (TOMS) 38(1), 1–25 (2011)

  5. Gale, T., Zaharia, M., Young, C., Elsen, E.: Sparse GPU kernels for deep learning. In: SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–14. IEEE (2020)

  6. Hoefler, T., Alistarh, D., Ben-Nun, T., Dryden, N., Peste, A.: Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res. 22(1), 10882–11005 (2021)

  7. Hong, C., Sukumaran-Rajam, A., Nisa, I., Singh, K., Sadayappan, P.: Adaptive sparse tiling for sparse matrix multiplication. In: Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming, pp. 300–314 (2019)

  8. Huang, G., Dai, G., Wang, Y., Yang, H.: GE-SpMM: general-purpose sparse matrix-matrix multiplication on GPUs for graph neural networks. In: SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–12. IEEE (2020)

  9. Jiang, P., Hong, C., Agrawal, G.: A novel data transformation and execution strategy for accelerating sparse matrix multiplication on GPUs. In: Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 376–388 (2020)

  10. Jouppi, N.P., et al.: In-datacenter performance analysis of a tensor processing unit. In: Proceedings of the 44th Annual International Symposium on Computer Architecture, pp. 1–12 (2017)

  11. Labini, P.S., Bernaschi, M., Nutt, W., Silvestri, F., Vella, F.: Blocking sparse matrices to leverage dense-specific multiplication. In: 2022 IEEE/ACM Workshop on Irregular Applications: Architectures and Algorithms (IA3), pp. 19–24. IEEE (2022)

  12. Langr, D., Tvrdik, P.: Evaluation criteria for sparse matrix storage formats. IEEE Trans. Parallel Distrib. Syst. 27(2), 428–440 (2015)

  13. Lee, E., Han, Y., Moon, G.E.: Accelerated block-sparsity-aware matrix reordering for leveraging tensor cores in sparse matrix-multivector multiplication. In: 30th International European Conference on Parallel and Distributed Computing. Zenodo (2024). https://doi.org/10.5281/zenodo.11579181

  14. NVIDIA: NVIDIA A100 Tensor Core GPU architecture. Technical report (2020). https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf

  15. NVIDIA: Accelerating matrix multiplication with block sparse format and NVIDIA Tensor Cores. Technical report (2021). https://developer.nvidia.com/blog/accelerating-matrix-multiplication-with-block-sparse-format-and-nvidia-tensor-cores/

  16. NVIDIA: The API reference guide for cuBLAS, the CUDA Basic Linear Algebra Subroutine library. Technical report (2024). https://docs.nvidia.com/cuda/cublas/

  17. NVIDIA: The API reference guide for cuSPARSE, the CUDA sparse matrix library. Technical report (2024). https://docs.nvidia.com/cuda/cusparse/index.html

  18. NVIDIA: CUDA C++ Programming Guide. Technical report (2024). https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf

  19. Saad, Y.: Finding exact and approximate block structures for ILU preconditioning. SIAM J. Sci. Comput. 24(4), 1107–1123 (2003)

  20. Wang, Y., Feng, B., Wang, Z., Huang, G., Ding, Y.: TC-GNN: bridging sparse GNN computation and dense tensor cores on GPUs. In: 2023 USENIX Annual Technical Conference (USENIX ATC 2023), pp. 149–164 (2023)

Acknowledgements and Artifact Availability

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-RS-2024-00359076 and NRF-2021R1G1A1092597). The code is available in the Zenodo repository [13].

Author information

Corresponding author

Correspondence to Gordon Euhyun Moon.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lee, E., Han, Y., Moon, G.E. (2024). Accelerated Block-Sparsity-Aware Matrix Reordering for Leveraging Tensor Cores in Sparse Matrix-Multivector Multiplication. In: Carretero, J., Shende, S., Garcia-Blas, J., Brandic, I., Olcoz, K., Schreiber, M. (eds) Euro-Par 2024: Parallel Processing. Euro-Par 2024. Lecture Notes in Computer Science, vol 14803. Springer, Cham. https://doi.org/10.1007/978-3-031-69583-4_1

  • DOI: https://doi.org/10.1007/978-3-031-69583-4_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-69582-7

  • Online ISBN: 978-3-031-69583-4

  • eBook Packages: Computer Science, Computer Science (R0)
