Skyway: Accelerate Graph Applications with a Dual-Path Architecture and Fine-Grained Data Management

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Graph processing is a vital component of many AI and big data applications. However, due to its poor locality and complex data access patterns, graph processing is also a notorious performance bottleneck for these applications. In this work, we propose to accelerate graph applications by exploiting fine-grained memory access patterns with a dual-path architecture, built on top of existing software-based graph optimizations. We first identify that memory accesses to the offset, edge, and state arrays have distinct locality and distinct impacts on performance. We then introduce the Skyway architecture, which consists of two primary components: 1) a dedicated direct data path between the core and memory that transfers state-array elements efficiently, and 2) data-type-aware, fine-grained memory-side row-buffer hardware serving both the newly designed direct data path and the regular memory-hierarchy data path. Skyway improves overall performance by reducing memory access interference and improving data access efficiency with minimal overhead. We evaluate Skyway on a set of diverse algorithms using large real-world graphs. On a simulated four-core system, Skyway improves performance by 23% on average over the best-performing graph-specialized hardware optimizations.
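To make the access-pattern distinction concrete, the sketch below shows a pull-style traversal over a standard CSR (compressed sparse row) graph. The function name, the PageRank-like accumulation, and the array types are illustrative assumptions rather than the paper's actual code. The offset and edge arrays are streamed sequentially with good spatial locality, while the state array is indexed by neighbor IDs and therefore accessed at effectively random locations; that irregular state-array traffic is what Skyway routes over its dedicated direct data path.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Pull-style iteration over a CSR graph (illustrative sketch, not the paper's code).
// offset[] and edge[] are read in streaming order, so they cache well;
// state[] is read at neighbor-determined indices, producing the irregular
// traffic that benefits from a dedicated core-to-memory data path.
void pull_iteration(const std::vector<uint64_t>& offset,  // |V|+1 entries: start of each adjacency list
                    const std::vector<uint32_t>& edge,    // |E| entries: neighbor vertex IDs
                    const std::vector<float>&    state,   // per-vertex values from the previous iteration
                    std::vector<float>&          next)    // per-vertex values for this iteration
{
    const std::size_t num_vertices = offset.size() - 1;
    for (std::size_t v = 0; v < num_vertices; ++v) {
        float sum = 0.0f;
        // Sequential scan of vertex v's adjacency list (good locality).
        for (uint64_t e = offset[v]; e < offset[v + 1]; ++e) {
            const uint32_t u = edge[e];
            sum += state[u];   // random access into state[] (poor locality)
        }
        next[v] = sum;         // streaming write (good locality)
    }
}
```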

Author information

Corresponding author: Mo Zou (邹 沫).

Ethics declarations

Conflict of Interest: Xian-He Sun is an associate editor of the Journal of Computer Science and Technology and was not involved in the editorial review of this article. All authors declare that there are no other competing interests.

Additional information

This work was supported in part by the U.S. National Science Foundation under Grant Nos. CCF-2008907 and CCF-2029014, the Chinese Academy of Sciences Project for Young Scientists in Basic Research under Grant No. YSBR-029, and the Chinese Academy of Sciences Project for Youth Innovation Promotion Association.

Mo Zou received her Bachelor’s degree in software engineering from Shandong University, Jinan, in 2017, and her Ph.D. degree in computer architecture from the University of Chinese Academy of Sciences, Beijing, in 2023. She is now a postdoctoral researcher at the Institute of Computing Technology, Chinese Academy of Sciences, Beijing. Her research interests include computer architecture and memory systems, with a focus on domain-specific hardware optimization.

Ming-Zhe Zhang is currently an associate professor at the State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing. His research interests include non-volatile memory (NVM), memory-centric architecture, and domain-specific accelerators.

Ru-Jia Wang received her Bachelor’s degree in automation from Zhejiang University, Hangzhou, in 2013, and her M.S. and Ph.D. degrees in electrical and computer engineering from the University of Pittsburgh, Pittsburgh, in 2015 and 2018, respectively. She is now an assistant professor in computer science at the Illinois Institute of Technology, Chicago. Her research interests are in the broader computer architecture and systems area, including scalable, secure, reliable, and high-performance memory systems and architectures.

Xian-He Sun is a University Distinguished Professor and the Ron Hochsprung Endowed Chair of the Department of Computer Science at the Illinois Institute of Technology (Illinois Tech), Chicago. Dr. Sun is an IEEE Fellow and is known for his memory-bounded speedup model, also called Sun-Ni’s Law, for scalable computing. His research interests include high-performance computing, memory and I/O systems, and performance evaluation and optimization. Dr. Sun is the Editor-in-Chief of IEEE Transactions on Parallel and Distributed Systems. He received the Golden Core Award from the IEEE Computer Society in 2017 and the Overseas Outstanding Contributions Award from CCF in 2018. More information about Dr. Sun can be found at his website: www.cs.iit.edu/~scs/sun.

Xiao-Chun Ye received his Ph.D. degree in computer architecture from the Institute of Computing Technology, Chinese Academy of Sciences (CAS), Beijing, in 2010. He is currently a professor and the director of the High-Throughput Computer Research Center at the Institute of Computing Technology, CAS, Beijing. His main research interests include many-core processor architecture and graph accelerators.

Dong-Rui Fan received his Ph.D. degree in computer architecture from the Institute of Computing Technology, Chinese Academy of Sciences (CAS), Beijing, in 2005. He is currently a professor and Ph.D. supervisor at the Institute of Computing Technology, CAS, Beijing. His main research interests include high-throughput and high-performance computer architecture.

Zhi-Min Tang received his B.S. degree from the Department of Computer Science, Nanjing University, Nanjing, in 1985, and his Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences (CAS), Beijing, in 1990, both in computer science. He is currently a professor at the Institute of Computing Technology, CAS, Beijing. His research interests include high-performance computer architecture, parallel processing, and VLSI design.

Cite this article

Zou, M., Zhang, MZ., Wang, RJ. et al. Skyway: Accelerate Graph Applications with a Dual-Path Architecture and Fine-Grained Data Management. J. Comput. Sci. Technol. 39, 871–894 (2024). https://doi.org/10.1007/s11390-023-2939-x
