- Research article, September 2024
Skyway: Accelerate Graph Applications with a Dual-Path Architecture and Fine-Grained Data Management
Journal of Computer Science and Technology (JCST), Volume 39, Issue 4, Pages 871–894. https://doi.org/10.1007/s11390-023-2939-x
Abstract: Graph processing is a vital component of many AI and big data applications. However, due to its poor locality and complex data access patterns, graph processing is also a known performance killer of AI and big data applications. In this work, we ...
- Research article, April 2024
HiHGNN: Accelerating HGNNs Through Parallelism and Data Reusability Exploitation
- Runzhen Xue,
- Dengke Han,
- Mingyu Yan,
- Mo Zou,
- Xiaocheng Yang,
- Duo Wang,
- Wenming Li,
- Zhimin Tang,
- John Kim,
- Xiaochun Ye,
- Dongrui Fan
IEEE Transactions on Parallel and Distributed Systems (TPDS), Volume 35, Issue 7, Pages 1122–1138. https://doi.org/10.1109/TPDS.2024.3394841
Abstract: Heterogeneous graph neural networks (HGNNs) have emerged as powerful algorithms for processing heterogeneous graphs (HetGs), widely used in many critical fields. To capture both structural and semantic information in HetGs, HGNNs first aggregate the ...
- Research article, August 2022
Alleviating Datapath Conflicts and Design Centralization in Graph Analytics Acceleration
DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference, Pages 901–906. https://doi.org/10.1145/3489517.3530524
Abstract: Previous graph analytics accelerators have achieved great improvement on throughput by alleviating irregular off-chip memory accesses. However, on-chip side datapath conflicts and design centralization have become the critical issues hindering further ...
- Research article, July 2022
Characterizing and Understanding HGNNs on GPUs
IEEE Computer Architecture Letters (ICAL), Volume 21, Issue 2, Pages 69–72. https://doi.org/10.1109/LCA.2022.3198281
Abstract: Heterogeneous graph neural networks (HGNNs) deliver powerful capacity in heterogeneous graph representation learning. The execution of HGNNs is usually accelerated by GPUs. Therefore, characterizing and understanding the execution pattern of HGNNs on GPUs ...
- Research article, January 2022
Characterizing and Understanding Distributed GNN Training on GPUs
IEEE Computer Architecture Letters (ICAL), Volume 21, Issue 1, Pages 21–24. https://doi.org/10.1109/LCA.2022.3168067
Abstract: The graph neural network (GNN) has been demonstrated to be a powerful model in many domains for its effectiveness in learning over graphs. To scale GNN training for large graphs, a widely adopted approach is distributed training, which accelerates training ...
- Research article, October 2019
Using concurrent relational logic with helpers for verifying the AtomFS file system
SOSP '19: Proceedings of the 27th ACM Symposium on Operating Systems Principles, Pages 259–274. https://doi.org/10.1145/3341301.3359644
Abstract: Concurrent file systems are pervasive but hard to correctly implement and formally verify due to nondeterministic interleavings. This paper presents AtomFS, the first formally-verified, fine-grained, concurrent file system, which provides linearizable ...