Research Article | Open Access

Cross-core Data Sharing for Energy-efficient GPUs

Published: 14 September 2024

Abstract

Graphics Processing Units (GPUs) are the accelerator of choice in a variety of application domains, because they can accelerate massively parallel workloads and can be easily programmed using general-purpose programming frameworks such as CUDA and OpenCL. A GPU consists of multiple Streaming Multiprocessors (SMs), each of which contains an L1 data cache (L1D) to exploit locality in data accesses. L1D misses are costly for GPUs for two reasons. First, L1D misses consume significant energy because they must access the L2 cache (L2) over the on-chip network, and the off-chip DRAM whenever the L2 misses. Second, L1D misses impose a performance overhead if the GPU does not have enough active warps to hide the long memory access latency. We observe that threads running on different SMs share 55% of the data they read from memory. Unfortunately, because the L1Ds are in the non-coherent memory domain, each SM independently fetches data from the L2 or the off-chip memory into its L1D, even when the data may already be available in the L1D of another SM. Our goal is to service L1D read misses via other SMs, as much as possible, to cut down on costly accesses to the L2 or the off-chip DRAM. To this end, we propose a new data-sharing mechanism, called Cross-Core Data Sharing (CCDS). CCDS employs a predictor to estimate whether the required cache block exists in another SM. If the block is predicted to exist in another SM's L1D, CCDS fetches the data from the L1D that contains the block. Our experiments on a suite of 26 workloads show that CCDS improves average energy efficiency and performance by 1.30× and 1.20×, respectively, compared to the baseline GPU. Compared to the state-of-the-art data-sharing mechanism, CCDS improves average energy efficiency and performance by 1.37× and 1.11×, respectively.
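To make the flow concrete, the following is a minimal behavioral sketch in Python of a predictor-guided remote-L1D lookup as the abstract describes it: on a local L1D read miss, a sharing predictor guesses which other SM may hold the block, and that SM's L1D is probed before falling back to the L2. The class names, the hashed-table organization, and the train-on-fill policy are illustrative assumptions, not the paper's actual CCDS design.

# Behavioral sketch of a CCDS-style read path (assumptions noted above;
# not the paper's implementation).

class L1D:
    def __init__(self):
        self.blocks = {}                 # block address -> data

    def probe(self, addr):
        return self.blocks.get(addr)     # None on miss

    def fill(self, addr, data):
        self.blocks[addr] = data


class SharingPredictor:
    """Hashed table mapping a block address to the SM predicted to cache it."""
    def __init__(self, entries=1024):
        self.entries = entries
        self.table = {}                  # table index -> SM id

    def predict(self, addr):
        return self.table.get(addr % self.entries)   # None = no prediction

    def train(self, addr, sm_id):
        self.table[addr % self.entries] = sm_id


def read(addr, sm_id, l1ds, predictor, l2):
    data = l1ds[sm_id].probe(addr)
    if data is not None:
        return data                      # local L1D hit: nothing to do
    target = predictor.predict(addr)
    if target is not None and target != sm_id:
        remote = l1ds[target].probe(addr)
        if remote is not None:           # cross-core hit: L2/DRAM access avoided
            l1ds[sm_id].fill(addr, remote)
            predictor.train(addr, sm_id)
            return remote
    data = l2.get(addr)                  # regular path: L2 (and DRAM on L2 miss)
    l1ds[sm_id].fill(addr, data)
    predictor.train(addr, sm_id)         # this SM now caches the block
    return data


# Usage: SM 0 fills the block from L2; SM 1's later miss is then
# serviced from SM 0's L1D via the predictor.
l1ds = [L1D(), L1D()]
pred = SharingPredictor()
l2 = {0x40: "cache block payload"}
read(0x40, sm_id=0, l1ds=l1ds, predictor=pred, l2=l2)
read(0x40, sm_id=1, l1ds=l1ds, predictor=pred, l2=l2)

One design note on the sketch: training the predictor on every fill means it always points at the most recent sharer; a real design must also weigh mispredictions, since probing a remote L1D that no longer holds the block adds latency and network traffic before the request falls back to the L2.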


Cited By

  • Transformers: A Security Perspective. IEEE Access, vol. 12 (2024), 181071–181105. DOI: 10.1109/ACCESS.2024.3509372


Published In

ACM Transactions on Architecture and Code Optimization, Volume 21, Issue 3
September 2024, 592 pages
EISSN: 1544-3973
DOI: 10.1145/3613629

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 14 September 2024
Online AM: 18 March 2024
Accepted: 05 March 2023
Revised: 24 January 2023
Received: 30 June 2022
Published in TACO Volume 21, Issue 3


    Author Tags

    1. GPU
    2. memory access
    3. energy efficiency
    4. data sharing
    5. inter-SM communication



