Implementing cooperative prefetching and caching in a globally-managed memory system

GM Voelker, EJ Anderson, T Kimbrel, MJ Feeley, JS Chase, AR Karlin, HM Levy
Proceedings of the 1998 ACM SIGMETRICS joint international conference on …, 1998. dl.acm.org
This paper presents cooperative prefetching and caching --- the use of network-wide global resources (memories, CPUs, and disks) to support prefetching and caching in the presence of hints of future demands. Cooperative prefetching and caching effectively unites disk-latency reduction techniques from three lines of research: prefetching algorithms, cluster-wide memory management, and parallel I/O. When used together, these techniques greatly increase the power of prefetching relative to a conventional (non-global-memory) system. We have designed and implemented PGMS, a cooperative prefetching and caching system, under the Digital Unix operating system running on a 1.28 Gb/sec Myrinet-connected cluster of DEC Alpha workstations. Our measurements and analysis show that by using available global resources, cooperative prefetching can obtain significant speedups for I/O-bound programs. For example, for a graphics rendering application, our system achieves a speedup of 4.9 over a non-prefetching version of the same program, and a 3.1-fold improvement over that program using local-disk prefetching alone.
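As a rough illustration of the idea described in the abstract (and not of PGMS itself, which is implemented inside Digital Unix and uses idle memory on remote Myrinet-connected nodes), the C sketch below simulates hint-driven prefetching into a two-level cache: a small local cache backed by a larger "global" cache standing in for idle peer memory, with disk as the slowest tier. The block numbers, cache sizes, FIFO replacement policy, and function names are all invented for this example.

```c
/*
 * Illustrative sketch only: hint-driven prefetching with a local cache
 * backed by a "global" (remote-memory) cache and disk as the last resort.
 * Not the PGMS implementation; all sizes and policies are made up.
 */
#include <stdio.h>
#include <string.h>

#define LOCAL_SLOTS   4     /* tiny local memory cache                  */
#define GLOBAL_SLOTS 16     /* larger cache standing in for peer memory */

static int local_cache[LOCAL_SLOTS];    /* block numbers, -1 = empty    */
static int global_cache[GLOBAL_SLOTS];

static int cached(const int *cache, int slots, int block) {
    for (int i = 0; i < slots; i++)
        if (cache[i] == block)
            return 1;
    return 0;
}

/* Insert with trivial FIFO replacement; a real global memory system
 * would use a cluster-wide replacement policy instead. */
static void insert(int *cache, int slots, int block) {
    static int next_local, next_global;
    int *next = (cache == local_cache) ? &next_local : &next_global;
    cache[*next] = block;
    *next = (*next + 1) % slots;
}

/* Prefetch a hinted block: pull it from "disk" into the global cache so a
 * later demand miss is served from remote memory rather than from disk. */
static void prefetch_hint(int block) {
    if (!cached(global_cache, GLOBAL_SLOTS, block) &&
        !cached(local_cache, LOCAL_SLOTS, block)) {
        insert(global_cache, GLOBAL_SLOTS, block);   /* disk -> global */
        printf("prefetch: block %2d disk -> global memory\n", block);
    }
}

/* Demand read: local hit, else global (remote-memory) hit, else disk. */
static void demand_read(int block) {
    if (cached(local_cache, LOCAL_SLOTS, block)) {
        printf("read    : block %2d local hit\n", block);
    } else if (cached(global_cache, GLOBAL_SLOTS, block)) {
        printf("read    : block %2d served from global memory\n", block);
        insert(local_cache, LOCAL_SLOTS, block);
    } else {
        printf("read    : block %2d DISK access\n", block);
        insert(local_cache, LOCAL_SLOTS, block);
    }
}

int main(void) {
    memset(local_cache, -1, sizeof local_cache);
    memset(global_cache, -1, sizeof global_cache);

    /* The application hints its future accesses, then issues demand reads. */
    int hints[] = { 10, 11, 12, 13, 14, 15 };
    for (size_t i = 0; i < sizeof hints / sizeof hints[0]; i++)
        prefetch_hint(hints[i]);
    for (size_t i = 0; i < sizeof hints / sizeof hints[0]; i++)
        demand_read(hints[i]);
    return 0;
}
```

In this toy run every demand read that would have gone to disk is instead served from the "global" tier, which is the latency gap the paper's measurements quantify on real hardware.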