This thesis describes the effect of write caching on overall file system performance. It shows through simulations that extensive write caching greatly reduces average file read latency: it reduces the number of disk writes and thereby minimizes disk read/write contention. By taking a closer look at file system write semantics, it also shows that write-optimized file systems are not the key issue for Unix-like file systems; they only reduce disk read/write contention without addressing the underlying cause of that contention. Simulations driven by the Sprite traces guide the design of a client and server caching protocol for the Pegasus File Server (PFS). This protocol guarantees data persistence through replication, without writing the data to disk.
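The idea of acknowledging a write once a replica exists, rather than when the data reaches disk, can be illustrated with a minimal sketch. The class and method names below (PeerCache, WriteCache, store_replica, flush) are hypothetical and only illustrate the general replication-based persistence scheme; they are not the actual PFS protocol or its interfaces.

```python
# Minimal sketch: a write is acknowledged once a second in-memory copy exists
# on a peer machine, so the disk write can be deferred and batched.
# All names here are illustrative, not taken from PFS.

class PeerCache:
    """Holds replicas of dirty blocks on another machine (modeled in-process)."""
    def __init__(self):
        self.replicas = {}

    def store_replica(self, block_id, data):
        self.replicas[block_id] = data
        return True  # data now survives a single machine failure


class WriteCache:
    """Client-side write cache that defers disk writes."""
    def __init__(self, peer):
        self.peer = peer
        self.dirty = {}   # blocks not yet flushed to disk
        self.disk = {}    # stand-in for the real disk

    def write(self, block_id, data):
        # Keep the block locally and replicate it to the peer cache.
        self.dirty[block_id] = data
        if self.peer.store_replica(block_id, data):
            return "ack"  # two volatile copies exist, so the write is acknowledged
        raise IOError("replication failed; would have to write through to disk")

    def flush(self):
        # Dirty blocks reach disk later, in large lazy batches, which is what
        # reduces the number of disk writes and read/write contention.
        self.disk.update(self.dirty)
        self.dirty.clear()


if __name__ == "__main__":
    cache = WriteCache(PeerCache())
    print(cache.write("inode-42/blk-0", b"hello"))  # acknowledged without a disk write
    cache.flush()                                   # deferred, batched flush
```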