DOI: 10.1145/3489517.3530662
LPCA: learned MRC profiling based cache allocation for file storage systems

Published: 23 August 2022

Abstract

File storage systems (FSSs) use multiple caches to accelerate data accesses. Unfortunately, efficient FSS cache allocation remains extremely difficult. First, existing miss ratio curve (MRC) constructions, the key to cache allocation, are limited to LRU. Second, existing techniques are suitable for same-layer caches but not for hierarchical ones.
We present a Learned MRC Profiling based Cache Allocation (LPCA) scheme for FSSs. To the best of our knowledge, LPCA is the first to apply machine learning to model MRCs under non-LRU policies. LPCA also explores optimization targets for hierarchical caches, so it can provide universal and efficient cache allocation for FSSs.
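For context, the LRU-only MRC construction that the abstract contrasts with is the classical stack-distance method of Mattson et al.: a request hits in an LRU cache of size c exactly when its stack distance is below c, so one pass over the trace yields the miss ratio at every cache size. A minimal sketch (illustrative only, not the paper's learned approach; the function name and trace format are hypothetical):

```python
def lru_mrc(trace, max_size):
    """Single-pass LRU miss ratio curve via stack distances
    (the classical Mattson et al. construction)."""
    stack = []                     # LRU stack, most-recently-used at the end
    hits = [0] * max_size          # hits[d]: accesses with stack distance d
    n = 0
    for block in trace:
        n += 1
        if block in stack:
            d = len(stack) - 1 - stack.index(block)  # distance from MRU end
            if d < max_size:
                hits[d] += 1
            stack.remove(block)    # re-insert at the MRU position below
        stack.append(block)
    # miss_ratio(c) = 1 - (accesses with stack distance < c) / n
    mrc, cum = [], 0
    for c in range(1, max_size + 1):
        cum += hits[c - 1]
        mrc.append(1 - cum / n)
    return mrc
```

This naive version costs O(N*M) for N requests and a stack of depth M, and it models only LRU, which is why approximate constructions (e.g. SHARDS [16]) and, in this paper, learned models for non-LRU policies were developed.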


Cited By

  • (2024) FLOWS: Balanced MRC Profiling for Heterogeneous Object-Size Cache. In Proceedings of the Nineteenth European Conference on Computer Systems, 421-440. DOI: 10.1145/3627703.3650078. Online publication date: 22-Apr-2024.
  • (2024) -LAP: A Lightweight and Adaptive Cache Partitioning Scheme With Prudent Resizing Decisions for Content Delivery Networks. IEEE Transactions on Cloud Computing 12, 3, 942-953. DOI: 10.1109/TCC.2024.3420454. Online publication date: Jul-2024.



Published In

DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference
July 2022
1462 pages
ISBN: 9781450391429
DOI: 10.1145/3489517

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. cache allocation
  2. machine learning
  3. miss ratio curve
  4. neural network

Qualifiers

  • Research-article

Conference

DAC '22: 59th ACM/IEEE Design Automation Conference
July 10-14, 2022
San Francisco, California

Acceptance Rates

Overall acceptance rate: 1,770 of 5,499 submissions (32%)


