
LibRe: a consistency protocol for modern storage systems

Published: 22 August 2013

Abstract

The dramatic increase in the amount of data stored, processed, and reused across all sectors has driven the evolution of modern storage systems known as NoSQL databases, such as Amazon Dynamo, Cassandra, BigTable, PNUTS, and HBase. To boost availability and performance, these systems follow eventual consistency and do not offer tight consistency by default. Paxos is commonly used in this context to ensure tight consistency on demand, but it adds extra cost to message management, mostly in complexity and size. In addition, Paxos shrinks the space for other research areas such as cache memory optimization and load balancing.
This paper proposes and discusses the challenges of a new consistency protocol for modern storage systems, called 'LibRe'. LibRe follows the eventual consistency model but additionally logs the operations executed on each node of the distributed system. The load balancer uses this additional information to ensure that requests are not forwarded to a node where the data needed to serve them are stale. Since eventual consistency already offers good availability and partition tolerance, the aim of combining LibRe with eventual consistency is to obtain a better consistency management service that also provides availability and partition tolerance. Simulation results for consistency and latency under LibRe are compared against traditional pessimistic consistency, eventual consistency, and Paxos. The overall results are discussed and new opportunities for research are identified.
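To make the routing idea concrete, the following is a minimal sketch (in Python) of the mechanism the abstract describes: a registry logs, per data item, which replicas hold the latest version, and the load balancer consults it before routing a read. This is an illustrative assumption of how such a registry could look, not the authors' implementation; all class, function, and variable names are hypothetical.

    # Hypothetical sketch of a LibRe-style registry; not the paper's actual code.
    class LibReRegistry:
        """Tracks, for each key, the latest version and the replicas holding it."""

        def __init__(self):
            # key -> (latest version seen, set of node ids holding that version)
            self._latest = {}

        def record_write(self, key, version, node):
            ver, nodes = self._latest.get(key, (0, set()))
            if version > ver:
                # Newer version: only this node is known to be up to date.
                self._latest[key] = (version, {node})
            elif version == ver:
                # Asynchronous replication caught up on another replica.
                nodes.add(node)

        def fresh_replicas(self, key, all_nodes):
            """Replicas safe to read from; unknown keys can be served by any node."""
            if key not in self._latest:
                return set(all_nodes)
            return self._latest[key][1]

    def route_read(registry, key, all_nodes):
        # The balancer avoids stale replicas for keys with recent updates,
        # while untouched keys keep eventual consistency's full availability.
        candidates = registry.fresh_replicas(key, all_nodes)
        return sorted(candidates)[0]  # placeholder policy; e.g. least-loaded in practice

    # Example: a write lands on n2; reads are steered there until others catch up.
    reg = LibReRegistry()
    nodes = {"n1", "n2", "n3"}
    reg.record_write("user:42", version=7, node="n2")
    assert route_read(reg, "user:42", nodes) == "n2"
    reg.record_write("user:42", version=7, node="n1")  # replication completed on n1

Under this scheme a read is never routed to a replica older than the registry's latest recorded version, while keys with no pending updates remain readable from any replica, which is the consistency-availability balance the abstract describes.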

References

[1] D. Abadi. Problems with CAP, and Yahoo's little known NoSQL system. http://dbmsmusings.blogspot.fr/2010/04/problems-with-cap-and-yahoos-little.html, April 2010.
[2] D. J. Abadi. Consistency tradeoffs in modern distributed database system design: CAP is only part of the story. Computer, 45(2):37--42, 2012.
[3] D. A. Agarwal, L. Moser, P. Melliar-Smith, and R. K. Budhia. The Totem multiple-ring ordering and topology maintenance protocol. ACM Transactions on Computer Systems, 16:93--132, 1998.
[4] G. Belalem and Y. Slimani. Hybrid approach for consistency management in OptorSim simulator.
[5] B. Calder, J. Wang, A. Ogus, N. Nilakantan, A. Skjolsvold, S. McKelvie, Y. Xu, S. Srivastav, J. Wu, H. Simitci, J. Haridas, C. Uddaraju, H. Khatri, A. Edwards, V. Bedekar, S. Mainali, R. Abbasi, A. Agarwal, M. F. u. Haq, M. I. u. Haq, D. Bhardwaj, S. Dayanand, A. Adusumilli, M. McNett, S. Sankaran, K. Manivannan, and L. Rigas. Windows Azure Storage: a highly available cloud storage service with strong consistency. In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles, SOSP '11, pages 143--157, New York, NY, USA, 2011. ACM.
[6] D. Carstoiu, A. Cernian, and A. Olteanu. Hadoop HBase-0.20.2 performance evaluation. In New Trends in Information Science and Service Science (NISS), 2010 4th International Conference on, pages 84--87, 2010.
[7] Intel IT Center. Big data 101: Unstructured data analytics. http://www.intel.in/content/dam/www/public/us/en/documents/solution-briefs/bigdata-101-brief.pdf, June 2012.
[8] T. D. Chandra, R. Griesemer, and J. Redstone. Paxos made live: an engineering perspective. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Principles of Distributed Computing, PODC '07, pages 398--407, New York, NY, USA, 2007. ACM.
[9] T. D. Chandra and S. Toueg. Unreliable failure detectors for reliable distributed systems. J. ACM, 43(2):225--267, Mar. 1996.
[10] F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. E. Gruber. Bigtable: a distributed storage system for structured data. In Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation - Volume 7, OSDI '06, Berkeley, CA, USA, 2006. USENIX Association.
[11] H.-E. Chihoub, S. Ibrahim, G. Antoniu, and M. S. Perez. Harmony: towards automated self-adaptive consistency in cloud storage. In Proceedings of the 2012 IEEE International Conference on Cluster Computing, CLUSTER '12, pages 293--301, Washington, DC, USA, 2012. IEEE Computer Society.
[12] Cisco. Monitoring load-balancing services. http://www.cisco.com/en/US/docs/security/securitymanagement/cisco_security_manager/performance_monitor/3.3/user/guide/load_bal.html, May 2013.
[13] B. F. Cooper, R. Ramakrishnan, U. Srivastava, A. Silberstein, P. Bohannon, H.-A. Jacobsen, N. Puz, D. Weaver, and R. Yerneni. PNUTS: Yahoo!'s hosted data serving platform. Proc. VLDB Endow., 1(2):1277--1288, Aug. 2008.
[14] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears. Benchmarking cloud serving systems with YCSB. In Proceedings of the 1st ACM Symposium on Cloud Computing, SoCC '10, pages 143--154, New York, NY, USA, 2010. ACM.
[15] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels. Dynamo: Amazon's highly available key-value store. SIGOPS Oper. Syst. Rev., 41(6):205--220, Oct. 2007.
[16] J. Gantz and D. Reinsel. Extracting value from chaos. Technical report, International Data Corporation (IDC), June 2011.
[17] S. Gilbert and N. Lynch. Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services. SIGACT News, 33(2):51--59, June 2002.
[18] L. Glendenning, I. Beschastnikh, A. Krishnamurthy, and T. Anderson. Scalable consistency in Scatter. In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles, SOSP '11, pages 15--28, New York, NY, USA, 2011. ACM.
[19] C. Hale. You can't sacrifice partition tolerance. http://codahale.com/you-cant-sacrifice-partition-tolerance/, October 2010.
[20] J. Hugg. VoltDB's architecture. http://highscalability.com/blog/2010/6/28/voltdb-decapitates-six-sql-urban-myths-and-delivers-internet.html, June 2010.
[21] IBM. Big data at the speed of business. http://www-01.ibm.com/software/data/bigdata/.
[22] F. Junqueira, B. Reed, and M. Serafini. Zab: high-performance broadcast for primary-backup systems. In 2011 IEEE/IFIP 41st International Conference on Dependable Systems & Networks (DSN), pages 245--256, 2011.
[23] E. Kern. Facebook is collecting your data - 500 terabytes a day. http://gigaom.com/2012/08/22/facebook-is-collecting-your-data-500-terabytes-a-day/, August 2012.
[24] A. Lakshman and P. Malik. Cassandra: a decentralized structured storage system. SIGOPS Oper. Syst. Rev., 44(2):35--40, Apr. 2010.
[25] D. Laney. Application delivery strategies. http://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf, February 2001.
[26] D. Malkhi, M. Reiter, A. Wool, and R. N. Wright. Probabilistic Byzantine quorum systems, 1998.
[27] D. Mosberger. Memory consistency models. SIGOPS Oper. Syst. Rev., 27(1):18--26, Jan. 1993.
[28] A. Murdopo. Consistency tradeoff in modern distributed DB. http://www.otnira.com/2012/04/21/consistency-tradeoff-in-modern-distributed-db/, June 2013.
[29] Y. Pessach. Distributed Storage: Concepts, Algorithms, and Implementations. Amazon, 2013.
[30] D. Pritchett. BASE: an ACID alternative. Queue, 6(3):48--55, May 2008.
[31] D. Pritchett. BASE: an ACID alternative. http://queue.acm.org/detail.cfm?id=1394128, June 2008.
[32] R. Ramakrishnan and J. Gehrke. Database Management Systems. McGraw-Hill, third edition, 2003.
[33] R. van Renesse, K. Birman, R. Cooper, B. Glade, and P. Stephenson. Reliable multicast between microkernels. In Proceedings of the USENIX Workshop on Micro-Kernels and Other Architectures, pages 269--283, 1992.
[34] H. Robinson. CAP confusion: problems with 'partition tolerance'. http://blog.cloudera.com/blog/2010/04/cap-confusion-problems-with-partition-tolerance/, April 2010.
[35] Y. Saito and M. Shapiro. Optimistic replication. ACM Comput. Surv., 37(1):42--81, Mar. 2005.
[36] N. Shalom. NoCAP - part II: availability and partition tolerance. http://blog.gigaspaces.com/nocap-part-ii-availability-and-partition-tolerance-2/, July 2013.
[37] Recovery Specialties. Data consistency explained. http://recoveryspecialties.com/dc01.html, July 2013.
[38] M. Stonebraker. Errors in database systems, eventual consistency, and the CAP theorem. http://cacm.acm.org/blogs/blog-cacm/83396-errors-in-database-systems-eventual-consistency-and-the-cap-theorem/fulltext, April 2010.
[39] S. Lefebvre, S. Prabhu Kumar, and R. Chiky. Simizer: a cloud simulation tool. https://forge.isep.fr/projects/simizer/, March 2013.
[40] W. Vogels. Eventually consistent. http://queue.acm.org/detail.cfm?id=1466448, October 2008.
[41] W. Vogels. Eventually consistent. Commun. ACM, 52(1):40--44, Jan. 2009.
[42] W. Vogels, D. Dumitriu, A. Agrawal, T. Chia, and K. Guo. Scalability of the Microsoft Cluster Service. In Proceedings of the 2nd USENIX Windows NT Symposium, 1998.
[43] Project Voldemort. Physical architecture options. http://www.project-voldemort.com/voldemort/design.html, June 2013.
[44] T. White. Hadoop: The Definitive Guide. O'Reilly Media, Inc., 1st edition, 2009.
[45] Hadoop Wiki. ZooKeeper 3.2 performance. http://wiki.apache.org/hadoop/ZooKeeper/Performance, June 2013.
[46] A. Zaslavsky, M. Faiz, B. Srinivasan, A. Rasheed, and S. Lai. Primary copy method and its modifications for database replication in distributed mobile computing environment. In Proceedings of the 15th Symposium on Reliable Distributed Systems, pages 178--187, 1996.

Cited By

  • (2017) Consistency-Latency Trade-Off of the LibRe Protocol: A Detailed Study. Advances in Knowledge Discovery and Management, pp. 83-108. DOI: 10.1007/978-3-319-65406-5_4. Online publication date: 11-Oct-2017.
  • (2016) Efficient Replica Consistency Model (ERCM) for update propagation in Data Grid Environment. 2016 International Conference on Information Communication and Embedded Systems (ICICES), pp. 1-7. DOI: 10.1109/ICICES.2016.7518894. Online publication date: Feb-2016.
  • (2015) CaLibRe: A Better Consistency-Latency Tradeoff for Quorum Based Replication Systems. Database and Expert Systems Applications, pp. 491-503. DOI: 10.1007/978-3-319-22852-5_40. Online publication date: 11-Aug-2015.


Published In

Compute '13: Proceedings of the 6th ACM India Computing Convention
August 2013
196 pages
ISBN:9781450325455
DOI:10.1145/2522548
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 August 2013


Author Tags

  1. CAP
  2. consistency
  3. eventual consistency
  4. storage systems

Qualifiers

  • Research-article

Conference

Compute '13
Compute '13: The 6th ACM India Computing Convention
August 22 - 25, 2013
Vellore, Tamil Nadu, India

Acceptance Rates

Compute '13 Paper Acceptance Rate: 24 of 96 submissions, 25%
Overall Acceptance Rate: 114 of 622 submissions, 18%


