
Eventual consistency

Eventual consistency is a consistency model in distributed systems that guarantees that, if no new updates are made to a data item, all replicas will eventually reflect the last update, despite temporary inconsistencies due to asynchronous propagation of changes across nodes. This model prioritizes availability and partition tolerance over immediate consistency, allowing systems to remain operational even during network partitions or failures, as per the trade-offs outlined in the CAP theorem. The concept gained prominence through practical implementations in large-scale distributed databases, such as Amazon's Dynamo, a key-value store designed for high availability where updates reach all replicas asynchronously over time. In Dynamo, eventual consistency is achieved using mechanisms like vector clocks to track versions and detect conflicts, with applications handling resolution during reads to maintain usability. This approach enables always-writeable storage, tolerating temporary inconsistencies that are acceptable for many applications, such as shopping carts or session data, where brief discrepancies do not impact overall functionality.

Eventual consistency offers significant advantages in scalability and performance for NoSQL and distributed systems, as it avoids the coordination overhead of stronger consistency models like linearizability, which can introduce latency. However, it requires careful application-level design to manage conflicts and ensure convergence, often through anti-entropy protocols like read repair or hinted handoff. Widely adopted in modern cloud infrastructures, it underpins services emphasizing availability over strict synchronization, influencing the BASE (Basically Available, Soft state, Eventual consistency) paradigm as an alternative to traditional ACID guarantees.

Fundamentals

Definition

Eventual consistency is a consistency model in distributed systems that guarantees that, if no new updates are made to a data object, all accesses to that object will eventually return the last updated value across all replicas after a period known as the inconsistency window. This model allows updates to propagate asynchronously among replicas, ensuring availability even in the presence of network partitions or failures, but it permits temporary inconsistencies where different replicas may reflect divergent states during concurrent operations. The core principle of eventual consistency involves weak consistency guarantees during periods of concurrent updates, where asynchronous replication can lead to temporary divergence among replicas, yet the system is designed to converge to a single, consistent state over time without further modifications. This approach contrasts with stronger models by accepting an unbounded delay in propagation, prioritizing responsiveness and availability over immediate consistency. Eventual consistency ties closely to the CAP theorem, which posits that distributed systems can only guarantee two out of three properties—consistency, availability, and partition tolerance—in the event of a network partition; it emphasizes availability and partition tolerance by relaxing immediate consistency to achieve eventual convergence. For instance, in a key-value store like Amazon's Dynamo, a write operation to one replica node propagates lazily to others via background processes, allowing subsequent reads from unaffected nodes to initially return stale data until replication completes.

Key Characteristics

Eventual consistency provides several optional session guarantees that enhance usability in distributed systems while maintaining weak consistency properties. These client-centric guarantees, proposed for applications using weakly consistent replicated data in systems like Bayou, ensure predictable behavior for clients interacting with replicas over sessions.

Monotonic reads ensure that once a client reads a particular value from a replica, any subsequent reads by the same client will not return an older value, preventing the perception of time traveling backward in the replicated state. This property relies on tracking session context to filter out stale responses.

Monotonic writes guarantee that writes from a single client are applied in the order they are issued, maintaining ordering from the client's perspective without requiring synchronous coordination across replicas. This allows clients to issue updates sequentially while the system propagates them asynchronously.

Read-your-writes consistency means that a client will observe its own recent writes in subsequent reads, either immediately or after a short delay, avoiding the anomaly of not seeing personal updates. This is achieved by associating session identifiers with operations to route or filter responses appropriately.

Writes follow reads ensures that a client's writes are ordered after the writes whose effects the client previously read, so that causally dependent updates propagate in the proper order rather than appearing before their antecedents. This provides a coherent history aligned with the client's sequence of operations.

These session guarantees are built upon the core property of eventual consistency: if no new updates occur, all replicas will converge to the same value after a finite time, as updates are eventually delivered to all replicas. This convergence supports high availability, as emphasized in the CAP theorem, where eventual consistency favors partition tolerance and availability over immediate consistency.
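The monotonic-reads guarantee can be sketched with a small simulation. This is an illustrative design, not any system's actual API: the hypothetical `SessionClient` remembers the highest version it has observed per key and skips replica responses older than that watermark.

```python
# Hypothetical monotonic-reads sketch: the client tracks the highest version
# it has seen and refuses to surface anything older from a lagging replica.

class SessionClient:
    def __init__(self, replicas):
        self.replicas = replicas          # list of dicts: key -> (version, value)
        self.last_seen = {}               # key -> highest version observed so far

    def read(self, key):
        # Try replicas until one is at least as new as our session watermark.
        for replica in self.replicas:
            version, value = replica.get(key, (0, None))
            if version >= self.last_seen.get(key, 0):
                self.last_seen[key] = version   # advance the watermark
                return value
        raise RuntimeError("no replica satisfies monotonic reads yet")

# One replica has applied version 2; another still holds version 1.
fresh = {"cart": (2, ["book", "pen"])}
stale = {"cart": (1, ["book"])}

client = SessionClient([fresh, stale])
print(client.read("cart"))                # ['book', 'pen']

client2 = SessionClient([stale, fresh])   # stale replica is tried first
client2.read("cart")                      # returns version 1: possibly stale...
print(client2.read("cart"))               # ...but later reads are never older
```

Note that monotonic reads does not prevent staleness: `client2` may keep seeing version 1, but it will never regress to a version below one it has already observed.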

Comparison to Other Models

Versus Strong Consistency

Strong consistency, exemplified by linearizability, is a correctness model for concurrent systems in which every operation appears to occur atomically at a single point in time between its invocation and response, preserving the real-time partial order of non-overlapping operations and ensuring that all reads reflect the most recent write. This guarantees immediate visibility of updates across all replicas, eliminating any possibility of stale reads during concurrent access. In comparison, eventual consistency permits temporary inconsistencies among replicas, where updates propagate asynchronously and converge to a consistent state only after a sufficient period without further modifications.

The primary differences lie in their approaches to coordination: eventual consistency avoids costly barriers like locking or consensus protocols to prioritize low latency and high throughput, accepting brief divergences that resolve over time, whereas strong consistency mandates immediate coordination, which can introduce delays and bottlenecks in distributed environments. These models embody fundamental trade-offs highlighted by the CAP theorem, which proves that in the presence of network partitions, a distributed system cannot simultaneously guarantee both linearizability (consistency) and availability, as maintaining consistency often requires halting operations until synchronization completes. Eventual consistency, by contrast, favors availability and partition tolerance, enabling systems to remain responsive even under failures, though at the expense of potential short-term inaccuracies. Causal consistency represents a middle ground, preserving cause-effect relationships without demanding full linearizability.

A practical illustration arises in e-commerce inventory management: under strong consistency, a stock deduction from a purchase is atomically visible to all concurrent checkouts, preventing overbooking; eventual consistency might allow multiple sales to proceed on outdated stock views, leading to temporary overcommitments that are later reconciled via versioning or compensation. Historically, strong consistency emerged in early database systems through ACID transaction properties, which enforce atomicity, consistency, isolation, and durability to maintain a globally consistent view despite concurrent transactions.

Versus Causal Consistency

Causal consistency is a consistency model that preserves the happens-before relationships between operations in a distributed system, ensuring that causally related operations are observed in an order respecting their dependencies, without requiring a total global order across all operations. This partial ordering captures causal dependencies, such as operations within the same client thread, values read from prior writes, and transitive chains of such relations. In contrast to eventual consistency, which provides no guarantees on the order of operations beyond eventual convergence to a single value in the absence of new updates, causal consistency prevents anomalies like reading an effect before its cause (anti-causal reads). Eventual consistency allows replicas to temporarily diverge in ways that violate causality, such as a client observing a reply to a message before the original message itself, whereas causal consistency enforces a stricter partial order to improve usability while still permitting high availability and low latency. This added causal ordering requires mechanisms like dependency tracking but avoids the full coordination overhead of stronger models like linearizability.

Causal consistency is preferable for applications where logical dependencies between operations are critical, such as social media feeds where users expect to see a post before its replies or comments, enabling more intuitive user experiences without sacrificing partition tolerance. Eventual consistency, however, suits scenarios prioritizing simplicity, high throughput, and minimal coordination, like caching systems or DNS, where temporary ordering anomalies are tolerable. For instance, in a messaging application, causal consistency ensures that a user's reply to a message is not visible to other users until the original message has been observed, maintaining the intuitive flow of conversation; under eventual consistency, the reply might appear first due to propagation delays, leading to confusion until convergence. Causal consistency emerged as a refinement of eventual consistency in early mobile and disconnected systems, notably in the Bayou storage system, which used session-based ordering and dependency checks to enforce causal guarantees while allowing asynchronous update propagation and application-specific conflict resolution.

Operational Mechanisms

Update Propagation

In eventual consistency systems, update propagation typically relies on asynchronous replication, where write operations are acknowledged to the client after being stored locally on the coordinating node, with subsequent dissemination to other replicas occurring in the background to prioritize availability over immediate consistency. This approach allows the system to handle high throughput but introduces temporary inconsistencies until replicas catch up.

Several techniques facilitate efficient update dissemination. Read repair involves the coordinator detecting and correcting stale replicas during read operations by comparing versions and pushing the latest value opportunistically, thus reducing the need for constant background synchronization. Hinted handoff addresses node failures by temporarily storing updates on an available node, which forwards them to the failed node upon recovery, minimizing write unavailability during partitions. For large-scale anti-entropy synchronization, Merkle trees enable efficient detection of differences between replicas; by comparing hierarchical hash structures, nodes identify and exchange only divergent partitions, avoiding full data scans.

Gossip protocols, often integrated into anti-entropy mechanisms, promote the exponential spread of updates across replicas. In these protocols, nodes periodically select random peers to exchange state information—via push (sending updates), pull (requesting them), or push-pull variants—leading to rapid dissemination where each "infected" node propagates the update further, achieving full coverage in O(log n) expected steps for n nodes. Tunable parameters, such as gossip frequency or the number of dissemination attempts (e.g., k=3–5 retries per update), allow balancing convergence speed against network overhead, with higher values reducing residual inconsistencies but increasing load.
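The epidemic-style spread of a push gossip round can be illustrated with a toy simulation. This is a simplified sketch under stated assumptions (synchronous rounds, each infected node pushing to one uniformly random peer per round); the function name `gossip_rounds` is invented for illustration.

```python
import math
import random

# Toy push-gossip simulation: count how many synchronous rounds it takes
# for one update, starting at node 0, to reach all n nodes when every
# infected node pushes to `fanout` random peers per round.

def gossip_rounds(n, fanout=1, seed=0):
    rng = random.Random(seed)
    infected = {0}                            # node 0 holds the update
    rounds = 0
    while len(infected) < n:
        newly = set()
        for _ in infected:
            for _ in range(fanout):
                newly.add(rng.randrange(n))   # push to a random peer
        infected |= newly
        rounds += 1
    return rounds

n = 1024
print(f"{n} nodes covered in {gossip_rounds(n)} rounds (log2 n = {math.log2(n):.0f})")
```

Running this for growing `n` shows round counts growing roughly logarithmically, matching the O(log n) expectation cited above; raising `fanout` trades extra network traffic for fewer rounds.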
The speed of propagation is influenced by several factors, including network latency, which can delay exchanges in geographically distributed systems; the number of replicas, as more nodes extend the gossip fan-out but amplify coordination overhead; and the update rate, where bursts may saturate links and slow dissemination. To track versions during propagation, systems commonly employ vector clocks or timestamps; vector clocks maintain a counter per replica, incrementing on local updates and merging element-wise maxima on receipt, enabling precise causality detection without a global clock.
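The vector-clock bookkeeping just described—increment your own entry on a local update, take element-wise maxima on receipt—can be sketched in a few lines (helper names are hypothetical):

```python
# Vector clocks as dicts mapping node id -> counter (illustrative sketch).

def local_update(clock, node):
    # A replica increments its own entry when it applies a local write.
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def merge(clock_a, clock_b):
    # On receiving a peer's clock, keep the element-wise maximum.
    return {n: max(clock_a.get(n, 0), clock_b.get(n, 0))
            for n in set(clock_a) | set(clock_b)}

a = local_update({}, "A")     # replica A writes: {'A': 1}
b = local_update({}, "B")     # replica B writes concurrently: {'B': 1}
b = merge(b, a)               # B receives A's update
print(sorted(b.items()))      # [('A', 1), ('B', 1)]
```

After the merge, B's clock subsumes both histories, so a later read can tell that it has seen everything A had produced at that point.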

Convergence Process

In eventual consistency models, convergence occurs when all replicas of a data item reach the same value, provided no new updates are made for a sufficient period. This condition allows pending propagations from prior updates to disseminate fully across the system, ensuring that subsequent reads return the most recent value regardless of which replica is queried. As defined in foundational work on distributed systems, eventual consistency guarantees that "if no new updates are made to the object, eventually all accesses will return the last updated value," with the "eventually" qualifier reflecting the asynchronous nature of the process.

Anti-entropy mechanisms play a crucial role in facilitating convergence by periodically synchronizing replicas to resolve any lingering divergences. These include techniques such as Merkle trees for efficient digest comparisons, which allow nodes to identify and exchange only differing data subsets during full syncs, minimizing bandwidth usage. In systems like Amazon Dynamo, anti-entropy is complemented by opportunistic methods like read repair, where inconsistencies detected during read operations trigger immediate updates to stale replicas, and hinted handoff, which temporarily routes writes to healthy nodes during failures for later delivery. Such mechanisms ensure that, absent ongoing updates, replicas systematically align through background processes like gossip protocols that propagate state changes lazily across the network.

Unlike strongly consistent models, eventual consistency provides no hard time bounds for convergence, instead offering probabilistic guarantees based on system parameters like latency and load. For instance, probabilistically bounded staleness (PBS) metrics ensure that with 99.9% probability, reads reflect updates within a specified window, such as 13.6 milliseconds in LinkedIn's deployment or 202 milliseconds at Yammer.
These guarantees arise from the cumulative effect of propagation and anti-entropy, where convergence time is influenced by the inconsistency window—the duration from an update until all replicas are aware of it—but remains unbounded in the worst case due to potential failures. To handle failures and accelerate convergence without mandating it for pure eventual consistency, many systems employ quorums for reads and writes. By configuring the minimum number of replicas acknowledging writes (W) and responding to reads (R) such that W + R > N (total replicas), overlaps ensure that reads are more likely to access recent data, hastening unification even under partitions. In Dynamo, for example, a common setup of N=3, R=2, W=2 provides tunable trade-offs, where quorums reduce the effective inconsistency window by prioritizing durable propagation, though pure eventual consistency relaxes this to allow W + R ≤ N for higher availability.

The process can be illustrated as a timeline following quiescence (no new updates): initially, an update propagates asynchronously to some replicas (t=0 to t1), creating temporary divergence; during t1 to t2, anti-entropy and read repairs detect and resolve discrepancies via digest exchanges or opportunistic syncs; by t2 onward, all reads unify on the final state, with probabilistic bounds estimating t2 based on system metrics. This sequence underscores how convergence emerges from the interplay of propagation and reconciliation, yielding a stable, consistent view post-quiescence.
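The quorum-overlap condition W + R > N can be verified by brute force for small configurations. The sketch below (helper name invented for illustration) enumerates every possible read set and write set and checks that they always share at least one replica:

```python
from itertools import combinations

# Exhaustively check whether every read quorum of size r intersects every
# write quorum of size w among n replicas (only feasible for small n).

def quorums_always_overlap(n, r, w):
    nodes = range(n)
    return all(set(rq) & set(wq)            # empty intersection is falsy
               for rq in combinations(nodes, r)
               for wq in combinations(nodes, w))

# Dynamo-style N=3, R=2, W=2: every read set meets every write set.
assert quorums_always_overlap(3, 2, 2)      # R + W = 4 > 3
assert not quorums_always_overlap(3, 1, 2)  # R + W = 3 <= 3: stale reads possible
print("overlap holds iff R + W > N")
```

The pigeonhole argument behind the check is exactly the text's claim: with R + W > N, a read quorum cannot avoid every node that acknowledged the latest write.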

Conflict Handling

Detection Methods

In eventual consistency systems, versioning schemes such as vector clocks are commonly employed to detect conflicts arising from concurrent updates. A vector clock is a data structure consisting of a list of (node, counter) pairs that tracks the causal history of updates across replicas. When comparing two versions, if one vector clock's counters are less than or equal to those in the other for all nodes (with at least one strict inequality), the first version causally precedes the second, indicating no conflict. However, concurrent writes are detected when the vector clocks are incomparable, meaning neither dominates the other—neither set of counters is entirely less than or equal to the other. This approach, as employed in systems like Amazon's Dynamo, enables precise identification of divergences without relying on synchronized physical clocks.

Conflict indicators in these schemes manifest as versions with overlapping but non-subsumed histories, where the causal dependencies partially intersect but do not form a total order. For instance, two versions might share updates from some nodes while diverging on others, signaling that independent concurrent modifications occurred. This detection relies on the logical ordering preserved by the vector clocks, allowing systems to flag potential inconsistencies before they propagate further. In practice, such indicators trigger application-specific handling to ensure convergence.

Read-repair mechanisms provide another key detection method, performed during read operations to identify and flag divergences among replicas. In this process, a read request contacts multiple replicas (typically a quorum), compares their versions using timestamps or vector clocks, and detects inconsistencies if the returned data differs. Upon flagging a divergence—such as mismatched versions across nodes—the system initiates background repairs to synchronize the replicas, ensuring eventual consistency without blocking the read.
This technique is widely used in storage systems like Cassandra, where it opportunistically catches conflicts that evaded write-time checks. At write time, quorums offer a mechanism to detect and mitigate conflicts early by requiring a minimum number of replicas to acknowledge the update before completion. In tunable quorum configurations, parameters such as write quorum (W) and read quorum (R) are set such that W + R > N (where N is the total number of replicas), increasing the likelihood that concurrent writes intersect and are serialized or detected via version comparison. While not guaranteeing conflict-free operation in highly concurrent scenarios, this approach reduces the probability of undetected divergences by ensuring most writes overlap, as implemented in Dynamo to balance consistency and availability. Regarding detection accuracy, vector clocks achieve low false positive rates for concurrency identification due to their precise logical ordering, with no erroneous flagging of causally ordered updates in ideal conditions. However, practical implementations may introduce minor false positives from clock truncation or approximations to manage vector size, though production systems report negligible impact on reconciliation efficiency.
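The dominance test described above can be written directly from the definition: one version precedes the other iff every counter is less than or equal, with at least one strictly less; if neither direction holds, the versions are concurrent and must be reconciled. A minimal sketch (function name invented for illustration):

```python
# Compare two vector clocks (dicts of node -> counter); missing entries
# are treated as zero.

def compare(vc_a, vc_b):
    nodes = set(vc_a) | set(vc_b)
    a_le_b = all(vc_a.get(n, 0) <= vc_b.get(n, 0) for n in nodes)
    b_le_a = all(vc_b.get(n, 0) <= vc_a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "a-before-b"     # a causally precedes b: no conflict
    if b_le_a:
        return "b-before-a"
    return "concurrent"         # incomparable: flag for conflict resolution

print(compare({"X": 1}, {"X": 2}))                  # a-before-b
print(compare({"X": 2, "Y": 1}, {"X": 1, "Y": 2}))  # concurrent
```

The second call shows the "overlapping but non-subsumed histories" case: each clock is ahead on a different node, so neither version can safely overwrite the other.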

Resolution Strategies

In eventual consistency systems, once conflicts are identified, resolution strategies determine how divergent replicas reconcile to a common state, ensuring convergence without requiring synchronous coordination. These strategies vary in complexity and trade-offs between simplicity, data preservation, and overhead, often leveraging timestamps, version vectors, or custom logic to select or merge updates.

One common approach is the last-writer-wins (LWW) strategy, where each update is associated with a timestamp or logical clock value, and the version with the most recent timestamp is selected, discarding others. This method, used in systems like Amazon DynamoDB, resolves conflicts by prioritizing the latest perceived update, promoting quick convergence at the cost of potential data loss if earlier updates are semantically important. LWW is particularly effective for simple data types like registers or counters where overwriting is acceptable, but it relies on reliable clock synchronization to avoid arbitrary decisions.

To avoid data loss, multi-version concurrency control retains multiple versions of the data, allowing application logic to merge them rather than discarding any. Conflict-free replicated data types (CRDTs) exemplify this by designing operations that are commutative and idempotent, ensuring merges always yield the same result regardless of order—for instance, grow-only counters accumulate increments without overwrites. CRDTs, as formalized in foundational work, support strong eventual consistency by propagating all operations or states, enabling replicas to integrate updates independently. Application-level resolution delegates merging to developer-defined rules tailored to the domain, such as taking the union of sets or applying custom heuristics for documents. This flexibility accommodates complex semantics but requires careful design to guarantee convergence, often building on CRDT primitives for foundational guarantees.
These strategies involve trade-offs: LWW offers simplicity and low storage overhead but risks losing concurrent updates, while CRDT-based multi-version approaches preserve all information at the expense of increased complexity and space requirements for tracking versions or operations. For example, collaborative text editing systems merge operational transformations or CRDT operations to integrate concurrent insertions without overwrites, maintaining document integrity across replicas.
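The contrast between the two strategies can be made concrete with a minimal sketch, assuming invented names (`lww_merge`, `GCounter`) rather than any system's API: LWW silently drops the losing concurrent write, while a grow-only counter CRDT merges per-node counts so no increment is lost.

```python
# Last-writer-wins: each value is (timestamp, payload); newest wins outright.
def lww_merge(a, b):
    return a if a[0] >= b[0] else b

# Grow-only counter CRDT: one monotonically increasing count per replica;
# merge takes per-node maxima, so merging is commutative and idempotent.
class GCounter:
    def __init__(self):
        self.counts = {}                       # node -> increments from it

    def increment(self, node, amount=1):
        self.counts[node] = self.counts.get(node, 0) + amount

    def merge(self, other):
        for node, c in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), c)

    def value(self):
        return sum(self.counts.values())

# LWW: the concurrent write with the older timestamp is discarded.
print(lww_merge((2, "cart-v2"), (1, "cart-v1")))   # (2, 'cart-v2')

# G-Counter: concurrent increments on two replicas both survive the merge.
r1, r2 = GCounter(), GCounter()
r1.increment("A", 3)
r2.increment("B", 2)
r1.merge(r2)
print(r1.value())                                   # 5
```

Merging `r2` into `r1` a second time leaves the value at 5, illustrating the idempotence that makes CRDT merges safe to repeat during anti-entropy.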

Variants and Extensions

Strong Eventual Consistency

Strong eventual consistency is a variant of eventual consistency that adds the guarantee of strong convergence: if no new updates are made and all replicas have received the same set of updates, they will immediately be in equivalent states. This model maintains the availability of eventual consistency while ensuring deterministic agreement among replicas without requiring synchronous coordination or consensus. The guarantees of strong eventual consistency include the core properties of eventual consistency—such as eventual visibility of updates across all replicas—along with strong convergence, where replicas that have applied the same updates reach identical states. This ensures deterministic convergence to a consistent state without rollbacks, as formalized in work on replicated systems emphasizing termination, eventual delivery, and convergence. In practice, strong eventual consistency is commonly implemented using conflict-free replicated data types (CRDTs), which employ commutative and associative operations to guarantee convergence without explicit conflict resolution. Techniques such as vector clocks may be used to track causality, but the focus is on operation designs that inherently avoid conflicts. This variant was formalized in 2011 as a model for highly available replicated systems, balancing safety with the trade-offs imposed by the CAP theorem by prioritizing availability.

Read-Your-Writes Consistency

Read-your-writes consistency is a client-centric guarantee within eventual consistency models, ensuring that a process or client that performs an update on a data item will subsequently observe that updated value in its own reads, avoiding the anomaly of seeing stale data resulting from its own actions. This property addresses a common issue in distributed systems where replicas may lag, but it applies only to the updating client and does not extend to other clients or sessions.

To achieve read-your-writes, systems often employ client affinity, directing a client's reads and writes to the same replica or a consistent subset of replicas to ensure the update is visible immediately. Alternatively, mechanisms like session tokens can be used, where a client receives a token upon writing that encodes the update's version or timestamp; subsequent reads include this token, allowing the system to route the request to a replica that has applied the update or to block until propagation completes. These approaches maintain eventual consistency across the system while providing this targeted guarantee without requiring global synchronization.

In practice, read-your-writes enhances usability in applications such as shopping carts, where a user adding an item expects to see it reflected in their immediate view of the cart, preventing confusion from temporary inconsistencies. However, it does not guarantee ordered visibility across multiple clients—for instance, one user's write may not be immediately seen by another—and remains fundamentally eventual for non-updating observers, potentially leading to temporary discrepancies. Under the PACELC framework, read-your-writes represents a trade-off that boosts consistency during normal operation (the "E", or else, case) at the potential cost of increased latency, as systems may need to wait for replicas to catch up or enforce session stickiness, without compromising availability during partitions. This makes it a practical extension to base eventual consistency.
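The session-token mechanism can be sketched as follows. All names here (`Replica`, `write`, `read_your_writes`) are hypothetical: the write returns the version it produced as a token, and a read carrying that token skips replicas that have not yet applied the write.

```python
# Read-your-writes via a session token (illustrative sketch).

class Replica:
    def __init__(self):
        self.store = {}                    # key -> (version, value)

    def apply(self, key, version, value):
        self.store[key] = (version, value)

    def get(self, key):
        return self.store.get(key, (0, None))

def write(primary, key, value):
    version = primary.get(key)[0] + 1
    primary.apply(key, version, value)
    return version                         # returned to the client as a token

def read_your_writes(replicas, key, token):
    for r in replicas:
        version, value = r.get(key)
        if version >= token:               # replica has applied our write
            return value
    return None                            # a real system would retry or block

primary, lagging = Replica(), Replica()
token = write(primary, "profile", "new-bio")   # lagging hasn't replicated yet
print(read_your_writes([lagging, primary], "profile", token))   # new-bio
```

Even though the lagging replica is consulted first, the token steers the read past it; a client without the token might still observe the stale (empty) state, which is exactly the per-session scope of the guarantee.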

Applications and Implications

Real-World Implementations

Dynamo, introduced by Amazon in 2007, is a highly available key-value store that employs eventual consistency to prioritize availability and partition tolerance over strict consistency. It uses a gossip-based protocol for disseminating membership and state updates across replicas and allows tunable consistency through configurable read (R) and write (W) quorums, where setting R + W > N (with N as the number of replicas) ensures that reads eventually reflect recent writes under normal operation. Apache Cassandra, an open-source distributed database inspired by Dynamo, implements eventual consistency via timestamp-based versioning and mechanisms like hinted handoffs and read repairs to propagate updates asynchronously across replicas. It provides tunable consistency levels for reads and writes, such as QUORUM, which requires responses from a majority of replicas (defined as RF/2 + 1, where RF is the replication factor) to overlap read and write sets and guarantee eventual convergence.

Riak, a NoSQL key-value store developed by Basho Technologies, was originally built around eventual consistency to support high availability in distributed environments, allowing reads to potentially return stale data during partitions but converging to the latest state over time through anti-entropy processes. Similarly, Project Voldemort, a distributed data store created by LinkedIn and modeled after Dynamo, used vector clocks for versioning and eventual consistency, with configurable replication (N), read (R), and write (W) parameters to balance availability and consistency, ensuring that if R + W > N, the system achieves stronger read guarantees while defaulting to eventual consistency under failures.

The Domain Name System (DNS) exemplifies eventual consistency outside databases, where updates to name records propagate asynchronously through a hierarchy of authoritative servers and caches, with resolvers eventually converging to the latest records as time-to-live (TTL) values expire, without immediate global synchronization.
Eventual consistency traces its roots to the 1990s Bayou project at Xerox PARC, which pioneered weakly connected replicated storage for mobile applications using application-specific conflict resolution and anti-entropy protocols to ensure replicas converge over time. This concept evolved into modern cloud services, such as Amazon S3, which initially relied on eventual consistency for operations like deletes and overwrites to maintain availability but transitioned to strong read-after-write consistency in 2020.

Advantages and Limitations

Eventual consistency offers several key advantages in distributed systems, particularly in environments where availability and performance are prioritized over immediate data uniformity. By allowing asynchronous propagation of updates, it ensures that systems remain operational even during network partitions, enabling continuous read and write availability without blocking operations for global synchronization. This model supports low-latency access across partitioned networks, as replicas can handle requests independently, facilitating horizontal scaling and fault tolerance in large-scale deployments. Furthermore, it enables high write throughput, as updates can be accepted locally without requiring coordination across all nodes, which is essential for applications with heavy write loads such as social media feeds or product inventories.

Despite these benefits, eventual consistency introduces notable limitations that can impact system reliability and development effort. A primary drawback is the potential for stale reads, where clients may receive outdated data during the convergence period before all replicas synchronize, leading to temporary inconsistencies that could confuse users or affect decision-making. Additionally, handling conflicts arising from concurrent updates adds significant complexity to application logic, as developers must implement resolution strategies like last-writer-wins or custom merging, which can be error-prone and difficult to debug in distributed environments. These challenges are exacerbated in scenarios requiring precise ordering or atomicity, such as financial transactions, where even brief inconsistencies could result in monetary errors or regulatory violations, making eventual consistency unsuitable for such high-stakes use cases.
To mitigate these limitations, eventual consistency models often incorporate tunable quorums, allowing system designers to adjust read and write thresholds (e.g., via parameters N, R, and W) to balance consistency strength with availability, approaching stronger guarantees when needed without fully sacrificing performance. Benchmarks demonstrate that this approach can yield significantly higher write throughput and lower latencies compared to strongly consistent models; for instance, systems using eventual consistency have shown 16.5% to 59.5% faster response times at the 99.9th percentile in real-world deployments. This flexibility aligns with the CAP theorem's trade-offs, prioritizing availability and partition tolerance over strict consistency in distributed settings.

References

  1. [1]
    [PDF] De-mystifying “eventual consistency” in distributed systems - Oracle
    Eventual consistency is not meaningful or relevant in centralized (single copy) systems since there's no need for propagation. Various distributed systems ...
  2. [2]
    [PDF] Perspectives on the CAP Theorem - Research
    Brewer first presented the CAP Theorem in the context of a web service. A web service is implemented by a set of servers, perhaps distributed over a set of ...
  3. [3]
    [PDF] Dynamo: Amazon's Highly Available Key-value Store
    This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an “ ...
  4. [4]
    Eventually Consistent - All Things Distributed
    Dec 19, 2007 · The most popular system that implements eventual consistency is DNS, the domain name system. Updates to a name are distributed according to a ...
  5. [5]
    [PDF] CAP Twelve Years Later: How the “Rules” Have Changed
    The. CAP theorem's aim was to justify the need to explore a wider design space—hence the “2 of 3” formulation. The theorem first appeared in fall 1998. It was ...
  6. [6]
    Session guarantees for weakly consistent replicated data
    Session guarantees for weakly consistent replicated data. Abstract: Four per-session guarantees are proposed to aid users and applications of weakly consistent ...
  7. [7]
    Linearizability: a correctness condition for concurrent objects
    This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of ...
  8. [8]
    [PDF] Principles of Eventual Consistency - Microsoft
    We introduce a stronger definition of eventual consistency in §5.1, which does not suffer from these limitations. 4.3 Replicated Data Types. Sequential state ...
  9. [9]
    [PDF] Brewer's Conjecture and the Feasibility of
    Seth Gilbert*. Nancy Lynch*. Abstract. When designing distributed web services, there are three properties that are commonly desired: consistency, avail ...
  10. [10]
    [PDF] Scalable Causal Consistency for Wide-Area Storage with COPS
    Sep 6, 2011 · To our knowledge, this paper is the first to name and formally define causal+ consistency. Interestingly, several previous systems [10, 41].
  11. [11]
    [PDF] A Short Primer on Causal Consistency - USENIX
    Causal consistency is a better-than-eventual consistency model that still allows guaranteed low latency operations. It captures the causal relationships ...Missing: seminal | Show results with:seminal
  12. [12]
    [PDF] Bolt-on Causal Consistency - Peter Bailis
    Apr 22, 2013 · In this work, we consider the relationship between two important weak consistency models: eventual consistency and causal consistency. Eventual ...Missing: seminal | Show results with:seminal
  13. [13]
    [PDF] Managing Update Conflicts in Bayou, a Weakly Connected ...
    This paper presents the motivation for and design of these mechanisms and describes the experiences gained with an initial implementation of the system. 1.
  14. [14]
    [PDF] Epidemic Algorithms for Replicated Database Maintenance
    It is possible to replace complex deterministic algorithms for replicated database consistency with simple randomized algorithms that require few ...
  15. [15]
    [PDF] Time, Clocks, and the Ordering of Events in a Distributed System
    A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is ...
  16. [16]
    [PDF] Eventually consistent - CMU 15-799
    The most popular system that implements eventual consistency is the domain name system (DNS). Updates to a name are distributed according to a configured ...
  17. [17]
    Eventual Consistency Today: Limitations, Extensions, and Beyond
    May 1, 2013 · This article begins to answer this question by describing several notable developments in the theory and practice of eventual consistency.
  18. [18]
    [PDF] Optimistic replication - cs.wisc.edu
    Several state-transfer systems use vector clocks to detect conflicts, defining any two concurrent updates to the same object to be in conflict. Vector ...
  19. [19]
    [PDF] Cassandra - A Decentralized Structured Storage System
    Sep 18, 2009 · Cassandra - A Decentralized Structured Storage System. Avinash ... Update conflicts are typically managed using specialized conflict resolution ...
  20. [20]
    Global tables: How it works - Amazon DynamoDB
    To help ensure eventual consistency, DynamoDB global tables use a last writer wins reconciliation between concurrent updates, in which DynamoDB makes a best ...
  21. [21]
    [PDF] Conflict-free Replicated Data Types
    Abstract: Replicating data under Eventual Consistency (EC) allows any replica to accept updates without remote synchronisation. This ensures performance and ...
  22. [22]
    Eventual Consistency Today: Limitations, Extensions, and Beyond
    Apr 9, 2013 · ... "winning" value, often using a simple rule such as "last writer wins" (e.g., via a clock value embedded in each write). Suppose you want to ...
  23. [23]
    [PDF] A Consistency in Non-Transactional Distributed Storage Systems
    Session guarantees. Session guarantees were first described by Terry et al. [1994]. Although originally defined in connection to client sessions, session ...
  24. [24]
    [PDF] Conflict-free Replicated Data Types - ASC
    Under a formal Strong Eventual Consistency (SEC) model, we study sufficient conditions for convergence. A data type that satisfies these conditions is ...
  25. [25]
    Eventually Consistent - Communications of the ACM
    Jan 1, 2009 · Eventual consistency. This is a specific form of weak consistency; the storage system guarantees that if no new updates are made to the ...
  26. [26]
    Consistency level choices - Azure Cosmos DB | Microsoft Learn
    Sep 3, 2025 · Azure Cosmos DB has five consistency levels to help balance eventual consistency, availability, and latency trade-offs.
  27. [27]
    [PDF] Consistency Tradeoffs in Modern Distributed Database System Design
    read-your-writes consistency, these are nonetheless reduced consistency ... Giving up both Cs in PACELC makes the design simpler; once a system is ...
  28. [28]
    Dynamo: amazon's highly available key-value store
    This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide ...
  29. [29]
    Dynamo | Apache Cassandra Documentation
    Data Versioning. Cassandra uses mutation timestamp versioning to guarantee eventual consistency of data. Specifically all mutations that enter the system do so ...
  30. [30]
    Strong Consistency - Riak Documentation
    Riak was originally designed as an eventually consistent system, fundamentally geared toward providing partition (i.e. fault) tolerance and high read and write ...
  31. [31]
    [PDF] Project Voldemort
    Nov 19, 2009 · By sacrificing strict consistency for eventual consistency. Consistency models: strict consistency (2-phase commits, PAXOS) ...
  32. [32]
    Amazon S3 Strong Consistency
    Amazon S3 delivers strong read-after-write consistency automatically for all applications, without changes to performance or availability.
  33. [33]
    Eventual consistency today
    When a server performs a write to its local key-value store, it can send the write to all other servers in the cluster. This write-forwarding becomes the ...