
Consistency model

A consistency model in computing defines the guarantees provided by a concurrent or distributed system regarding the ordering and visibility of operations on shared data across multiple processes or nodes, specifying which execution histories are permissible to ensure predictable behavior. These models establish a contract between programmers and the system, wherein adherence to certain programming disciplines results in outcomes equivalent to some sequential execution order consistent with the program's specified dependencies. Consistency models are fundamental to the design and analysis of parallel programming, multiprocessor systems, and distributed databases, where they balance correctness with performance trade-offs such as latency and availability.

Stronger models, like linearizability—which requires operations to appear atomic and respect real-time ordering—and sequential consistency, introduced by Leslie Lamport in 1979 as an interleaving of process operations executed in sequence, provide robust guarantees but impose higher synchronization costs. Weaker models, such as causal consistency (preserving cause-effect relationships without a total order) and eventual consistency (ensuring replicas converge after sufficient time without further updates), enable greater scalability and fault tolerance in large-scale systems like NoSQL databases, though they may allow temporary inconsistencies. The choice of consistency model depends on application requirements, with formal verification tools and testing frameworks like Jepsen used to validate system adherence. Over time, research has expanded to include session-based guarantees (e.g., read-your-writes) and hybrid approaches to address modern challenges in geo-replicated environments.

Fundamentals

Definition

In distributed and shared-memory systems, a consistency model is a formal contract that specifies the allowable orderings of operations—such as reads and writes—on shared data across multiple concurrent processes, ensuring predictable behavior by defining constraints on visibility and ordering without requiring all operations to be globally serialized. These models arise from the need to manage concurrency challenges, such as race conditions where the outcome of interleaved operations depends nondeterministically on their relative timing, by establishing partial orders that reconcile local and global views of memory.

Consistency models differ from atomicity, which ensures that individual operations appear to occur instantaneously at a single point in time without interleaving, by instead focusing on the propagation and perception of operation effects across distributed replicas or processors. They also contrast with isolation in transactional systems, which prevents concurrent transactions from interfering with one another as if executed sequentially, whereas consistency models emphasize non-transactional accesses and the rules for when updates become visible to observers.

A basic illustration occurs in a single-writer/multi-reader scenario, where one process performs a write to shared data, and multiple other processes read it; the consistency model dictates the conditions under which readers will observe the updated value, potentially allowing temporary discrepancies in visibility until the write propagates according to the specified ordering rules. Sequential consistency represents a strong example of such a model, requiring that all processes see operations in a single global order consistent with each process's individual program order.

Historical Development

The development of consistency models originated in the 1970s amid challenges in early multiprocessor systems, particularly issues arising from caching and shared-memory access in concurrent environments. Leslie Lamport introduced the concept of sequential consistency in his seminal 1979 paper, defining it as a model where the results of any execution appear to be the same as if the operations of all processors were executed in some sequential order consistent with each processor's program order. This formulation addressed the need for a formal guarantee of correctness in multiprocessor computers, providing a strong baseline for reasoning about parallel program behavior without requiring strict hardware synchronization at every memory access.

During the 1980s and 1990s, researchers advanced weaker consistency models to improve performance in shared-memory multiprocessors by relaxing some ordering constraints while preserving essential correctness properties. James R. Goodman proposed processor consistency in 1989, which enforces program order for reads and writes from the same processor but allows out-of-order visibility of writes across processors, enabling more aggressive caching and pipelining. Building on this, Kourosh Gharachorloo, Daniel Lenoski, James Laudon, Phillip Gibbons, Anoop Gupta, and John Hennessy introduced release consistency in 1990, distinguishing between ordinary and synchronization operations (acquire/release) to further buffer writes and reduce communication overhead in scalable systems.

The 2000s marked a shift toward even more relaxed models, driven by the proliferation of distributed systems and heterogeneous architectures, where strict consistency proved too costly for scalability. In distributed databases, Amazon's Dynamo system popularized eventual consistency in 2007, allowing temporary inconsistencies with guarantees of convergence under normal conditions to prioritize availability and partition tolerance. Concurrently, hardware architectures adopted weaker models; for instance, ARM formalized its relaxed model in the 2011 Architecture Reference Manual, permitting reordering of memory operations except those bounded by explicit barriers, to optimize for power-efficient mobile and embedded devices.

By the 2010s and into the 2020s, consistency models have integrated deeply with modern hardware and cloud-native infrastructures, emphasizing tunable trade-offs for diverse workloads up to 2025. Intel's x86 architecture implements Total Store Order (TSO) as its default model, where stores from a processor are seen in order by others but loads may overtake stores, formalized in vendor documentation and analyzed in rigorous academic models. Similarly, AMD's AMD64 architecture adopts a comparable relaxed ordering, with extensions for atomic primitives to support synchronization. In cloud-native environments, tunable consistency—allowing applications to select levels from strong to eventual—has become standard, influenced by systems like Apache Cassandra and Azure Cosmos DB, enabling elastic scaling in microservices architectures.

Key Terminology

In the context of consistency models, operation ordering refers to the constraints imposed on the sequence of memory accesses across multiple processes or processors. Program order denotes the sequential arrangement of operations as defined within a single process, ensuring that each process perceives its own instructions in the intended sequence. A total order extends this to a global serialization, where all operations from every process appear to execute in a single, unambiguous sequence as if issued by one process at a time, as exemplified in strict models. In contrast, a partial order permits certain relaxations, allowing operations—such as writes to independent locations—to be reordered or observed out of program order without violating the model's guarantees.

Visibility and propagation describe how and when the effects of a write become observable to other processes. Visibility occurs when a write's updated value is accessible to subsequent reads by any process, often delayed by hardware mechanisms such as store buffers and caches that must propagate changes across the system. Propagation ensures that this value is disseminated reliably, typically requiring acknowledgment from caches or main memory. Synchronization points, such as explicit fence instructions in the instruction set, serve as critical junctures where visibility is enforced, guaranteeing that prior writes are complete and visible before subsequent operations proceed.

Linearizability and serializability are distinct correctness criteria for concurrent systems, both aiming to provide intuitive ordering but differing in their constraints. Linearizability imposes real-time ordering, ensuring that each operation appears to take effect instantaneously at a single point between its invocation and response, preserving the partial order of non-overlapping operations while allowing concurrency. Serializability, however, requires only that the overall execution be equivalent to some serial (non-concurrent) execution of the operations, without enforcing real-time constraints on individual operations' timing. Linearizability is thus a stricter form, often viewed as a special case of strict serializability for single-operation transactions.

Memory Consistency Models

Strict Consistency

Strict consistency represents the strongest form of memory consistency, requiring that all memory operations appear to occur instantaneously at unique points in global time. Under this model, a read operation on a memory location must always return the value produced by the most recent write to that location, where recency is determined by an absolute global time order. This equivalence holds as if the system operates with a single, global memory visible to all processes without any delay or caching effects.

The key properties of strict consistency include absolute time ordering of all shared memory accesses and the prohibition of any buffering, caching, or reordering of operations that could violate visibility. Writes must propagate instantaneously across the system, ensuring that no process observes an outdated value after a write has occurred in global time. This model demands perfect clock synchronization among all processors, making it theoretically ideal for maintaining a consistent view of memory but challenging to achieve in practice due to communication latencies in shared-memory multiprocessors.

For example, consider a shared variable x initialized to 0. If process P1 executes a write x = 1 at global time t_1, then any subsequent read of x by process P2 at time t_2 > t_1 must return 1, irrespective of the processors' physical separation or network delays. This immediate visibility ensures no temporal anomalies in observed values.

However, the stringent requirements of strict consistency impose significant performance limitations, as implementing instantaneous propagation necessitates excessive synchronization overhead, such as frequent barriers or locks across all processors. As a result, it is rarely fully implemented in modern shared-memory systems, where weaker models like sequential consistency provide sufficient guarantees with better scalability.

Formally, strict consistency requires that the set of all memory operations across processes forms a total order that is consistent with the real-time partial order, meaning non-overlapping operations respect their initiation and completion times in a linearizable manner. This linearizability condition ensures that each operation appears to take effect atomically at a single point between its invocation and response, preserving causality and order in real time.
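As a hedged formal sketch of this condition—where the notation (H for the set of operations, <_rt for real-time precedence, <_to for the chosen total order) is introduced here for illustration rather than drawn from a specific reference—the requirement can be written as:

```latex
% Illustrative formalization of the strict/linearizable ordering condition.
% H: set of memory operations; <_rt: real-time precedence; <_to: the total order.
\forall\, o_1, o_2 \in H:\quad o_1 <_{rt} o_2 \;\Rightarrow\; o_1 <_{to} o_2,
\qquad \text{and each read } r(x) \text{ returns the value of the latest write } w(x) \text{ preceding it in } <_{to}.
```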

Sequential Consistency

Sequential consistency is a memory consistency model introduced by Leslie Lamport in 1979, defined such that the result of any execution of a multiprocessor program is the same as if the operations of all processors were executed in some sequential order, with the operations of each individual processor appearing in the program's specified order within that global sequence. This model ensures that all memory accesses appear atomic to the programmer and that there exists a single, total order of all operations that respects the per-process program order.

Key properties of sequential consistency include the preservation of program order for operations within each process, meaning that if one operation precedes another in a process's code, it will precede it in the global order. Additionally, the model establishes a global sequential order for all operations across processes, but this order is not tied to real-time constraints, allowing concurrent operations to be serialized in any valid interleaving without requiring instantaneous visibility. Sequential consistency is therefore weaker than strict consistency, which demands that operations be ordered according to absolute real time.

A representative example involves two processes communicating via shared flags to illustrate the model's guarantees. Process P1 executes flagX = 1 followed by flagY = 1, while Process P2 executes a loop checking while (flagY == 0); and then reads flagX. Under sequential consistency, if P2 exits the loop and observes flagY = 1, it must also observe flagX = 1, as the global order preserves P1's program order and ensures a consistent view visible to all processes. This prevents anomalous outcomes, such as P2 seeing flagY = 1 but flagX = 0, which could occur under weaker models.

In practice, sequential consistency can be achieved through mechanisms like memory barriers or locks that enforce ordering at key points, serializing operations to mimic a sequential execution. It forms the basis for the happens-before ordering in the Java Memory Model, where properly synchronized programs—free of data races—exhibit sequentially consistent behavior, ensuring that actions ordered by synchronization (e.g., volatile writes and reads) appear in a consistent global sequence.

Despite its intuitive appeal, sequential consistency imposes significant performance costs in large-scale systems, as it prohibits common hardware optimizations such as write buffering, operation reordering, and overlapping memory accesses, which are essential for hiding memory latency in multiprocessors with caches and pipelines. These restrictions limit parallelism and increase latency, making it challenging to implement efficiently in modern distributed or multicore architectures without compromising throughput.
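The flag example can be sketched in C++ using sequentially consistent atomics, which request the model's single global order from the compiler and hardware; the variable names follow the example above, and the two threads stand in for processes P1 and P2. This is a minimal illustration, not a prescribed idiom.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Shared flags, both initially 0. std::memory_order_seq_cst (the default)
// asks for a single total order over these operations, matching the model.
std::atomic<int> flagX{0};
std::atomic<int> flagY{0};

void p1() {
    flagX.store(1);   // first in P1's program order
    flagY.store(1);   // second in P1's program order
}

void p2() {
    while (flagY.load() == 0) { /* spin until P1's second write is visible */ }
    // Under sequential consistency, observing flagY == 1 implies flagX == 1.
    assert(flagX.load() == 1);
}

int main() {
    std::thread t1(p1), t2(p2);
    t1.join();
    t2.join();
    return 0;
}
```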

Causal Consistency

Causal consistency is a memory consistency model in distributed systems that ensures all processes observe causally related operations in the same order, while allowing independent operations to be reordered across processes. Specifically, if operation A happens-before operation B due to a causal dependency—such as one reading the result of the other or both being in the same thread of execution—then every process sees A before B; however, operations without such dependencies may appear in varying orders to different processes.

This model combines per-process ordering, where operations within a single process appear in program order, with a global causal order for dependent operations, making it weaker than sequential consistency, which requires a single total order for all operations across all processes. As a result, causal consistency permits greater concurrency and performance by relaxing constraints on unrelated events, while still preserving intuitive notions of cause and effect in applications. It is stronger than weaker models like eventual consistency but avoids the overhead of full serialization.

A representative example is a distributed chat system where a user posts a message (operation A), and another user replies to it (operation B, causally dependent on A via a read of A). Under causal consistency, all users see the reply after the original message, but unrelated messages from other conversations can interleave in different orders for different observers, enhancing responsiveness without violating causality.

Formally, causal consistency relies on the happens-before relation to identify dependencies, originally defined using logical clocks to capture potential causality in distributed systems. Implementations often employ Lamport clocks, which assign timestamps to events such that if A happens-before B, then the clock of A is less than that of B, enabling processes to track and enforce causal order during reads and writes. Vector clocks extend this by maintaining a vector of timestamps per process, providing a more precise partial order for detecting concurrency without assuming a global clock.

Causal consistency finds application in session-based applications, such as collaborative tools or user sessions in databases, where actions within a session (e.g., a sequence of reads and writes by a single client) must maintain causal dependencies, but concurrent sessions from different users can proceed independently for better scalability. For instance, MongoDB implements causal consistency at the session level to ensure ordered observations of dependent operations across distributed replicas.
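A minimal vector-clock sketch in C++ illustrates how such implementations can track the happens-before relation; the class and method names are illustrative and not taken from any particular system.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative vector clock: one logical counter per process.
class VectorClock {
public:
    explicit VectorClock(std::size_t numProcesses) : clock_(numProcesses, 0) {}

    // Local event or message send: advance this process's own counter.
    void tick(std::size_t processId) { ++clock_[processId]; }

    // Message receive: take the component-wise maximum, then advance locally.
    void merge(const VectorClock& other, std::size_t processId) {
        for (std::size_t i = 0; i < clock_.size(); ++i)
            clock_[i] = std::max(clock_[i], other.clock_[i]);
        ++clock_[processId];
    }

    // a happens-before b iff a <= b component-wise and a != b.
    static bool happensBefore(const VectorClock& a, const VectorClock& b) {
        bool strictlyLess = false;
        for (std::size_t i = 0; i < a.clock_.size(); ++i) {
            if (a.clock_[i] > b.clock_[i]) return false;
            if (a.clock_[i] < b.clock_[i]) strictlyLess = true;
        }
        return strictlyLess;
    }

private:
    std::vector<unsigned long> clock_;
};
```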

Intermediate Consistency Models

Processor Consistency

Processor consistency, proposed by James R. Goodman in 1989, is an intermediate memory consistency model that relaxes sequential consistency to enable hardware optimizations like write buffering while preserving key ordering guarantees. In this model, all writes issued by a given processor are observed in program order by every processor in the system, ensuring that if multiple writes from the same processor are visible to another processor, they appear in the issuing processor's order. However, a processor may observe the results of its own writes immediately, before those writes are propagated to and visible by other processors, allowing the use of store buffers to hide memory latency. This distinction arises from treating reads and writes separately in terms of buffering, where reads from the issuing processor can bypass the store buffer.

The core properties of processor consistency include maintaining program order for reads and writes independently on each processor and enforcing a consistent per-processor write serialization across all observers. Specifically, it combines cache coherence—ensuring single-writer-multiple-reader semantics per memory location—with a pipelined RAM (PRAM) store ordering, where all processors agree on the relative order of stores from any one processor. Unlike stricter models, it does not require a global total order for all memory accesses, permitting reordering of reads relative to writes from other processors as long as intra-processor write order is upheld. These properties enable efficient implementations in multiprocessor systems by allowing delayed write propagation without compromising the perceived order of a single processor's operations.

To illustrate, suppose processor P1 executes a write to variable A followed by a write to variable B. Under processor consistency, any other processor P2 that observes both writes will see the update to A before the update to B, respecting P1's program order. However, P2 might perform a read of A and obtain its old value if P1's write to A is still pending in P1's store buffer, while a subsequent read by P1 itself would return the new value of A. This example highlights how the model accommodates store buffering for performance, potentially leading to temporary inconsistencies in visibility across processors.

Compared to sequential consistency, processor consistency is weaker because it serializes writes only on a per-processor basis rather than enforcing a single global interleaving of all operations from all processors, which can allow more flexible hardware designs at the cost of requiring programmers to handle potential reordering with explicit synchronization. Early implementations of processor consistency appeared in the SPARC V8 architecture via its Total Store Ordering (TSO) model, which defines similar semantics as the default for both uniprocessors and shared-memory multiprocessors, permitting write buffering while ensuring ordered visibility of per-processor stores.

Pipelined RAM Consistency

Pipelined RAM consistency, also known as PRAM or FIFO consistency, is a memory consistency model in which all processors observe the writes issued by any individual processor in the same order that those writes were performed by the issuing processor, irrespective of the memory locations involved. This model ensures that the sequence of writes from a single source is preserved globally, but the relative ordering of writes from different processors can vary across observers, allowing interleaving in arbitrary ways. Unlike stricter models such as sequential consistency, PRAM permits optimizations like buffering and pipelining of memory operations to improve performance in multiprocessor systems, as long as the per-processor write order is maintained.

The key properties of PRAM include constant-time reads from local caches and serialized broadcasts for writes, which enable scalability by reducing contention on shared memory accesses. It is weaker than processor consistency in that it does not require all processors to agree on a single serialization of writes to the same location—the coherence property that processor consistency adds—allowing greater reordering for cross-processor interactions while still providing per-processor ordering. In practice, hardware cache coherence protocols can supply that per-location agreement, but PRAM itself does not impose any stricter global ordering.

For instance, consider two processors P1 and P2: P1 performs a write to location x followed by a write to location y, while P2 performs a write to location x. Under PRAM, all processors will see P1's write to x before its write to y, but one processor might observe P2's write to x after P1's write to y, while another observes it before.

Historically, PRAM was proposed in the late 1980s to address performance bottlenecks in shared-memory multiprocessors, particularly for vector processors where pipelined access patterns are common, by modeling memory as a scalable broadcast-based system without full synchronization. Limitations of PRAM arise in scenarios requiring causal relationships across processors, as it does not guarantee that causally related operations (e.g., a write followed by a read that enables another write) are observed in a consistent order globally, necessitating explicit synchronization primitives like barriers or locks for such dependencies.

Cache Consistency

Cache consistency, more precisely known as cache coherence, is not a memory consistency model but a hardware-level mechanism in multiprocessor systems that supports such models by ensuring all processors observe a coherent view of individual memory locations across their private caches. It addresses the challenge posed by caching, where multiple copies of the same data may exist, by enforcing protocols that propagate updates or invalidations to maintain uniformity for individual locations. This is achieved through write-invalidate or write-update strategies, where a processor's write to a location either invalidates remote cache copies or updates them, preventing stale data reads.

Common implementations rely on either snooping-based or directory-based approaches. In snooping protocols, caches monitor a shared interconnect (such as a bus) for memory transactions and respond accordingly to maintain coherence, as introduced in early bus-based multiprocessor designs. Directory-based protocols, in contrast, use a centralized or distributed directory to track the state and location of cached blocks, notifying relevant caches on modifications; this scales better for large systems without broadcast overhead. Both mechanisms uphold the single-writer-multiple-reader (SWMR) property, ensuring that only one cache holds a modifiable copy of a block at any time while allowing multiple read-only copies.

A representative example involves a write operation: if processor P1 writes to memory location x, its cache controller issues an invalidation request, causing caches in other processors (e.g., P2) holding x to mark their copies as invalid; subsequent reads by P2 then retrieve the updated value from P1's cache or main memory via the interconnect.

One widely adopted protocol is MESI (Modified, Exclusive, Shared, Invalid), employed in modern multicore processors such as those from Intel. In MESI, cache lines transition between states—Modified for a uniquely held dirty copy, Exclusive for a uniquely held clean copy, Shared for multiple clean copies, and Invalid for non-present data—reducing unnecessary traffic by distinguishing clean shared data from modified versions. These hardware cache coherence protocols form the foundational layer for higher-level memory consistency models, such as sequential consistency, by guaranteeing that shared data appears atomic and ordered across processors for individual locations. They approximate the idealized goal of strict consistency for cached accesses while optimizing performance in real systems.
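A compact C++ sketch of MESI as an enum plus a transition function may help make the state machine concrete; the event names and simplified transition table are illustrative and omit details such as write-backs and bus responses found in real implementations.

```cpp
#include <cstdio>

// Illustrative MESI cache-line states and a simplified transition function.
enum class MesiState { Modified, Exclusive, Shared, Invalid };

enum class Event {
    LocalRead,    // this core reads the line
    LocalWrite,   // this core writes the line
    RemoteRead,   // another core's read is observed (snooped)
    RemoteWrite   // another core's write/invalidation is observed
};

// Simplified next-state logic for one cache line (write-back details omitted).
MesiState nextState(MesiState s, Event e, bool otherCopiesExist) {
    switch (e) {
        case Event::LocalRead:
            if (s == MesiState::Invalid)
                return otherCopiesExist ? MesiState::Shared : MesiState::Exclusive;
            return s;                               // M, E, S remain readable
        case Event::LocalWrite:
            return MesiState::Modified;             // gain an exclusive dirty copy
        case Event::RemoteRead:
            if (s == MesiState::Modified || s == MesiState::Exclusive)
                return MesiState::Shared;           // downgrade; data is supplied
            return s;
        case Event::RemoteWrite:
            return MesiState::Invalid;              // remote writer invalidates us
    }
    return s;
}

int main() {
    MesiState line = MesiState::Invalid;
    line = nextState(line, Event::LocalRead, /*otherCopiesExist=*/false);  // -> Exclusive
    line = nextState(line, Event::LocalWrite, false);                      // -> Modified
    line = nextState(line, Event::RemoteRead, true);                       // -> Shared
    std::printf("final state: %d\n", static_cast<int>(line));
    return 0;
}
```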

Weak and Relaxed Ordering Models

Weak Ordering

Weak ordering is a memory consistency model in which memory operations can be reordered freely by the compiler and hardware, except at explicit synchronization points such as locks or barriers, ensuring that the system appears sequentially consistent only to programs that adhere to the specified synchronization discipline. This model classifies memory accesses into ordinary data operations, which may be reordered relative to other data operations, and synchronization operations, which establish ordering boundaries and prevent reordering across them.

The primary property of weak ordering is its provision of high performance through aggressive optimizations, as it allows processors to execute non-synchronized accesses out of order, such as delaying writes in store buffers or permitting reads to bypass pending writes, thereby maximizing flexibility while requiring programmers to insert primitives like fences to enforce necessary ordering. For instance, in a program without synchronization, a write to one variable might be delayed in a write buffer while subsequent reads or writes to unrelated variables proceed immediately, but upon reaching a synchronization point, all buffered operations are drained to ensure visibility to other processors.

In the taxonomy of consistency models, weak ordering is positioned as weaker than processor consistency, which maintains per-processor ordering for writes to different locations and imposes more restrictions on read-write interleaving; this relative weakness enables broader reordering opportunities and serves as a foundational basis for further relaxed models like release consistency. Implementations of weak ordering are found in architectures such as ARMv8-A, where accesses lacking dependencies can be issued or observed out of order unless barriers are explicitly used to impose ordering.
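A small C++ sketch shows the weak-ordering programming pattern: relaxed (unordered) accesses, with explicit fences acting as the synchronization points that order prior operations before the flag becomes visible. The producer/consumer names and data layout are illustrative assumptions, not a mandated idiom.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                      // ordinary, non-synchronized data
std::atomic<int> ready{0};         // flag used at the synchronization point

void producer() {
    data = 42;                                               // may be buffered/reordered
    std::atomic_thread_fence(std::memory_order_release);     // order prior writes before the flag
    ready.store(1, std::memory_order_relaxed);               // publish the flag
}

void consumer() {
    while (ready.load(std::memory_order_relaxed) == 0) { }   // wait for the flag
    std::atomic_thread_fence(std::memory_order_acquire);     // order subsequent reads after the flag
    assert(data == 42);   // the paired fences guarantee the data write is visible here
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```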

Release Consistency

Release consistency is a memory consistency model that relaxes the ordering of ordinary memory accesses while using synchronization operations—specifically acquires and releases—to control the visibility and ordering of shared data updates. In this model, ordinary reads and writes to shared variables are not required to be ordered with respect to each other across processors unless constrained by synchronization points; however, an acquire operation ensures that the acquiring processor sees all writes that occurred before a corresponding release on another processor, and a release ensures that subsequent acquires on other processors will see the writes performed before that release. This approach extends weak ordering by explicitly distinguishing between synchronization accesses (acquires and releases) and ordinary accesses, allowing greater flexibility in hardware implementations while maintaining programmer control over critical sections.

The model, introduced as an improvement over weak consistency, permits processors to buffer and reorder ordinary accesses freely, as long as synchronization points enforce the necessary ordering; for instance, writes within a critical section need not propagate immediately but are guaranteed to be visible after the release. Release consistency supports lazy propagation mechanisms, where the propagation of a processor's writes can be delayed until a subsequent synchronization operation, reducing inter-processor communication traffic compared to stricter models like sequential consistency, which require immediate visibility of all writes. This lazy propagation minimizes cache invalidations and coherence overhead, enabling better scalability in shared-memory multiprocessors.

Release consistency has two main variants: RCsc (release consistency with sequential consistency for synchronization operations) and RCpc (release consistency with processor consistency for synchronization operations). RCsc requires that all acquire, release, and other special synchronization operations appear sequentially consistent across processors, meaning they are totally ordered and respect program order on each processor. In contrast, RCpc, which is more commonly implemented due to its relaxed nature, enforces only processor consistency among special operations, allowing reordering of special writes before special reads from different processors while still maintaining program order for acquires before ordinary accesses and ordinary accesses before releases. RCpc further permits ordinary reads to return values from writes that have not yet been released, providing additional optimization opportunities without violating the core visibility guarantees.

A representative example of release consistency in action involves a shared lock protecting a critical section: a processor P1 performs writes to shared variables within the section after acquiring the lock, then releases the lock, making those writes visible to processor P2 only after P2 acquires the same lock, ensuring that P2 sees a consistent view of the data without requiring global ordering of all operations. This structured use of acquire and release points avoids the need for strict consistency on non-synchronized accesses, reducing latency in lock-unlock patterns common in parallel programs. The formal rules defining release consistency, particularly RCpc, are as follows:
  • R1 (Acquire rule): Before an ordinary read or write access is allowed to perform, all previous acquire accesses issued by the same processor must have completed successfully.
  • R2 (Release rule): Before a release access is allowed to perform, all previous ordinary read and write accesses issued by the same processor must have completed.
  • R3 (Special ordering): Acquire and release accesses (special accesses) obey processor consistency, meaning writes to synchronization variables are seen in program order by other processors, but reads of synchronization variables may see stale values unless ordered by prior acquires.
  • R4 (Visibility rule): A successful acquire guarantees that all writes performed before the corresponding release on another processor are visible to reads following the acquire, though ordinary accesses between synchronization points remain unordered.
These rules ensure that release consistency provides a barrier for update propagation without imposing unnecessary global ordering.
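The lock-protected critical-section pattern maps naturally onto C++ acquire/release atomics; the spinlock below is a minimal sketch of that pattern under stated assumptions (illustrative names, not a production lock or the canonical RC implementation).

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> locked{false};   // shared lock (the synchronization variable)
int sharedA = 0, sharedB = 0;      // ordinary shared data guarded by the lock

void acquire() {
    // Acquire semantics: all writes published by the previous holder become visible.
    while (locked.exchange(true, std::memory_order_acquire)) { /* spin */ }
}

void release() {
    // Release semantics: writes made inside the critical section are published.
    locked.store(false, std::memory_order_release);
}

void p1() {
    acquire();
    sharedA = 1;     // ordinary accesses: unordered outside synchronization points
    sharedB = 2;
    release();       // both writes become visible to the next acquirer
}

void p2() {
    acquire();
    // After acquiring the same lock, P2 sees a consistent view of the guarded data.
    assert((sharedA == 0 && sharedB == 0) || (sharedA == 1 && sharedB == 2));
    release();
}

int main() {
    std::thread t1(p1), t2(p2);
    t1.join();
    t2.join();
    return 0;
}
```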

Entry Consistency

Entry consistency is a consistency model designed for distributed shared memory (DSM) systems, where shared data becomes consistent at a processor only upon acquiring a synchronization object, such as a lock, that explicitly guards the relevant data objects. This model requires programmers to associate specific synchronization variables with shared data items, ensuring that updates to those items are propagated and visible exclusively to subsequent holders of the same synchronization object. Upon unlocking or releasing the object, any changes made under its protection may invalidate cached copies elsewhere, but consistency is not enforced for unguarded data.

Key properties of entry consistency include the reduction of unnecessary invalidations and data transfers, as updates are tied directly to synchronization events rather than broadcast broadly. It employs ownership tracking mechanisms to manage exclusive (for writes) or shared (for reads) access modes, allowing systems to prefetch or transfer only the pertinent data during acquire operations. For instance, if a lock L protects a shared variable x, a process acquiring L will see all prior updates to x made under L, but changes to unrelated variables remain unaffected unless guarded by the same lock. This fine-grained approach minimizes communication overhead in DSM environments.

Entry consistency offers advantages in scalability for distributed shared memory systems by leveraging common synchronization patterns to cluster related data transfers, thereby reducing network traffic and cache misses compared to coarser models. In evaluations on DSM prototypes like Midway, it demonstrated significantly fewer messages—for example, 24 versus 1,802 in a benchmark with two processors—leading to improved performance without requiring hardware support for stronger guarantees. Relative to release consistency, a coarser variant, entry consistency provides stronger guarantees for data protected by the same lock, as visibility is enforced precisely at acquisition for associated objects, while remaining weaker for non-synchronized accesses.

Platform-Specific Relaxed Models

Relaxed Write-to-Read Ordering

Relaxed write-to-read ordering is a class of memory consistency models that permit a read operation to bypass a preceding write from the same processor, allowing the read to complete before the write is globally visible, while preserving program order for all write-to-write and read-to-read operations across processors. This relaxation stems from implementations where writes are buffered locally before committing to the shared memory system, enabling reads to access memory directly without waiting for the buffer to drain.

A key property of these models is their use of safety nets, such as serialization instructions or fences, to enforce ordering when needed for synchronization; for instance, Total Store Ordering (TSO) relies on atomic read-modify-write operations to ensure correctness in critical sections. They were prevalent in early relaxed architectures to balance performance gains from reordering with sufficient guarantees for programmability. Partial Store Order (PSO), an extension common in SPARC systems, maintains these write-to-read relaxations but introduces additional flexibility in write completion order across different locations, using explicit barriers like STBAR to restore total store ordering when required.

Consider a processor executing a write to location A (w1) followed by a read from a different location B (r1). Under relaxed write-to-read ordering, r1 may be performed before w1 becomes visible to other processors, so those processors can still observe the pre-existing value of A while w1 remains in the issuing processor's store buffer. This behavior contrasts with stricter models like sequential consistency, where such reordering is forbidden.

In the relaxed-model taxonomy of Adve and Gharachorloo, relaxed write-to-read ordering occupies an intermediate position: it is stricter than models such as weak ordering and release consistency—which further relax read-to-read and read-to-write constraints—but weaker than sequential consistency, since it allows intra-processor write-to-read reordering. Implementations approximating this model include the x86 architecture's TSO, where loads may pass earlier stores to different addresses, but stores maintain a total order and are not reordered with other stores.
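The classic store-buffering litmus test illustrates the relaxation. The C++ sketch below uses relaxed atomics so the compiler and hardware are permitted to reorder each thread's write-then-read pair, as a TSO machine may do; observing r1 == 0 && r2 == 0 is allowed under this model but impossible under sequential consistency. Variable names are illustrative.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};   // shared locations, both initially 0
int r1 = 0, r2 = 0;            // per-thread read results

void thread1() {
    x.store(1, std::memory_order_relaxed);   // w1: write x
    r1 = y.load(std::memory_order_relaxed);  // r1: read y (may bypass the buffered w1)
}

void thread2() {
    y.store(1, std::memory_order_relaxed);   // w2: write y
    r2 = x.load(std::memory_order_relaxed);  // r2: read x (may bypass the buffered w2)
}

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join();
    t2.join();
    // Under relaxed write-to-read ordering (e.g., TSO), r1 == 0 && r2 == 0 is permitted;
    // under sequential consistency at least one read must observe 1.
    std::printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}
```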

Alpha and PowerPC Models

The DEC Alpha architecture, introduced in the 1990s by Digital Equipment Corporation, employed a highly relaxed memory consistency model that permitted extensive reordering of memory operations to maximize performance in multiprocessor systems. Under this model, all types of memory accesses—loads and stores—could be reordered freely with respect to one another, including load-to-load, load-to-store, store-to-load, and store-to-store operations, except across explicit memory barriers. This full relaxation applied to operations on different memory locations, while the model preserved dependencies within a single processor and ensured write atomicity, meaning stores appeared atomic to other processors. To enforce ordering when necessary, programmers relied on two fence instructions: the memory barrier (MB), which serialized all prior and subsequent memory operations, and the write memory barrier (WMB), which ordered writes but allowed loads to pass.

The Alpha model's extreme flexibility enabled aggressive hardware optimizations, such as out-of-order execution and non-blocking caches, but it violated sequential consistency, potentially leading to counterintuitive behaviors in concurrent programs without barriers. For instance, a processor might observe a subsequent load completing before a prior store to a different location becomes globally visible, requiring explicit barriers for correct shared-memory programming. This approach influenced subsequent relaxed models by demonstrating the trade-offs between performance and programmability, with the architecture's reference manual specifying these rules to guide compiler and hardware implementations.

The PowerPC architecture, developed by IBM, Motorola, and Apple in the early 1990s, adopted a similarly relaxed memory ordering model, allowing free reordering of read-read, read-write, and write-read operations across different addresses to support high-performance pipelining and caching. In this model, memory operations lack inherent global ordering unless constrained by synchronization primitives, enabling stores to be buffered and loads to bypass prior writes, but preserving intra-processor data and address dependencies. Key properties include support for weak consistency, where sequential appearance is only guaranteed within a thread or across processors via explicit barriers, promoting optimizations like out-of-order execution while requiring careful fence usage for correctness.

PowerPC's relaxed ordering emphasizes programmer responsibility through instructions like lwsync (lightweight sync), which orders prior stores before subsequent stores and prior loads before subsequent loads and stores within a processor but permits store-to-load reordering, and sync (the full heavyweight barrier), which serializes all memory accesses globally. For example, in a producer-consumer pattern, a write to one location (w1) followed by a read from a different location (r2) may result in r2 observing stale data before w1 is visible to other processors, unless an lwsync intervenes to establish release-acquire semantics. This model's high optimization potential is evident in its use for server and high-performance computing applications, where it balances speed and programmability.

In Power ISA version 3.1, released in 2020 with revisions up to 3.1C as of 2024 and still current as of 2025, new instructions enhance support for persistent memory systems within the existing relaxed model, including persistent variants like phwsync (persistent heavyweight sync) and plwsync (persistent lightweight sync) for persistent storage ordering, along with cache management instructions such as dcbstps and dcbfps. These updates maintain backward compatibility and address evolving hardware needs, such as better support for non-volatile memory and error handling in large-scale processors, without altering core reordering freedoms.

Client-Centric and Session Guarantees

Client-centric consistency models provide guarantees tailored to individual client sessions rather than global system-wide ordering, enabling scalability in distributed and replicated systems. These include session guarantees, which collectively ensure intuitive behavior within a client's interaction context. The four primary session guarantees, as defined in foundational work on weakly consistent replicated data, are monotonic reads, read-your-writes, writes-follow-reads, and monotonic writes.

Monotonic Read Consistency

Monotonic read consistency is a client-centric guarantee in distributed systems that ensures if a process reads a value corresponding to a particular write, any subsequent reads by the same process on the same data item will return that value or a more recent one, preventing the observation of older versions after a newer one has been seen. This property maintains a non-decreasing sequence of observed writes within a session, making the data store appear progressively more up-to-date from the client's perspective.

Key properties include its focus as a per-process assurance, independent of other clients' operations, which avoids the overhead of global synchronization while still providing intuitive behavior for session-based interactions. It restricts the selection of servers for reads to those whose state includes at least the writes seen previously, often implemented using version vectors to track and compare write sets efficiently.

For example, consider a client querying a shared variable x initialized to 0; if it first reads x = 1 (reflecting a write W1), a later read must return 1 or a value from a write after W1, but never 0 again. In practical scenarios, such as an appointment calendar application, this prevents a scheduled meeting from disappearing after being viewed, as subsequent reads would not revert to an earlier server state lacking that update.

This guarantee forms a core component of session consistency models in replicated systems like Bayou, a 1990s mobile computing platform designed for weakly consistent data sharing among disconnected replicas. Unlike causal consistency, which enforces ordering across all related operations globally, monotonic read consistency is weaker and applies only within a single process's session, avoiding inter-client dependencies. It complements related session guarantees, such as read-your-writes consistency, by addressing read-read ordering rather than write-read interactions.

Read-Your-Writes Consistency

Read-your-writes consistency, also known as read-my-writes, is a client-centric guarantee in distributed systems that ensures a process observes the effects of its own write operations in subsequent reads of the same data item. Specifically, if a process executes a write W on data item x, followed by a read R on x, then R must return the value written by W or a value from a later write. This model applies per-session or per-process, restricting guarantees to operations within the same client context.

The primary property of read-your-writes consistency is to prevent a client from encountering stale data resulting from its own updates, thereby enhancing usability in scenarios involving intermittent connectivity or replicated data stores. It is weaker than strong consistency models, as it does not synchronize views across multiple clients, but it is stronger than pure eventual consistency by enforcing visibility of self-generated changes. This guarantee is foundational in systems like the Bayou replicated storage architecture, where it layers atop read-any/write-any replication to maintain session-specific predictability without requiring global coordination.

A practical example occurs in web applications, such as social media platforms, where a user posts a message and immediately refreshes the feed; the post should appear without delay, avoiding the frustration of seeing an outdated view. In analogous terms, consider a scorekeeper in a game updating the score to 2-5; their next query must reflect at least this score or a later one, ensuring personal actions are consistently visible.

To implement read-your-writes consistency, systems commonly use session tokens that capture the state of committed writes, allowing subsequent reads to be routed to replicas that include those updates, as seen in Azure Cosmos DB's session consistency level. Alternatively, sticky routing or session affinity directs all client operations to the same replica, naturally providing the guarantee by avoiding cross-replica inconsistencies for that session, though it may limit load balancing. A key limitation is that read-your-writes offers no assurances for other clients or processes, which may continue to see pre-write values until replication propagates the changes.
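A minimal C++ sketch of the session-token approach: the client remembers the version of its latest write, and a read is only served by a replica whose applied version has caught up to that token. The class names and single-counter versioning are illustrative simplifications; real systems use richer tokens and per-partition state.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Illustrative replica: applies writes tagged with monotonically increasing versions.
struct Replica {
    std::uint64_t appliedVersion = 0;   // highest write version applied here
    std::string value;                  // current value of a single data item

    void apply(std::uint64_t version, const std::string& v) {
        if (version > appliedVersion) { appliedVersion = version; value = v; }
    }
};

// Illustrative client session carrying a token for its own most recent write.
struct Session {
    std::uint64_t token = 0;

    void recordWrite(std::uint64_t version) { token = version; }

    // Read-your-writes: accept only a replica that reflects the session's writes.
    std::optional<std::string> read(const std::vector<Replica>& replicas) const {
        for (const auto& r : replicas)
            if (r.appliedVersion >= token)
                return r.value;         // this replica has caught up to the token
        return std::nullopt;            // no suitable replica yet; caller may retry
    }
};
```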

Writes-Follow-Reads Consistency

Writes-follow-reads consistency, also known as session causality, is a client-centric guarantee in distributed systems that ensures a write operation by a process takes effect based on the value most recently read by that same process. Specifically, if a process reads a value v produced by write w_1 and later issues write w_2, then w_2 must be ordered after w_1 in the system's arbitration order, preventing w_2 from being propagated based on a stale version of the data. This property maintains dependency order for writes conditioned on prior reads within a session, making it particularly useful for incremental updates where subsequent operations build directly on observed data.

It strengthens session guarantees by enforcing causal ordering between reads and dependent writes, ensuring that the effects of a read are respected before related writes are applied globally. In conjunction with read-your-writes consistency, it forms a paired guarantee that preserves logical progression in client interactions.

A representative example involves a client reading the value of x = 5 and then writing y = x + 1; writes-follow-reads ensures y = 6 by basing the write on the read value, rather than a potentially stale earlier version of x. This avoids anomalies where updates appear inconsistent with the client's view, such as seeing replies to a post without first observing the original post itself. In practice, writes-follow-reads is applied in collaborative editing systems, where users' updates must depend on the document state they have viewed to maintain coherent incremental changes across replicas.

Monotonic Writes Consistency

Monotonic writes consistency is a client-centric guarantee in distributed systems that ensures writes issued by a process within a session are applied in order and become visible to other clients only after all preceding session writes are incorporated. Specifically, if a process issues write W_1 followed by W_2 on the same data item, then for any replica, if W_2 is visible, W_1 must also be present and ordered before W_2. This property prevents later writes from a session from overtaking or being applied without earlier ones, maintaining the intended sequence of updates from the client's perspective and ensuring global visibility respects session order. It applies across clients, as it affects how writes propagate to other replicas, but remains scoped to the issuing session's dependencies.

Implementation often involves tracking the session's write set and conditioning the application of new writes on the presence of prior ones, using mechanisms like write identifiers or version vectors. For example, in a distributed file system with replicated files, a user saving version N followed by version N+1 ensures that N+1 replaces N at all servers without N overwriting N+1 later, avoiding version conflicts.

This guarantee is integral to session consistency models like those in Bayou and modern systems such as Azure Cosmos DB, where it complements the other session guarantees by addressing write-write ordering within a session. Unlike global ordering guarantees, it avoids full synchronization but enforces per-session monotonicity to support predictable update propagation in weakly connected environments.

Transactional and Local Models

Transactional Memory Consistency

Transactional memory consistency refers to the guarantees provided by transactional memory systems, where blocks of code executed as transactions appear atomic to concurrent executions, ensuring that committed transactions maintain a consistent view of memory as if executed serially. In these systems, transactions provide isolation, meaning reads within a transaction see a consistent snapshot of memory, and atomicity ensures that either all writes from a transaction are visible or none are. Consistency is typically achieved through serializability, where the effects of committed transactions can be ordered into a sequential execution that respects the real-time order of non-overlapping transactions.

Implementations of transactional memory include optimistic software transactional memory (STM), which uses versioning and validation to detect conflicts, and hardware transactional memory (HTM), which speculatively executes transactions using hardware buffers for reads and writes. In both cases, conflict resolution occurs through abortion and rollback of overlapping transactions, preventing interference and maintaining isolation; for instance, if two transactions attempt to write to the same location, one aborts to resolve the contention. This optimistic approach contrasts with pessimistic locking but aligns with entry consistency as a lock-based analog by scoping updates to critical sections.

A representative example involves two transactions: T1 reads variable x and writes to y, while T2 reads y and writes to x. Under transactional consistency, if T1 and T2 overlap and conflict, one must abort, ensuring that the committed execution appears as if T1 completed entirely before or after T2, preserving serializability without partial visibility of updates.

Key correctness models for transactional memory include strict serializability, which linearizes all committed transactions, and opacity, a stronger condition that additionally requires even live or aborted transactions to observe consistent states—never reading values produced by aborted or uncommitted transactions—while maintaining real-time ordering for committed ones. Opacity thereby prevents anomalies in high-contention scenarios, such as a doomed transaction acting on an inconsistent snapshot, by ensuring transactions read only from committed states.

Modern hardware support includes Intel's Transactional Synchronization Extensions (TSX), introduced in 2013 with Haswell processors but disabled by default on most CPUs since 2021 microcode updates due to security vulnerabilities (remaining available on select server processors), which implements restricted transactional memory via RTM instructions for optimistic execution, ensuring linearizable commits on success and full rollback on conflicts like capacity overflows or data contention. Similarly, ARM's Transactional Memory Extension (TME), part of the Armv9-A architecture introduced in 2021 and refined in subsequent versions, adds instructions like TSTART and TCOMMIT to enable hardware-managed transactions with isolation guarantees, aborting on memory conflicts to uphold atomicity and serializability.
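A heavily simplified C++ sketch of the optimistic STM idea: a transaction records the global version at start, buffers its writes, and at commit time validates that no conflicting commit intervened before publishing. The coarse global lock, single version counter, and names are illustrative assumptions loosely inspired by designs such as TL2, not a faithful implementation of any library.

```cpp
#include <atomic>
#include <cstdint>
#include <functional>
#include <map>
#include <mutex>
#include <set>

// Shared state: a tiny key-value "memory", per-address versions, and a global lock.
std::mutex gLock;                          // serializes commits and snapshot reads (simplification)
std::atomic<std::uint64_t> globalVersion{0};
std::map<int, int> memory;                 // address -> value
std::map<int, std::uint64_t> versions;     // address -> version of last committed write

struct Transaction {
    std::uint64_t startVersion = globalVersion.load();
    std::set<int> readSet;                 // addresses read inside the transaction
    std::map<int, int> writeSet;           // buffered writes, invisible until commit

    int read(int addr) {
        if (auto it = writeSet.find(addr); it != writeSet.end()) return it->second;
        std::lock_guard<std::mutex> g(gLock);
        readSet.insert(addr);
        auto it = memory.find(addr);
        return it == memory.end() ? 0 : it->second;
    }

    void write(int addr, int value) { writeSet[addr] = value; }

    // Optimistic commit: validate the read set, then publish the write set atomically.
    bool commit() {
        std::lock_guard<std::mutex> g(gLock);
        for (int addr : readSet) {
            auto it = versions.find(addr);
            if (it != versions.end() && it->second > startVersion)
                return false;              // conflict: someone committed after we started
        }
        const std::uint64_t v = globalVersion.fetch_add(1) + 1;
        for (const auto& [addr, value] : writeSet) {
            memory[addr] = value;
            versions[addr] = v;
        }
        return true;
    }
};

// Abort-and-retry loop, the visible face of optimistic transactional memory.
void atomically(const std::function<void(Transaction&)>& body) {
    for (;;) {
        Transaction tx;
        body(tx);
        if (tx.commit()) return;           // on conflict the transaction is retried from scratch
    }
}
```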

Local and General Consistency

Local consistency refers to a weak model in distributed systems where each individual node or process maintains consistency for its own operations, typically ensuring that a process observes its writes in the order they were issued, while providing no guarantees about the ordering or visibility of operations across different nodes. This per-node sequential behavior allows local computations to proceed efficiently without waiting for global coordination, making it suitable for systems prioritizing low-latency access at individual sites. For instance, in a multi-node setup, node A might execute its writes sequentially and immediately see the results locally, but node B could observe those updates out of order relative to its own or in a delayed manner, potentially leading to temporary inconsistencies system-wide.

The properties of local consistency emphasize autonomy over strict coordination, as nodes can continue operating independently during network partitions or high contention, with reconciliation handled asynchronously if needed. It balances performance in large-scale environments by avoiding the overhead of stronger models, though it risks divergent views of data until reconciliation occurs. This model serves as a foundational weak guarantee, extending beyond traditional contexts to scenarios like edge computing, where local autonomy supports real-time processing without impeding overall system scalability.

General consistency provides minimal system-wide assurances in distributed systems, ensuring that all replicas of a data item become identical at some point after writes complete (equivalent to eventual convergence), but without enforcing atomicity, ordering, or immediate visibility for reads and writes. Unlike stronger guarantees like bounded staleness—which ensure that any read reflects a version no older than a configurable bound, such as a fixed number of preceding writes (e.g., up to K operations) or a time window (e.g., T seconds)—general consistency offers no such bounds on staleness or convergence time. For example, in a globally replicated database, replicas will eventually converge if no further updates occur, but reads may see arbitrarily stale versions in the interim, providing availability without predictable freshness.

This model prioritizes availability and partition tolerance in partitioned networks, as nodes can serve reads from local copies, deferring full synchronization. Unlike transactional consistency, which enforces isolation and atomicity within defined blocks, general consistency forgoes such mechanisms to enable broader applicability in non-transactional data stores and weak replication schemes. By focusing on eventual convergence rather than bounded divergence, it extends weak consistency principles to diverse contexts such as edge and geo-replicated deployments, supporting scalable operations where occasional staleness is tolerable.

Distributed System Consistency

Eventual Consistency

Eventual consistency is a weak consistency model in distributed systems where replicas of data may temporarily differ, but if no new updates are made, all replicas will eventually converge to the same value, without any specified bounds on the time required for this convergence. This model arises from the need to balance availability and partition tolerance in large-scale systems, as per the CAP theorem, allowing updates to proceed even during network partitions.

Key properties of eventual consistency include high availability, as systems can accept writes and reads without waiting for global agreement, and tolerance for temporary inconsistencies that resolve over time. Conflict resolution often relies on simple strategies like last-writer-wins, where the most recent update, based on timestamps or vector clocks, overwrites others, though this can lead to lost updates if concurrent writes occur. A classic example is DNS propagation: when a domain's record is updated, caching resolvers worldwide may return outdated records briefly due to varying TTL values and propagation delays, but all eventually reflect the new mapping once caches expire.

Variants of eventual consistency provide targeted strengthening while retaining its core liveness property. Strong eventual consistency guarantees that replicas which have applied the same set of updates are in the same state, eliminating conflicts by construction, and is sometimes combined with session guarantees such as read-your-writes to prevent clients from observing stale data from their own sessions. Causal+ consistency adds preservation of causal dependencies—such as ensuring a reply to a message is seen after the original—while still guaranteeing eventual agreement on non-causally related updates. These variants can optionally incorporate client-centric guarantees for better usability.

In practice, eventual consistency powers scalable databases like Apache Cassandra, which uses it to distribute data across commodity hardware while maintaining availability through tunable replication. However, critiques have noted challenges, particularly in handling update conflicts during periods of disconnection, as seen in early replicated systems where manual resolution was often required.
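A minimal C++ sketch of last-writer-wins reconciliation between two replicas; the timestamp field and merge rule are illustrative assumptions, and real systems typically combine physical or logical clocks with node identifiers for tie-breaking.

```cpp
#include <cstdint>
#include <iostream>
#include <string>

// Illustrative versioned register replicated at each node.
struct Register {
    std::string value;
    std::uint64_t timestamp = 0;   // logical or physical time of the last write
    std::uint32_t nodeId = 0;      // tie-breaker when timestamps collide

    void localWrite(const std::string& v, std::uint64_t ts, std::uint32_t node) {
        value = v; timestamp = ts; nodeId = node;
    }

    // Last-writer-wins merge: keep the update with the larger (timestamp, nodeId).
    void merge(const Register& other) {
        if (other.timestamp > timestamp ||
            (other.timestamp == timestamp && other.nodeId > nodeId)) {
            *this = other;
        }
    }
};

int main() {
    Register a, b;                       // two replicas of the same item
    a.localWrite("v1", /*ts=*/10, /*node=*/1);
    b.localWrite("v2", /*ts=*/12, /*node=*/2);
    a.merge(b);                          // anti-entropy exchange in both directions
    b.merge(a);
    std::cout << a.value << " " << b.value << "\n";   // both converge to "v2"
    return 0;
}
```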

Data-Centric Consistency Models

Data-centric consistency models focus on providing global guarantees about the state and evolution of shared data in replicated distributed systems, ensuring that all clients observe operations on the data store in a consistent manner regardless of their individual perspectives. These models emphasize server-side enforcement of operation ordering across replicas, treating the data store as a single logical entity where concurrent updates are resolved to maintain a coherent view for all participants. Unlike weaker approaches, data-centric models prioritize the integrity of the data's history over per-client optimizations, making them suitable for applications requiring reliable global synchronization, such as financial systems.

A fundamental property of data-centric consistency models is the enforcement of a total or partial order on operations, often through mechanisms like atomic broadcasts or timestamping, which ensure that all replicas apply updates in the same sequence. For instance, linearizability, one of the strongest such models, requires that each operation appears to occur instantaneously at a single point in real time between its invocation and response, preserving both a total order and real-time precedence among non-overlapping operations. This extends the notion of atomicity from single machines to distributed environments, allowing developers to reason about the system as if it were sequentially consistent with added temporal constraints. Sequential consistency, a related model, guarantees that the outcome of all operations matches some interleaving that respects the program order at each process, but without enforcing real-time ordering, thus providing a serialized view of history without clock synchronization overheads.

Weaker data-centric models, such as causal consistency, preserve only the causal dependencies between operations—ensuring that if one operation causes another (e.g., a write followed by a read that propagates it), all processes see them in that order—while allowing concurrent operations to be reordered differently across replicas. PRAM (Pipelined RAM) consistency further relaxes this by guaranteeing that writes from a single writer are observed in issuance order by all readers, akin to a FIFO queue per writer, but permitting arbitrary interleaving of writes from different writers. These properties are typically implemented using logs or timestamps to serialize updates globally, ensuring, for example, that a write to variable x is visible to all clients in the same uniform sequence relative to other writes on x.

Data-centric models build upon shared memory consistency concepts by adapting sequential consistency for distributed replication, where physical separation of replicas necessitates protocols to simulate a unified memory view across networks. To balance strictness with performance in wide-area systems, tunable variants allow applications to specify staleness bounds, such as limiting the maximum number of intervening writes a read can miss, as explored in early work on application-controlled replication. Eventual consistency provides a weaker alternative focused on convergence without ordering guarantees, contrasting with the structured convergence demanded by data-centric models. For example, Google's Spanner database implements linearizability using synchronized clocks (TrueTime) for global transactions.

Client-Centric Consistency Models

Client-centric models provide tailored guarantees from the perspective of an individual client or session in a distributed system, ensuring that a client's sequence of operations appears consistent relative to its own prior actions without enforcing a global order across all clients. These models emerged as a response to the limitations of global consistency approaches in highly available, replicated data stores, where strict global ordering would compromise availability and latency. By focusing on per-client views, they allow each client to maintain a monotonic progression of reads and writes within their session, such as ensuring that once a value is read, subsequent reads by the same client do not return older values (monotonic reads).

Key properties of client-centric models include the combination of session-specific guarantees like read-your-writes (where a client always sees its own recent writes), monotonic writes (where writes are applied in the order issued by the client), and writes-follow-reads (where a client's writes are ordered after its reads). These properties collectively ensure a coherent client experience while permitting weaker global consistency, such as eventual consistency across the system, to prioritize availability during partitions or high loads. Unlike data-centric models, client-centric approaches do not require uniform visibility for all participants, enabling optimizations like client-side caching or asynchronous replication that reduce latency without affecting other users' views.

A representative example is an online shopping cart, where a user adding an item to their cart expects to see that update in subsequent views (read-your-writes) and avoids seeing stale or overwritten states from their own actions (monotonic reads), even if other users experience delayed propagation of the same change due to asynchronous replication. This per-client tailoring prevents anomalies like a user losing items from their cart mid-session, enhancing usability without imposing global coordination costs.

In web-scale systems, client-centric models are widely adopted to support high-throughput applications, such as social media feeds or collaborative editing tools, where session guarantees ensure intuitive interactions for millions of concurrent users. Post-2020 trends in microservices architectures have emphasized hybrid variants, blending client-centric guarantees with eventual or causal consistency to handle inter-service data flows; for instance, distributed caching systems allow per-request specification of consistency levels, combining monotonic session views for user-facing operations with relaxed propagation for backend analytics. This hybrid approach mitigates consistency challenges in loosely coupled environments, improving responsiveness in cloud-native deployments.

Replication and Protocols

Consistent Ordering Protocols

Consistent ordering protocols are mechanisms employed in replicated distributed systems to guarantee that operations or messages are delivered and executed in the same total order across all replicas, thereby enabling strong consistency models such as sequential consistency and supporting linearizability when augmented with real-time ordering. These protocols address the challenge of coordinating replicas without a shared clock or memory, ensuring that if one replica processes operation A before B, all others observe the same sequence. A foundational primitive in this domain is total order broadcast (also known as atomic broadcast), in which messages are delivered reliably and in a consistent sequence to all recipients, preventing conflicts in state replication.

Key properties of these protocols include agreement on the order via mechanisms such as sequence numbering or consensus algorithms. For instance, protocols often use a sequencer to assign unique identifiers to messages, or leverage consensus protocols such as Paxos to achieve distributed agreement on the ordering without relying on a single point of coordination. In Paxos-based approaches, proposers, acceptors, and learners collaborate in phases to select and commit a value (e.g., a message order), tolerating failures as long as a majority of nodes remain operational. This ensures safety (all nodes agree on one order) and liveness (progress under partial synchrony), which is critical for maintaining consistency in asynchronous environments.

A classic example illustrating ordering principles is Lamport's bakery algorithm, originally designed for mutual exclusion in shared-memory systems but extendable to distributed settings for ordering process requests. In the algorithm, each process picks a "ticket" number before attempting access and waits for lower-numbered tickets to complete, establishing a total order based on ticket values, with process IDs used to resolve ties. This approach has been generalized to build distributed state machines, where ticket-like sequencing ensures serialized execution across nodes.

Consistent ordering protocols vary in design, particularly between centralized and distributed variants. Centralized sequencers designate a single node to assign sequence numbers to incoming messages before broadcasting them, simplifying implementation and reducing coordination overhead. In contrast, distributed protocols, such as Viewstamped Replication, achieve ordering through view changes and agreement among replicas, avoiding a permanent central coordinator by electing primaries dynamically. Viewstamped Replication ensures safety by committing operations only after a quorum acknowledges them in the current view, handling crashes via reconfiguration.

Performance trade-offs in these protocols balance latency, throughput, and fault tolerance. Centralized sequencers typically incur lower latency, often a single round trip for ordering, but introduce a single point of failure, potentially halting the system if the sequencer crashes. Distributed protocols such as Paxos-based ordering or Viewstamped Replication offer greater resilience, sustaining operation with up to f failures among 2f+1 replicas, but at the expense of higher latency from multiple communication rounds (e.g., two to three for the basic Paxos phases) and reduced throughput under contention. These trade-offs are evident in evaluations where centralized approaches achieve higher message rates in low-failure scenarios, while distributed ones maintain consistency during partitions. Such protocols ultimately support data-centric consistency models by enforcing uniform update application across replicas.
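
A centralized sequencer, the simplest ordering mechanism described above, can be sketched as follows. This is a hypothetical single-process simulation with invented names rather than a fault-tolerant protocol: the sequencer stamps each message with a global sequence number, and replicas buffer out-of-order deliveries so that every replica applies messages in the same total order.

```python
# Hypothetical sketch of sequencer-based total order broadcast: replicas buffer
# messages that arrive early and apply them strictly in sequence-number order.

import itertools

class Sequencer:
    def __init__(self):
        self._counter = itertools.count(1)

    def order(self, message):
        return (next(self._counter), message)   # (global sequence number, payload)

class OrderedReplica:
    def __init__(self):
        self.next_seq = 1
        self.pending = {}   # seq -> message buffered until its turn
        self.log = []       # messages applied, in total order

    def deliver(self, seq, message):
        self.pending[seq] = message
        while self.next_seq in self.pending:    # apply the in-order prefix
            self.log.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

seq = Sequencer()
a, b = OrderedReplica(), OrderedReplica()
ordered = [seq.order(m) for m in ("debit $10", "credit $5")]
for s, m in ordered:
    a.deliver(s, m)
for s, m in reversed(ordered):   # second replica receives messages out of order
    b.deliver(s, m)
assert a.log == b.log == ["debit $10", "credit $5"]
```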

Primary-Based Protocols

Primary-based protocols, also referred to as primary-copy or primary-backup replication, designate a single primary replica as the authoritative coordinator for all write operations on a replicated data item, with backup replicas maintaining passive copies. All client write requests are directed to the primary, which processes the update and subsequently propagates it to the backups. This centralization simplifies the enforcement of consistency models by confining write serialization to one site, avoiding the complexities of concurrent updates across multiple replicas.

Key properties of primary-based protocols include strict serialization of writes at the primary site, where updates are immediately reflected, and the ability to support models such as sequential consistency through ordered propagation. Reads may be served from any replica for performance, but to uphold consistency they are typically routed to the primary or validated against it using timestamps or version numbers, preventing stale data access. Fault tolerance is achieved by electing a new primary from the backups upon detecting the current one's failure, preserving system availability without data loss as long as a majority of replicas remain operational. These protocols integrate with consistent ordering mechanisms to guarantee that propagated updates respect the global order of operations.

A representative example is the Google File System (GFS), where the master server functions as the primary for all metadata mutations, such as file namespace changes, and propagates these updates to backup masters for redundancy. For large file data stored in chunks, the master grants short-term leases that designate one chunkserver as the primary replica for coordinating writes among its backups, ensuring atomic appends and consistent replication across typically three copies. This design achieves high throughput for read-heavy workloads while maintaining fault tolerance through master replication and chunk-lease mechanisms.

Variants of primary-based protocols differ in update strategies and read handling. Eager (synchronous) replication forwards updates to the backups (often a quorum) before acknowledging the write, providing strong consistency but increasing latency due to coordination overhead. In contrast, lazy (asynchronous) replication defers propagation, enhancing throughput and availability at the risk of brief inconsistencies when reading from backups. For reads, active variants allow distribution to backups after primary validation, balancing load while preserving ordering, whereas passive approaches restrict reads to the primary for simplicity.

Fault tolerance in primary-based protocols relies on heartbeat mechanisms to monitor the primary's liveness, with backups collectively detecting failures within a timeout period. Upon failure, an election algorithm selects a new primary, typically requiring majority agreement to ensure the chosen replica has the most up-to-date state. The Raft consensus protocol exemplifies a modern implementation, using leader election with randomized timeouts and replicated logs to safely transfer the primary role, tolerating up to (n-1)/2 failures in a system of n replicas while maintaining linearizability. This approach has been widely adopted in systems like etcd and Consul for its clarity and efficiency in enforcing consistency.
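
The eager primary-backup variant can be sketched as follows, under the simplifying assumption of a single process simulating one primary and its backups; failover, leases, and failure detection are omitted, and all names are invented for illustration. The primary serializes every write and acknowledges the client only after all backups have applied the update.

```python
# Hypothetical sketch of eager primary-backup replication: the primary assigns a
# sequence number, forwards the write synchronously, and acknowledges only after
# every backup has applied it, so any replica reflects acknowledged writes.

class Backup:
    def __init__(self):
        self.store = {}

    def apply(self, seq, key, value):
        self.store[key] = value
        return True                      # acknowledgement back to the primary

class Primary:
    def __init__(self, backups):
        self.store = {}
        self.backups = backups
        self.seq = 0                     # write serialization happens only here

    def write(self, key, value):
        self.seq += 1
        self.store[key] = value
        acks = [b.apply(self.seq, key, value) for b in self.backups]
        return all(acks)                 # eager: acknowledge only after all backups

    def read(self, key):
        return self.store.get(key)

backups = [Backup(), Backup()]
primary = Primary(backups)
assert primary.write("balance", 100)
assert all(b.store["balance"] == 100 for b in backups)
```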

Replicated-Write Protocols

Replicated-write protocols enable concurrent write operations across multiple replicas in distributed systems by disseminating updates to all or a subset of replicas and using quorum-based mechanisms to reach agreement on a write's success. A write is typically acknowledged only after a sufficient number of replicas (a write quorum, W) confirm it, while reads require a read quorum (R) such that W + R > N, where N is the total number of replicas; this intersection ensures that reads reflect recent writes under tunable parameters, allowing trade-offs among consistency, availability, and latency. These protocols support multi-writer concurrency, distinguishing them from leader-serialized approaches.

Such protocols balance availability with consistency guarantees by leveraging fault-tolerant consensus algorithms such as Paxos or ZooKeeper Atomic Broadcast (ZAB), which tolerate the failure of a minority of replicas while propagating writes synchronously or asynchronously. Conflicts arising from concurrent writes are often resolved through versioning schemes in which replicas compare vector clocks or timestamps to determine the latest update, merging or discarding divergent versions as needed. This design promotes availability during partitions, since writes can proceed as long as a quorum is reachable, but it may require anti-entropy mechanisms such as read repair or hinted handoff for full propagation.

A prominent example is Amazon's Dynamo, which uses replicated writes with configurable quorums (for example W=2 and R=2 with N=3) to provide high availability, offering eventual consistency for key-value stores by allowing reads to return potentially stale data that converges over time through background anti-entropy and Merkle trees for reconciliation. In contrast, Google Spanner employs replicated writes coordinated via Paxos groups, augmented by the TrueTime API, which bounds clock uncertainty using GPS and atomic clocks, to assign timestamps that enforce external (linearizable) consistency, guaranteeing that transactions appear to take effect instantaneously at some point between invocation and response.

Variants of replicated-write protocols include chain replication, which arranges replicas in a linear chain to enforce a total order on writes: the head replica accepts updates and forwards them sequentially toward the tail, which acknowledges the client, achieving strong consistency with high throughput and fault tolerance via dynamic reconfiguration. Pessimistic variants employ upfront locking or two-phase commit to prevent conflicts, ensuring strict ordering at the cost of availability and latency, while optimistic variants allow concurrent writes and detect conflicts after the fact via validation, rolling back invalid ones to favor liveness in high-contention scenarios.

Key challenges in replicated-write protocols revolve around efficient conflict resolution, particularly for non-commutative operations where concurrent updates may produce ambiguous states. Conflict-free Replicated Data Types (CRDTs) mitigate this by designing data structures with monotonic operations and merge functions that guarantee convergence to a unique value regardless of update order, as formalized in foundational work on commutative replicated data types, enabling optimistic replication without coordination overhead.
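
The quorum intersection rule W + R > N can be illustrated with a minimal, hypothetical sketch using N=3, W=2, and R=2; the helper functions and the version-per-key scheme are invented for this example. A write is acknowledged once W replicas store the new version, and a read queries R replicas and returns the highest-versioned value, so the read set always overlaps the most recent write set.

```python
# Hypothetical quorum sketch: with N=3, W=2, R=2 (so W + R > N), any read quorum
# intersects the latest write quorum, and the highest version number wins.

N, W, R = 3, 2, 2
replicas = [{} for _ in range(N)]        # each replica: key -> (version, value)

def quorum_write(key, value, version):
    acks = 0
    for replica in replicas:
        stored_version, _ = replica.get(key, (0, None))
        if version > stored_version:
            replica[key] = (version, value)
        acks += 1
        if acks >= W:                    # stop once the write quorum is reached
            return True
    return acks >= W

def quorum_read(key):
    # Query a different subset of replicas than the write reached; the overlap
    # guaranteed by W + R > N still includes the latest version.
    responses = [replica.get(key, (0, None)) for replica in replicas[-R:]]
    return max(responses)[1]             # highest version wins

quorum_write("x", "v1", version=1)       # reaches only the first W replicas
assert quorum_read("x") == "v1"          # read quorum overlaps the write quorum
```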
