
Optimistic concurrency control

Optimistic concurrency control (OCC) is a non-locking concurrency control mechanism in database systems that allows transactions to execute without acquiring locks on data items, instead relying on validation at commit time to detect and resolve conflicts, on the assumption that such conflicts are rare. The approach, first proposed by H. T. Kung and John T. Robinson in their 1981 paper "On Optimistic Methods for Concurrency Control," divides transaction execution into three phases: a read phase in which data is accessed and modified in private workspaces, a validation phase in which the transaction's read and write sets are checked against those of concurrent transactions, and a write phase in which changes are applied if validation succeeds; otherwise the transaction is aborted and restarted. OCC offers several advantages over traditional pessimistic locking, including reduced lock-management overhead, elimination of deadlocks, and higher throughput in low-contention environments where most transactions commit without interference. It can, however, waste computational effort in high-contention scenarios, as transactions may progress far before aborting on a detected conflict, potentially performing worse than locking in such cases. Validation in OCC typically employs either backward validation (checking a committing transaction against transactions that have already committed) or forward validation (checking it against transactions that are still active), ensuring serializability while minimizing blocking. In modern database systems, OCC is widely applied where scalability and concurrency are priorities, such as in-memory databases and distributed systems with infrequent updates. For instance, Amazon Aurora DSQL employs OCC to run transactions without locks, checking for conflicts only at commit to support high-throughput workloads. Similarly, Amazon DynamoDB uses optimistic techniques to prevent lost updates in multi-item transactions across distributed partitions. Recent advancements, including hybrid techniques that combine OCC with selective locking for high-conflict items, further optimize its use in heterogeneous workloads.

Fundamentals

Definition and Core Principles

Optimistic concurrency control (OCC) is a concurrency control method in transactional systems that assumes conflicts between concurrent transactions are infrequent, permitting transactions to proceed without acquiring locks on shared data items and instead performing validation checks only at commit time to detect and resolve any inconsistencies. This contrasts with locking-based mechanisms by prioritizing execution efficiency under low contention, where the probability of aborts due to conflicts remains low. At its core, OCC runs each transaction through three logical phases: a read phase, in which data is accessed and modifications are made to private local copies; a validation phase, which checks for conflicts with other transactions; and a write phase, which commits the changes to the shared database if validation succeeds. To facilitate conflict detection, OCC employs versioning or timestamping mechanisms, such as assigning a unique transaction number from a global counter at the end of the read phase, which tracks dependencies and orders transactions chronologically. If a conflict is identified during validation, the transaction is aborted and restarted rather than blocked, avoiding prolonged waits. A key property of OCC is serializability, the guarantee that the outcome of concurrent execution is equivalent to some serial execution, achieved entirely through the validation process without shared locks during the read or write phases, which reduces overhead and enhances concurrency in low-contention environments. Formally, conflict detection analyzes read-write dependencies: a conflict arises when one transaction writes to a data item that another has read or intends to write, so validation verifies that a transaction's write set does not intersect the read or write sets of concurrent transactions in a way that violates the assumed serial order. This check ensures that a validated transaction preserves the consistency of the database as if transactions had executed sequentially.
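As a minimal illustration of the read-modify-validate-write pattern, the sketch below (Python, with invented names) guards each item with a version counter and re-checks that counter at commit time; no lock is held while the transaction computes.

```python
import threading

class VersionedStore:
    """Toy key-value store where each key carries a version counter."""
    def __init__(self):
        self._lock = threading.Lock()          # protects only the commit point
        self._data = {}                        # key -> (value, version)

    def read(self, key):
        return self._data.get(key, (None, 0))  # snapshot of (value, version)

    def commit(self, key, new_value, expected_version):
        """Write phase: apply the update only if nobody committed in between."""
        with self._lock:
            _, current = self._data.get(key, (None, 0))
            if current != expected_version:    # validation failed: conflict
                return False
            self._data[key] = (new_value, current + 1)
            return True

store = VersionedStore()
store.commit("x", 10, 0)                       # initial write succeeds

# Optimistic read-modify-write: compute locally, validate at commit.
value, version = store.read("x")
ok = store.commit("x", value + 1, version)     # True: version unchanged
stale = store.commit("x", 99, version)         # False: version moved on
```

The single small lock here serializes only the validate-and-apply step, mirroring how OCC avoids holding locks for the duration of the transaction.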

Historical Development

Optimistic concurrency control (OCC) originated in the late 1970s as a response to the limitations of locking-based mechanisms in emerging high-concurrency database environments. H. T. Kung and John T. Robinson formally introduced the concept in their paper "On Optimistic Methods for Concurrency Control," published in ACM Transactions on Database Systems in 1981 after an initial presentation at the 1979 International Conference on Very Large Data Bases (VLDB). The paper proposed two families of non-locking protocols that allow transactions to proceed without synchronization during execution, relying instead on validation at commit time to detect conflicts, thereby avoiding lock overhead when data contention is infrequent. The work was motivated by the inefficiencies of pessimistic approaches such as two-phase locking, which dominated systems of the era and could cause blocking, deadlocks, and reduced throughput in multiprogrammed environments. Kung and Robinson emphasized that OCC could achieve higher performance by assuming low conflict rates, with rollbacks serving as a lightweight recovery mechanism invoked only when necessary. In the 1980s, OCC moved from theory to practical exploration, influencing prototype database systems and research implementations as an alternative to locking in both centralized and distributed settings. By the 1990s, extensions targeted specialized domains, particularly real-time database systems (RTDBS), where timing deadlines added complexity to concurrency control. Seminal contributions included optimistic protocols with priority-based validation to minimize rollbacks and meet deadlines, such as the dynamic OCC algorithm proposed by Haritsa, Carey, and Livny in 1990. These adaptations addressed the need for predictability in firm real-time environments, where tardy transactions can be discarded to prioritize timely ones.
The 2000s marked a surge in modern variants, with snapshot isolation emerging as a widely adopted enhancement that provides read consistency via multi-version data while retaining OCC's optimistic core. First formalized in a 1995 critique of the ANSI SQL isolation levels, snapshot isolation was implemented in production systems such as Oracle and Microsoft SQL Server (the latter in 2005), enabling scalable concurrency in enterprise databases by avoiding read-write blocking. Influential research of the 1980s and 1990s further refined OCC through validation optimizations aimed at reducing abort frequency, particularly in distributed contexts. For instance, Franaszek and Robinson's 1985 analysis of concurrency limitations informed subsequent protocols, while the distributed OCC scheme of Dan et al. (1990) introduced techniques for high-performance transaction processing that minimized aborts via efficient conflict detection.

Comparison to Pessimistic Approaches

Pessimistic concurrency control mechanisms prevent data conflicts by requiring transactions to acquire locks on data items before accessing them, blocking concurrent operations until the locks are released. This approach assumes conflicts are likely and avoids them proactively through locking protocols. A foundational example is the two-phase locking (2PL) protocol, introduced by Eswaran et al. in 1976, which divides locking into a growing phase in which locks are acquired and a shrinking phase in which they are released, ensuring serializability. In contrast to optimistic concurrency control (OCC), which defers conflict detection until validation at commit time and avoids locks during execution, pessimistic methods like 2PL enforce restrictions upfront so that work already performed is not discarded. OCC thus achieves higher throughput in low-conflict environments by permitting greater parallelism, but it risks restarts if conflicts arise late; pessimistic approaches, while ensuring no wasted execution on doomed transactions, introduce blocking that limits concurrency and can lead to deadlocks requiring detection and resolution mechanisms. These differences stem from OCC's reliance on post-execution validation versus pessimistic locking's pre-access prevention. Performance trade-offs between the two are workload-dependent: OCC excels in read-heavy scenarios, where the absence of read locks enables near-unlimited concurrency and scalability, as demonstrated in evaluations on multi-core systems that report superior throughput for read-dominated benchmarks such as YCSB with 100% reads. Pessimistic methods, however, perform better in write-heavy environments with frequent conflicts, since early locking avoids the computational waste of aborts in OCC, though they suffer from lock contention and reduced parallelism under high load.
Hybrid approaches have emerged to mitigate these trade-offs by selectively combining optimistic execution with pessimistic safeguards, such as applying locks only on high-contention data items to reduce aborts without universal blocking.

Operational Mechanisms

Phases of Execution

Optimistic concurrency control divides transaction execution into three distinct phases: read, validation, and write. This phased approach allows transactions to proceed without acquiring locks during data access, deferring conflict detection until commit time. The mechanism assumes low-contention environments where conflicts are rare, enabling higher throughput by avoiding premature blocking. In the read phase, a transaction accesses data items from the database without modifying the shared state. All read operations fetch the current values, which are recorded in a read set that tracks the items accessed. Any intended modifications are performed on private local copies of the data, building a write set of changes that remains isolated from other transactions. No locks are acquired during this phase, permitting concurrent reads and local writes without interference. This design promotes parallelism, as transactions can execute their logic freely until they attempt to commit. The validation phase occurs when the transaction seeks to commit, after completing its read operations. Here, the system assigns a unique transaction number to establish a serial order among concurrent transactions. It then checks for conflicts by examining the read and write sets against the current database state, typically using version numbers or timestamps on data items, or by intersecting them with the sets of recently committed transactions. Conflicts arise if another transaction has modified an item in the read set since it was read (a read-write conflict), or if write-write overlaps occur on shared items with prior transactions. Validation succeeds only if these checks confirm no violations of serializability in the assumed serial order. If conflicts are detected, the transaction is aborted and may be restarted. Upon successful validation, the transaction enters the write phase, in which the local copies from the write set are atomically applied to the global database, updating the shared state. This phase ensures that committed changes are consistent with the validated serial order.
The overall flow guarantees serializability by simulating execution in the order of transaction numbers: for a transaction T_i, validation confirms that for all prior T_j (with j < i) in the serial order, there are no write-write or read-write conflicts on shared items. This prevents anomalies such as lost updates and non-repeatable reads.
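The three phases above can be sketched end to end as follows. This is an illustrative single-threaded model with invented class names: reads are tracked in a read set, writes go to a private buffer, and commit() performs a backward scan over transactions committed since this one began.

```python
class Transaction:
    """Tracks read/write sets during the read phase; writes stay private."""
    def __init__(self, db, start_tn):
        self.db, self.start_tn = db, start_tn
        self.read_set, self.write_set, self.buffer = set(), set(), {}

    def read(self, key):
        self.read_set.add(key)
        return self.buffer.get(key, self.db.data.get(key))

    def write(self, key, value):
        self.write_set.add(key)
        self.buffer[key] = value              # local copy, not yet visible

class Database:
    def __init__(self):
        self.data = {}
        self.committed = []                   # list of (tn, write_set)
        self.tnc = 0                          # global transaction counter

    def begin(self):
        return Transaction(self, self.tnc)

    def commit(self, txn):
        """Validation phase, then write phase (serial for simplicity)."""
        for tn, wset in self.committed:
            if tn > txn.start_tn and wset & (txn.read_set | txn.write_set):
                return False                  # conflict detected: abort
        self.data.update(txn.buffer)          # write phase: publish changes
        self.tnc += 1
        self.committed.append((self.tnc, set(txn.write_set)))
        return True

db = Database()
t1, t2 = db.begin(), db.begin()               # two concurrent transactions
t1.write("a", 1)
_ = t2.read("a")                              # t2 reads "a" before t1 commits
t2.write("b", 2)
assert db.commit(t1)                          # t1 validates first and commits
print(db.commit(t2))                          # False: t1 wrote into t2's read set
```

In a real engine the committed list would be pruned to transactions overlapping active ones; keeping it unbounded here keeps the sketch short.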

Validation and Certification Techniques

In optimistic concurrency control (OCC), the validation phase employs algorithms that detect conflicts and ensure serializability by checking for read-write and write-write dependencies among transactions. The foundational backward validation technique, introduced by Kung and Robinson, verifies whether a committing transaction T has read data modified by any transaction that committed after T began its read phase, or plans to write data modified by such transactions. This is achieved by maintaining read sets (items read by T) and write sets (items written by T) for each transaction, and by using transaction numbers (tn) drawn from a global counter (tnc) to order transactions. During validation, the system scans the transactions that committed between T's start tn + 1 and the current tn, checking for intersections between their write sets and T's read or write set. If an intersection exists, indicating a read-write or write-write conflict, validation fails and T is aborted and restarted with a new tn. The pseudocode for this serial validation process illustrates the backward scan:
tend := (
  finish_tn := tnc;
  valid := true;
  for t from start_tn + 1 to finish_tn do
    if (write set of transaction t intersects
        T's read set or write set) then valid := false;
  if valid then (
    (write phase);
    tnc := tnc + 1;
    tn := tnc
  );
  if valid then (cleanup) else (backup)
)
This approach ensures serializability by preventing T from committing if it would violate the serialization order, formalized by the condition: if there exists an item x such that T reads x after a transaction T_j writes x (where T_j committed during T's execution), or T writes x after T_j writes x without proper ordering, then T is aborted. Because writes are delayed until after validation and only committed data is read, backward validation inherently avoids dirty reads and thus prevents cascading aborts, in which the failure of one transaction would force the rollback of dependent transactions. Forward validation instead checks the committing transaction's read and write sets against those of concurrently active (uncommitted) transactions to anticipate future conflicts. Unlike backward validation, which can only abort the committing transaction upon detecting a past conflict, forward validation allows more flexible resolution, such as aborting lower-priority active transactions whose sets intersect the committer's. This is particularly useful in real-time environments with varying transaction priorities, as it can reduce overall aborts by proactively resolving dependencies. Conflict detection relies on prompt publication of write sets into a global structure, using efficient intersection methods such as hashing to compare sets without retaining historical committed data. Certification techniques in OCC often integrate commit timestamps to order transactions and certify serializability without full set intersections. Transactions are assigned a commit timestamp (ct) lazily during validation, chosen to respect the timestamps of the accessed data items. In multi-version concurrency control (MVCC) variants of OCC, each data version carries write (wts) and read (rts) timestamps defining its validity range [wts, rts].
Certification succeeds if a commit timestamp ct exists such that

\exists\, ct \colon \Bigl( \forall v \in \{\text{versions read by } T\} \colon v.wts \le ct \le v.rts \Bigr) \land \Bigl( \forall v \in \{\text{versions written by } T\} \colon v.rts < ct \Bigr)

This allows readers to access consistent snapshots without blocking writers, enhancing throughput in read-heavy workloads. Optimizations for validation focus on reducing computational overhead, particularly in the backward scan. Timestamp ordering, as in the original Kung-Robinson model, assigns monotonically increasing numbers to transactions, enabling efficient range-limited scans of only the relevant committed transactions rather than the full history. In distributed settings, commit logs can be scanned backward from the current timestamp to identify recent writers, minimizing I/O by indexing logs with timestamps or hashing sets for quick intersections. These techniques lower validation cost from O(n) to near-constant time in low-contention scenarios, where conflicts are rare.
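The certification condition above can be checked mechanically. The helper below is an illustrative sketch with invented names, assuming integer timestamps and closed validity intervals; it searches for the smallest admissible ct, returning None when the existential condition cannot be satisfied.

```python
def find_commit_ts(read_versions, write_versions):
    """Return the smallest commit timestamp ct satisfying the MVCC
    certification rule, or None if no valid ct exists.
    Versions are given as (wts, rts) pairs of integer timestamps."""
    # ct must be >= the wts of every version read, and strictly greater
    # than the rts of every version the transaction overwrites.
    lo = max([wts for wts, _ in read_versions] +
             [rts + 1 for _, rts in write_versions] + [0])
    # ct must not exceed the rts of any version read.
    hi = min([rts for _, rts in read_versions] or [float("inf")])
    return lo if lo <= hi else None

# Reading version [wts=3, rts=7] while overwriting [wts=2, rts=4]:
print(find_commit_ts([(3, 7)], [(2, 4)]))   # 5, since 3 <= 5 <= 7 and 4 < 5
# Reading [3, 4] while overwriting [2, 6] fails: ct must be > 6 yet <= 4.
print(find_commit_ts([(3, 4)], [(2, 6)]))   # None
```

Choosing the smallest valid ct is one policy; systems such as TicToc pick timestamps lazily from similar per-version bounds.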

Advantages and Limitations

Key Benefits

Optimistic concurrency control (OCC) excels in environments with low data contention, delivering high throughput by avoiding the overhead of acquiring and releasing locks during execution. In such scenarios, transactions proceed with read and write operations without interruption, enabling greater concurrency, particularly in online transaction processing (OLTP) systems dominated by read-heavy workloads. The approach is especially advantageous for query-intensive applications, where the absence of locking minimizes contention and allows many transactions to overlap efficiently. By deferring conflict detection until the validation phase, OCC eliminates blocking and prevents the deadlocks that commonly arise in pessimistic locking schemes, improving response times and predictability. Transactions execute uninterrupted, aborting only if a conflict is detected at commit time, which reduces wasted work in low-conflict settings and enhances responsiveness without complex deadlock-resolution mechanisms. OCC also offers superior scalability in distributed systems, since it minimizes inter-node coordination and communication during the read and write phases, facilitating better parallelism across multiple processors or nodes. Benchmarks such as modified TPC-C workloads show that OCC can achieve up to 2× higher throughput than locking-based methods in read-dominant scenarios, with even greater gains, up to 10× or more, in systems optimized for low contention. This makes OCC well suited to large-scale, geo-distributed environments where coordination overhead would otherwise limit scalability. Implementation is comparatively straightforward, relying on versioning or timestamps to track changes rather than intricate locking protocols, which simplifies reasoning about concurrent code and reduces development complexity. The versioning-based mechanism also integrates easily into non-blocking architectures, promoting maintainability in systems where conflict rates are predictably low.

Potential Drawbacks

In conflict-prone scenarios, such as high-contention or write-heavy workloads, optimistic concurrency control (OCC) can exhibit elevated abort rates during the validation phase, with transactions repeatedly restarted upon detecting conflicts. For instance, in workloads with one write per transaction under high contention, abort rates can exceed 80%, rising to as high as 98% with multiple writes, wasting substantial CPU cycles as partially executed transactions are discarded and retried. Frequent aborts not only degrade throughput, dropping it to as low as 0.28 million transactions per second in TPC-W-like benchmarks with 32 threads, but also amplify resource inefficiency in environments where conflicts are not rare, contrary to the optimistic assumption. OCC also introduces liveness issues, including the potential for transaction starvation, where persistent conflicts prevent certain transactions from ever committing despite repeated attempts. Unlike locking-based methods, which provide progress guarantees through mechanisms such as deadlock detection, OCC lacks inherent safeguards against indefinite delays, though mitigations have been proposed, such as detecting starving transactions and rerunning them within a protected critical section. This absence of progress guarantees can worsen performance degradation under prolonged contention, leading to unpredictable system behavior without additional intervention. The validation process itself incurs notable overhead: centralized timestamp allocation limits scalability (to around 8 million timestamps per second even on 1024-core systems in one evaluation), while read-set tracking requires storing metadata for each accessed item, adding memory costs of several bytes per item alongside the computational expense of conflict checks at commit time. These elements increase latency during validation, particularly under high concurrency, where comparing read sets against concurrent writes consumes additional CPU resources.
In distributed settings, OCC faces amplified challenges from network delays, which extend validation times and raise the cost of aborts by requiring cross-site communication for conflict checks, potentially permitting non-serializable executions if timings misalign. Partial failures further complicate validation, as uncoordinated local validations across sites can produce inconsistent serialization graphs, for example cyclic precedences in which T1 precedes T2 at one site and T2 precedes T1 at another, unless robust global coordination mechanisms are in place.

Applications and Implementations

Use in Database Management Systems

Optimistic concurrency control (OCC) has been integrated into several database management systems (DBMS) to enhance efficiency, particularly in low-contention environments. Early research and prototypes in the 1980s, including work at IBM, explored OCC mechanisms, with performance analyses demonstrating its potential in systems with large memory buffers. A notable commercial adoption occurred in Microsoft SQL Server with the introduction of snapshot isolation in SQL Server 2005, which uses optimistic techniques to let transactions read a consistent snapshot of the database while avoiding locks on reads, reducing blocking and improving concurrency. In this implementation, updates use row versioning to detect conflicts at commit, aborting a transaction only if the data has changed since its snapshot was taken. Modern relational DBMS continue to leverage OCC variants for higher isolation levels. PostgreSQL's Serializable Snapshot Isolation (SSI), introduced in version 9.1, employs OCC-like validation to achieve full serializability by tracking read-write dependencies among transactions, extending multi-version concurrency control (MVCC) without requiring traditional locking for all operations. Similarly, Oracle Database supports OCC through optimistic locking mechanisms, particularly for document-centric applications such as JSON-relational duality views, where embedded version (ETAG) checks prevent lost updates in concurrent scenarios. In terms of implementation, OCC in these DBMS typically builds on row versioning within MVCC frameworks: each modified row receives a version number or timestamp upon update, enabling efficient conflict detection at commit time by comparing versions against the transaction's read set. To optimize validation, systems integrate OCC with indexing, for example using index scans to identify potential conflicts involving predicates from the transaction's reads, which avoids full table scans and supports scalable performance under moderate workloads. More recently, as of 2025, Microsoft Fabric Data Warehouse employs optimistic concurrency control through snapshot isolation as its exclusive concurrency model.
Transactions read a consistent snapshot established at their start and detect write-write conflicts only at commit time, avoiding locks to ensure high read concurrency and data consistency while supporting retry logic for aborted transactions. In NoSQL contexts, Apache Cassandra applies OCC principles in its lightweight transactions (LWTs), which use a compare-and-set (CAS) model based on the Paxos consensus protocol to implement conditional updates atomically across replicas, providing linearizable consistency for those operations with modest overhead in distributed settings. Performance evaluations show LWTs achieving high throughput for conditional operations, though with higher latency than unconditional writes because of the validation round trips.
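The row-versioning pattern described above can be sketched against any SQL database; the snippet below uses Python's built-in sqlite3 purely as a stand-in (the table and column names are invented for the example). The update is applied only if the row's version is unchanged, and a zero row count signals a conflict that the caller should retry.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
             " balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100, 0)")

def optimistic_update(conn, account_id, delta):
    """Read, compute locally, then commit only if the row version is
    unchanged: the version-column variant of OCC."""
    balance, version = conn.execute(
        "SELECT balance, version FROM accounts WHERE id = ?",
        (account_id,)).fetchone()
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (balance + delta, account_id, version))
    return cur.rowcount == 1           # 0 rows updated means a conflict

print(optimistic_update(conn, 1, 50))  # True: no concurrent writer intervened
```

Production systems express the same idea with engine-native row versions or timestamps instead of an explicit column, but the commit-time comparison is identical in spirit.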

Adoption in Web and Distributed Environments

Optimistic concurrency control has been widely adopted in web applications, particularly through HTTP mechanisms that enable conditional requests to manage concurrent modifications without locking resources. In RESTful APIs, entity tags (ETags) serve as version identifiers for resources, allowing clients to perform updates only if the resource has not changed since it was last retrieved. For instance, a client includes an ETag in the If-Match header of a PUT or DELETE request; if the server's current ETag matches, the operation proceeds; otherwise a 412 Precondition Failed response is returned, prompting a retry or conflict resolution. This approach is integral to microservices architectures, where stateless services handle concurrent updates across distributed components with less overhead than pessimistic locking. In distributed systems, optimistic concurrency control supports highly available data models by permitting concurrent writes with validation at commit time, often using versioning or vector clocks to detect conflicts. Amazon's DynamoDB implements this via optimistic locking with a version-number attribute: clients read an item's version, perform local computations, and issue conditional writes that succeed only if the version is unchanged, incrementing it on success and raising a conditional-check exception on mismatch, which the client handles by retrying. Similarly, Dynamo-style stores such as Riak employ vector clocks to track causal relationships among replicas, enabling optimistic updates in which conflicts are resolved semantically during reads rather than by blocking writes. These techniques preserve availability in partitioned networks by allowing writes to proceed locally and deferring reconciliation. Representative examples illustrate OCC principles beyond traditional storage: in version control systems such as Git, concurrent branch development proceeds optimistically, with merge conflicts detected and resolved only when changes are integrated, mirroring the read-modify-validate cycle.
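A server-side sketch of the If-Match flow can be written in a few lines of pure Python; the class below is hypothetical (not part of any web framework) and derives ETags from a hash of the resource body, one common convention.

```python
import hashlib

class EtaggedResource:
    """Server-side sketch of ETag-based conditional updates."""
    def __init__(self, body: bytes):
        self.body = body

    @property
    def etag(self):
        # Strong ETag derived from the current representation.
        return hashlib.sha256(self.body).hexdigest()[:16]

    def put(self, new_body: bytes, if_match: str):
        """Apply the PUT only if the client's ETag is still current."""
        if if_match != self.etag:
            return 412                      # Precondition Failed: re-fetch
        self.body = new_body
        return 200

doc = EtaggedResource(b"v1")
tag = doc.etag                              # client GETs the body and its ETag
assert doc.put(b"v2", if_match=tag) == 200  # first conditional write succeeds
print(doc.put(b"v3", if_match=tag))         # 412: the ETag is now stale
```

On a 412, a well-behaved client re-fetches the resource, reapplies its change to the fresh representation, and retries with the new ETag, exactly the abort-and-restart loop of OCC.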
Real-time collaboration tools, such as Google Docs, build on operational transformation (OT), an optimistic concurrency framework that transforms concurrent edits into a consistent state without aborts, preserving user intentions through well-defined transformations on edit sequences. Adapting OCC to distributed environments introduces challenges, particularly under network partitions, where stale reads can lead to validation failures and increased retry rates. Systems mitigate this through exponential backoff in retries and conditional updates that incorporate timestamps or logical clocks to filter outdated versions. In the 2020s, OCC has also gained traction in serverless paradigms, where event-driven functions use conditional versioning for safe concurrent state updates in scalable, stateless workflows.
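The backoff mitigation mentioned above amounts to a small retry wrapper around any optimistic transaction attempt. The sketch below is illustrative (the function names are invented); txn_fn stands for one attempt that returns True on successful validation and False on conflict.

```python
import random
import time

def run_with_retries(txn_fn, max_attempts=5, base_delay=0.01):
    """Retry an optimistic transaction with jittered exponential backoff.

    txn_fn: callable returning True on commit, False on conflict/abort.
    """
    for attempt in range(max_attempts):
        if txn_fn():
            return True
        # Full jitter: sleep a random time in [0, base * 2^attempt) so
        # retrying clients spread out instead of colliding again.
        time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
    return False

# A contrived txn_fn that conflicts twice before succeeding:
attempts = iter([False, False, True])
print(run_with_retries(lambda: next(attempts)))  # True, on the third attempt
```

Bounding the attempt count and surfacing the final failure to the caller keeps a hot item from turning conflict storms into unbounded latency.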
