
Two-phase commit protocol

The two-phase commit (2PC) protocol is a distributed algorithm that coordinates multiple nodes in a distributed transaction to ensure atomicity, meaning all participants either commit their local changes or abort them entirely, preventing partial updates across distributed resources. Introduced by Jim Gray in his 1978 paper "Notes on Data Base Operating Systems," the protocol formalizes a mechanism for reliable transaction commitment in environments like distributed databases, where a single transaction spans multiple autonomous sites. It operates under a coordinator-participant model, where one node acts as the coordinator and the others as participants (or cohorts), each managing local resources such as database locks and logs. The protocol's core strength lies in its use of durable logging to support recovery: participants force log records to stable storage during preparation, enabling idempotent operations for redo or undo in case of failures.

In operation, 2PC proceeds in two distinct phases. During the first phase (the prepare or voting phase), the coordinator sends a "prepare" message to all participants, prompting each to lock resources, perform local work, and write prepare log entries; participants then respond with a "yes" vote if ready to commit or "no" if unable to proceed. If the coordinator receives unanimous "yes" votes, it enters the second phase (the commit phase) by logging the decision and broadcasting a "commit" message, after which participants acknowledge completion and release locks; any "no" vote or timeout triggers an "abort" decision instead. This process typically involves up to 4n messages for n participants, ensuring atomicity while tolerating certain failures through timeouts and recovery protocols.

Widely adopted in enterprise systems, 2PC underpins standards like X/Open XA for transaction managers in databases (e.g., IBM IMS, Oracle) and middleware, facilitating applications in banking, e-commerce, and cloud services where data consistency across sites is critical. However, it is a blocking protocol: if the coordinator fails after the prepare phase but before broadcasting the decision, participants may remain locked until recovery, potentially causing delays or deadlocks in high-availability environments. Optimizations such as presumed commit or presumed abort variants, along with three-phase commit extensions, address these issues by reducing logging overhead and blocking risks, though the basic 2PC remains foundational for its simplicity and guarantees.

Prerequisites and Assumptions

System Model

The two-phase commit protocol operates within a system comprising multiple nodes, each managing local resources such as databases or files, that must collectively execute a transaction atomically to maintain consistency across the network. These nodes communicate via message passing, and the system assumes processes run at arbitrary speeds but progress eventually, with faults being rare and detectable. The protocol ensures that updates to shared resources are either all applied or all discarded, preventing partial states that could lead to inconsistencies in distributed environments.

In this model, the system designates distinct roles: a coordinator, typically a central transaction manager that initiates and orchestrates the commit decision, and participants, which are resource managers at each site responsible for preparing local effects and executing the final decision. The coordinator drives the protocol by soliciting votes from participants on whether they can commit their portion of the transaction. Participants, in turn, lock resources during preparation and transition states only upon receiving the coordinator's directive, relying on stable storage to log decisions for recovery.

The underlying model emphasizes atomicity as a core property, requiring that the entire transaction commits only if all participants agree to do so, or aborts otherwise, thereby helping preserve consistency, isolation, and durability across nodes despite potential failures. This all-or-nothing guarantee extends single-node commit semantics to distributed settings, where partial failures could otherwise violate invariants. For instance, in a banking application performing a transfer between two accounts on separate databases, the protocol coordinates the debit from one account and the credit to the other, ensuring the transfer completes fully or reverts entirely to avoid lost or duplicated funds.
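
The division of responsibilities can be made concrete with a minimal sketch. The Python below uses hypothetical Participant and Coordinator classes, with an in-memory list standing in for stable-storage logging, so it models only the roles and message order, not durability or failure handling:
Coordinator and participant roles (illustrative Python sketch):
class Participant:
    def __init__(self, name):
        self.name = name
        self.log = []                      # stand-in for a stable-storage log
    def prepare(self, txn_id):
        # Lock resources, do local work, force a prepare record, then vote.
        self.log.append(("prepare", txn_id))
        return "yes"
    def commit(self, txn_id):
        self.log.append(("commit", txn_id))   # make changes durable, release locks
    def abort(self, txn_id):
        self.log.append(("abort", txn_id))    # roll back changes, release locks
class Coordinator:
    def __init__(self, participants):
        self.participants = participants
        self.log = []
    def run(self, txn_id):
        votes = [p.prepare(txn_id) for p in self.participants]       # phase 1: voting
        decision = "commit" if all(v == "yes" for v in votes) else "abort"
        self.log.append((decision, txn_id))                          # record the decision
        for p in self.participants:                                   # phase 2: decision
            p.commit(txn_id) if decision == "commit" else p.abort(txn_id)
        return decision
# A transfer spanning two account databases commits only if both prepare:
print(Coordinator([Participant("accounts-A"), Participant("accounts-B")]).run("txn-1"))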

Reliability Assumptions

The two-phase commit protocol relies on a set of reliability assumptions to guarantee the atomicity of distributed transactions across multiple nodes. It operates in an asynchronous communication model, where message delivery times are unbounded, but assumes reliable and ordered transmission without losses or undetected duplicates, often enforced through sequence numbering and acknowledgment protocols in communication sessions. This model detects and recovers from message omissions via timeouts and retries, ensuring that all nodes eventually receive consistent decisions despite delays.

A core requirement is the availability of stable storage at each participant node, where transaction decisions and logs are written durably before any commit messages are sent. This non-volatile storage survives node crashes independently of volatile memory, allowing nodes to persist critical state information such as prepare votes or final outcomes. Without stable storage, the protocol could not ensure atomicity across failures, as transient crashes might leave inconsistent states across the system.

The protocol assumes a crash-recovery fault model, where nodes may fail by stopping (fail-stop semantics) and later recover, but they do not exhibit Byzantine behavior such as sending conflicting or malicious messages. Upon recovery, nodes replay logs to redo committed actions or undo uncommitted ones, restoring local state without violating the global outcome. This model excludes network partitions that permanently prevent message delivery, relying instead on recoverable sessions to maintain coordination.

These assumptions were formalized in early distributed database research, notably by Jim Gray in 1978, who introduced the protocol for transaction management in systems like System R, emphasizing honest nodes and robust logging to handle crashes in multi-node environments.
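
The stable-storage requirement amounts to forcing log records to disk before acting on them. A minimal sketch, assuming a plain append-only file stands in for a real write-ahead log (force_log is a hypothetical helper name):
Forced log write (illustrative Python sketch):
import json, os
def force_log(path, record):
    # Append a record and force it to stable storage before returning.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
        f.flush()
        os.fsync(f.fileno())      # the record must survive a crash of this node
# A participant would force its prepare record before voting "yes":
force_log("participant.log", {"txn": "txn-1", "state": "prepared"})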

Core Protocol Mechanics

Voting Phase

In the voting phase of the two-phase commit protocol, the coordinator initiates the process by sending a "prepare" message to all participants after completing its local validation and ensuring that the transaction can proceed on its end. This message prompts each participant to assess whether it can commit the transaction locally. Upon receiving the prepare message, each participant performs necessary local operations, including acquiring locks on relevant resources, verifying integrity and consistency constraints, and writing a prepare record to its stable log to indicate readiness. If these checks succeed, the participant transitions to a prepared state, logs the necessary redo and undo information for recovery, and responds to the coordinator with a "yes" vote signifying it is ready to commit; otherwise, it responds with a "no" vote, indicating an abort is required due to failure in local execution. This phase exhibits a blocking characteristic, as participants retain locks on resources from the moment they receive the prepare message and enter the prepared state until they receive the final decision from the coordinator, potentially stalling concurrent transactions that require those resources. Timeouts limit some of these waits: a participant that has not yet voted may unilaterally abort if the prepare message or its local work stalls, but once it has voted "yes" it can no longer abort on its own and must wait for, or actively query for, the coordinator's decision. The coordinator's logic in this phase can be outlined as follows, with a participant-side sketch after the outline:
Coordinator Voting Phase:
1. Send prepare message to all participants.
2. Wait for responses from all participants (with timeout handling).
3. If all responses are "yes": Decide to commit (proceed to decision phase).
4. If any response is "no": Decide to abort (proceed to decision phase).
5. Log the decision in stable storage.

Decision Phase

In the decision phase of the two-phase commit protocol, the coordinator aggregates the responses received from all participants during the preceding voting phase. If every participant has indicated readiness to commit by responding affirmatively, the coordinator decides to commit the transaction; otherwise, if any participant has voted to abort, the coordinator decides to abort. This binary decision mechanism ensures that the outcome is unanimous across the distributed system.

Upon reaching its decision, the coordinator records the global outcome (commit or abort) in its stable storage log to guarantee durability and support recovery in case of failures. It then broadcasts the corresponding message ("commit" or "abort") to all participants over the network. Participants, upon receiving the message, execute the instructed action: for a commit, they make the transaction's updates permanent by writing them to stable storage and release any held locks or resources; for an abort, they roll back the changes and release resources. To confirm completion, each participant acknowledges the message back to the coordinator only after logging the outcome in its own stable storage, ensuring the decision is persisted locally before proceeding.

This phase enforces the all-or-nothing property of transactions by centralizing the final decision at the coordinator and requiring explicit logging and acknowledgment from all participants. No partial commits are possible, as the coordinator waits until every acknowledgment arrives or a failure is detected, thereby maintaining atomicity across all involved sites even under partial network partitions or site failures. The coordinator retains its record of the global decision to resolve any uncertainties during subsequent recovery.
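
Under the same assumptions as the earlier sketches (hypothetical names, an in-memory list in place of a durable log), the decision phase reduces to logging the outcome, broadcasting it, and gathering acknowledgments, in that order:
Coordinator decision phase (illustrative Python sketch):
def decision_phase(votes, participants, coordinator_log):
    # votes: "yes"/"no" strings collected in the voting phase; participants expose
    # commit()/abort(); coordinator_log is a stand-in for stable storage.
    decision = "commit" if all(v == "yes" for v in votes) else "abort"
    coordinator_log.append(decision)             # 1. record the global decision first
    acks = []
    for p in participants:                       # 2. broadcast the outcome
        p.commit() if decision == "commit" else p.abort()
        acks.append(p.name)                      # 3. collect acknowledgments
    return decision, acks
class StubParticipant:
    def __init__(self, name): self.name = name
    def commit(self): print(self.name, "committed, locks released")
    def abort(self): print(self.name, "rolled back, locks released")
print(decision_phase(["yes", "yes"], [StubParticipant("A"), StubParticipant("B")], []))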

Participant States and Transitions

In the two-phase commit protocol, participants (resource managers) operate according to a state machine that ensures atomicity by coordinating changes across distributed sites. The key states for a participant are Active, Prepared, Committed, and Aborted, each representing a distinct stage of readiness and durability. The Active state is the initial condition, where the participant processes the transaction locally but has not yet received a prepare request from the coordinator; resources may be temporarily held, but the participant retains the ability to unilaterally abort. Upon receiving the prepare message, the participant evaluates local commit feasibility, locks resources if possible, and logs a prepare record; a successful prepare triggers a transition to the Prepared state, where the participant votes "yes" and waits for the coordinator's decision, with resources now locked against further changes. The Prepared state is semi-committed, as the participant has abdicated unilateral abort rights but requires external coordination to finalize.

From the Prepared state, the participant transitions to Committed upon receiving a commit decision, at which point local changes are made durable by writing to stable storage and locks are released; this state is terminal and irreversible. Alternatively, an abort decision leads to the Aborted state, where partial effects are rolled back, locks are released, and the transaction is undone durably. The Aborted state can also be entered directly from Active if the participant votes "no" during preparation or encounters a local failure.

The coordinator (transaction manager) maintains its own simplified state machine, starting in an Idle state before initiating the protocol, transitioning to a Preparing (or Wait) state upon sending prepare messages and awaiting votes, then to a Decided state after collecting responses, and finally recording Commit or Abort as it broadcasts the outcome. State transitions are designed for idempotency, allowing safe retries of messages (e.g., duplicate prepare or commit requests) without altering an already-finalized state, which supports recovery from communication failures. The logical flow of the participant state machine can be summarized as follows:
From State | Trigger | To State | Action Performed
Active | Prepare message received | Prepared | Lock resources, log prepare, vote yes
Active | Local failure or no vote | Aborted | Rollback changes, release locks
Prepared | Commit message received | Committed | Make changes durable, release locks
Prepared | Abort message received | Aborted | Rollback changes, release locks
This table illustrates the deterministic progression, ensuring no cycles or ambiguities in normal execution. For recovery after a crash, a participant scans its durable logs upon restart to determine its state: if no prepare record exists, it was Active and can forget the transaction; if it was Prepared but no decision is logged, it contacts the coordinator (or uses a termination protocol) to obtain the final outcome and transitions accordingly; Committed or Aborted states are recovered directly from logs and confirmed idempotently. This log-based recovery preserves the protocol's atomicity guarantees, preventing partial commits across sites.
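
The transition table translates directly into a lookup structure. The sketch below (hypothetical event names, no real locking or logging) rejects transitions the table does not allow and treats redelivered decisions as no-ops, which is the idempotent behavior described above:
Participant state machine (illustrative Python sketch):
# (state, event) -> (next state, action); events not listed are rejected.
TRANSITIONS = {
    ("active",   "prepare"): ("prepared",  "lock resources, log prepare, vote yes"),
    ("active",   "fail"):    ("aborted",   "roll back changes, release locks"),
    ("prepared", "commit"):  ("committed", "make changes durable, release locks"),
    ("prepared", "abort"):   ("aborted",   "roll back changes, release locks"),
}
def step(state, event):
    # Re-delivering a decision to a terminal state is a harmless no-op (idempotency).
    if state in ("committed", "aborted"):
        return state, "ignored (already finalized)"
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state} on {event}")
    return TRANSITIONS[(state, event)]
state = "active"
for event in ("prepare", "commit", "commit"):    # duplicate commit message is harmless
    state, action = step(state, event)
    print(event, "->", state, "|", action)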

Protocol Execution and Outcomes

Successful Commit Flow

In the successful commit flow of the two-phase commit (2PC) protocol, the process begins when a transaction is initiated across multiple resource managers (RMs) coordinated by a transaction manager (TM). The TM sends a prepare request to all participating RMs, prompting each to perform local preparations, such as acquiring necessary locks and writing a prepare log entry to stable storage, indicating readiness to commit if instructed. Each RM responds affirmatively with a "yes" vote (or prepared acknowledgment) only if it can guarantee the ability to commit, thereby entering the prepared state and forgoing the option of unilateral abort.

Upon receiving affirmative votes from all RMs, the TM enters the committed state, writes a commit log entry to its stable storage, and broadcasts a commit decision message to all RMs. Each RM, upon receiving the commit message, applies the changes (e.g., making them visible to other transactions), writes a commit log entry, releases any held locks, and sends an acknowledgment back to the TM. Once all acknowledgments are received, the TM completes the transaction and releases any associated resources, ensuring the entire operation concludes successfully.

This flow guarantees atomicity by ensuring that all participating RMs either fully commit their local changes or none do, treating the distributed transaction as a single indivisible unit; changes are only made durable and visible after the global commit decision, preventing partial updates. Durability is achieved through two-phase logging: the initial prepare log records the intent to commit, allowing recovery to the prepared state post-crash, while the subsequent commit log confirms the final decision, enabling RMs to apply changes even if acknowledgments are lost.

A representative example occurs in a distributed database system where a banking application transfers funds between accounts on separate nodes. The TM coordinates the RMs on each node to prepare debiting one account and crediting the other; upon receiving all yes votes, the commit decision propagates the changes, atomically updating balances across nodes while durable logging protects against failures. In terms of performance, the successful commit flow incurs a latency equivalent to twice the network round-trip time in a typical setup: one round-trip for the voting phase (prepare requests and responses) and another for the decision phase (commit messages and acknowledgments), highlighting the protocol's coordination overhead in distributed environments.
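
A back-of-the-envelope model of that cost, under the assumption that network round trips and forced log writes dominate the critical path (the numbers below are illustrative, not measurements):
Happy-path latency estimate (illustrative Python sketch):
def commit_latency(rtt_ms, fsync_ms):
    # Assumption: two round trips plus one forced participant prepare write and one
    # forced coordinator decision write sit on the critical path; acks overlap later work.
    voting_phase = rtt_ms + fsync_ms
    decision_phase = rtt_ms + fsync_ms
    return voting_phase + decision_phase
print(commit_latency(rtt_ms=1.0, fsync_ms=2.0), "ms (illustrative numbers)")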

Failure and Abort Handling

The two-phase commit (2PC) protocol handles aborts when any participant votes "no" during the voting phase, indicating it cannot commit the transaction due to local constraints such as resource unavailability or constraint violations. In such cases, the coordinator immediately transitions to the decision phase and broadcasts an abort message to all participants, prompting them to roll back their local changes and release any held locks.

Coordinator crashes during the voting phase trigger participant timeouts, as participants wait for a decision after sending their votes; if the coordinator crashes after a participant has sent its "yes" vote but before sending the decision, the participant detects the failure via timeout and enters a recovery state, typically blocking until it can query the recovered coordinator (or its logs) to learn the final outcome and either commit or abort accordingly. Similarly, if a participant crashes after voting "yes" but before receiving the decision, it recovers by examining its local log upon restart; if it finds itself in the prepared state, it queries the coordinator (or a recovery mechanism) to determine the outcome and either commits or aborts accordingly. The coordinator, upon its own recovery, uses its log to respond to such queries, ensuring all participants align on the abort decision if no commit was recorded.

These recovery protocols rely on durable logging at both coordinators and participants to record votes, decisions, and states in stable storage before messaging, preventing partial commits and preserving atomicity. Aborts ensure no partial commits occur, as local changes remain tentative until the commit phase and are rolled back using undo logs, thereby maintaining global consistency across the distributed system. For instance, in a network partition where participants cannot reach the coordinator after voting, the resulting timeout leads to a safe abort at each participant only if no commit decision has been made; otherwise, blocking preserves consistency while the partition resolves. However, without proper timeout mechanisms, 2PC can experience indefinite blocking, potentially leading to distributed deadlocks where participants wait indefinitely for responses from failed components.
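
The restart behavior can be sketched as a simple scan of the participant's log. The function below assumes a newline-delimited JSON log like the earlier prepare sketch and a caller-supplied query_coordinator callable, both hypothetical:
Participant restart recovery (illustrative Python sketch):
import json
def recover(log_lines, query_coordinator):
    # log_lines: durable records like {"txn": ..., "state": ...}, oldest first.
    # query_coordinator: callable returning "commit" or "abort" for a txn id.
    last = {}
    for line in log_lines:
        rec = json.loads(line)
        last[rec["txn"]] = rec["state"]
    actions = {}
    for txn, state in last.items():
        if state in ("commit", "abort"):
            actions[txn] = state                     # decision already durable locally
        elif state == "prepared":
            actions[txn] = query_coordinator(txn)    # in doubt: must ask, cannot guess
        else:
            actions[txn] = "abort"                   # no prepare record: safe to abort
    return actions
log = ['{"txn": "t1", "state": "prepared"}', '{"txn": "t2", "state": "commit"}']
print(recover(log, lambda txn: "commit"))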

Communication Message Flows

The two-phase commit (2PC) protocol relies on a specific set of messages exchanged between the coordinator and participants to achieve atomic commitment. The primary message types include: Prepare, sent by the coordinator to initiate the voting phase; Vote-Yes or Vote-No, responses from participants indicating readiness to commit or intent to abort; Global-Commit or Global-Abort, broadcast by the coordinator in the decision phase to finalize the outcome; and Ack, acknowledgments sent by participants upon receiving the decision to confirm receipt.

In the protocol's sequence, the coordinator first sends a Prepare message to all participants, often in a broadcast-like manner to multiple recipients simultaneously. Each participant responds individually with a unicast Vote-Yes or Vote-No message back to the coordinator. Upon collecting all votes, the coordinator issues a Global-Commit (if the vote is unanimously yes) or Global-Abort message, again typically broadcast to all participants. Finally, participants reply with Ack messages to the coordinator, completing the flow. This sequence ensures all parties reach consensus without partial commits.

The protocol assumes reliable FIFO (first-in, first-out) communication channels between the coordinator and participants, guaranteeing that messages arrive in order without loss or duplication under normal conditions. To handle potential message loss due to timeouts or failures, the protocol incorporates retries, where the coordinator or participants resend messages upon detecting delays. In the success case with N participants, the total message overhead is typically 3N or 4N, consisting of N prepare messages, N vote responses, N commit messages, and optionally N acknowledgments. Broadcasts are often implemented as N point-to-point messages. This linear scaling with N highlights the protocol's efficiency for moderate participant counts but can become a bottleneck in large-scale systems. Modern implementations of 2PC typically use TCP for its built-in reliability to meet the FIFO and loss-free requirements, though some high-performance variants explore UDP with application-level acknowledgments and retries to reduce overhead in low-loss environments.
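
The 3N/4N figure follows directly from counting one prepare, one vote, one decision, and optionally one acknowledgment per participant, as the small sketch below tallies:
Message count for N participants (illustrative Python sketch):
def message_count(n_participants, count_acks=True):
    # Happy path: N prepares + N votes + N decisions (+ N acknowledgments).
    prepares = votes = decisions = n_participants
    acks = n_participants if count_acks else 0
    return prepares + votes + decisions + acks
print(message_count(5))                       # 20 messages (4N)
print(message_count(5, count_acks=False))     # 15 messages (3N)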

Limitations and Trade-offs

Blocking and Deadlock Risks

In the two-phase commit (2PC) protocol, participants that vote affirmatively during the voting phase transition to a prepared state, wherein they retain exclusive locks on all accessed resources to preserve atomicity and isolation. Should the coordinator fail after receiving these votes but before broadcasting the final commit or abort decision, the participants remain blocked in this state, holding the locks indefinitely until the coordinator recovers or external intervention occurs. This resource blocking diminishes system availability, as concurrent transactions cannot acquire the necessary locks, potentially stalling database operations across the distributed environment.

Deadlocks pose an additional risk in 2PC-enabled systems, particularly when multiple transactions involving overlapping resources span different coordinators. For example, consider two transactions, T1 coordinated by C1 and T2 by C2: T1 holds a lock on resource R1 at participant P1 and is waiting for a lock on R2 at P2, while T2 holds a lock on R2 at P2 and is waiting for a lock on R1 at P1. This creates a cyclic wait across sites. If both transactions reach the prepared state after resolving local locks but the global deadlock is detected late, or if the prepared state prolongs the wait due to coordinator delays, the distributed deadlock persists, preventing progress. Such scenarios are exacerbated by the extended lock retention in the prepared phase, amplifying contention in multi-site setups.

To address blocking, implementations often incorporate timeouts at participants, prompting an automatic abort after a predefined interval in the absence of a response. However, this introduces the peril of inconsistency: if the coordinator had resolved to commit but the message was lost, a timeout-induced abort at a participant would violate atomicity, leaving the transaction partially rolled back. Careful configuration of timeout durations is essential, balancing availability against the risk of such divergent outcomes.

The prolonged resource locking inherent to 2PC's prepared state curtails concurrency, as held locks impede parallel transaction execution, thereby reducing overall system throughput in contention-heavy workloads. For instance, extended commit processing can double or triple lock hold times compared to local transactions, leading to increased wait queues and abort rates due to timeouts or deadlocks. These blocking and deadlock risks were identified in early commercial deployments of 2PC during the 1980s, highlighting the challenges of distributed commitment in high-availability environments and prompting subsequent optimizations to enhance resilience.
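
The T1/T2 scenario is a cycle in the global wait-for graph. A generic cycle check like the sketch below (not any particular database's detector) would flag it, although assembling that graph across sites is itself the hard part in practice:
Wait-for graph cycle detection (illustrative Python sketch):
def has_cycle(wait_for):
    # wait_for: {txn: set of txns it is waiting on}; a cycle means a deadlock.
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, ()):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in wait_for if n not in visited)
# T1 holds R1 and waits for R2 (held by T2); T2 holds R2 and waits for R1 (held by T1).
print(has_cycle({"T1": {"T2"}, "T2": {"T1"}}))    # True: the distributed deadlock above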

Single Point of Failure Issues

The two-phase commit (2PC) protocol relies on a centralized coordinator to collect votes from participants during the voting phase and to broadcast the final commit or abort decision, making the coordinator a critical single point of failure that can halt progress across the distributed system. If the coordinator crashes after participants have voted "yes" (prepared to commit) but before issuing the decision, all participants remain in the prepared state, blocking indefinitely as they cannot unilaterally commit or abort without risking inconsistency. This vulnerability amplifies the impact of coordinator failures, as all transaction outcomes depend on this single node, potentially leading to prolonged downtime until recovery.

Recovery from a coordinator crash during the voting phase involves restarting the coordinator, which reconstructs the transaction state from its durable logs and then issues or resends a decision to unblock participants. In the decision phase, if the crash occurs before the commit messages are fully disseminated, recovered participants can query the restarted coordinator for the outcome, allowing them to proceed accordingly; however, if the crash happens after some commit messages have been sent but before acknowledgments are received, a separate completion procedure may be required in which the coordinator verifies participant states via logs or inquiries to ensure all have committed. These steps rely on the coordinator forcing decisions to stable storage prior to broadcasting, but the process introduces delay and blocking, as participants must wait without useful timeouts in strict 2PC implementations.

The X/Open XA standard, formalized in the 1990s to provide interfaces for distributed transaction processing, standardizes the two-phase commit mechanics between transaction managers (coordinators) and resource managers (participants) but does not inherently mitigate the coordinator's single point of failure through features like hot-standby redundancy, leaving such availability enhancements to vendor-specific implementations. This design choice underscores a fundamental trade-off in 2PC: the simplicity of centralized coordination enables straightforward atomicity guarantees but compromises system availability, as even transient coordinator failures can block transactions involving multiple sites until manual or automated recovery completes.

Implementations and Enhancements

Centralized Coordinator Architecture

In the centralized coordinator architecture of the two-phase commit protocol, a designated coordinator, typically implemented as a transaction manager, orchestrates the entire process among multiple participants that act as resource managers. This setup aligns with standards like the X/Open XA specification, where the transaction manager communicates with resource managers (such as database instances) to ensure atomic commitment across distributed resources. The coordinator initiates the protocol by sending prepare requests to all participants, collects their responses, and then issues a global commit or abort decision, thereby centralizing control for simplicity and reliability in enterprise environments.

To support recovery and failure handling, the coordinator maintains logs recording each participant's vote (yes or no) and the final decision, writing these to stable storage before notifying participants. Participants, in turn, log their prepared state (indicating readiness to commit) prior to acknowledging the coordinator, ensuring that outcomes can be reconstructed even after crashes. This mechanism is integral to the protocol's durability, preventing inconsistencies during recovery by allowing the coordinator to resend decisions based on logged votes. The architecture integrates with APIs like the XA interface, which provides standardized functions for transaction managers to enlist and manage resource managers in distributed systems, facilitating adoption in heterogeneous setups.

One key advantage is its simplicity, as the single-coordinator model requires minimal coordination logic and is straightforward to implement and debug compared to more complex variants. In practice, this architecture is widely employed in database systems for handling distributed transactions; for instance, Oracle Database uses it to coordinate two-phase commits across multiple nodes, ensuring data consistency in clustered environments, while MySQL supports it through XA transactions for similar multi-resource coordination. The core phases are executed via direct message flows from the coordinator to participants, maintaining the protocol's efficiency in centralized setups.
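
The enlist-then-commit shape of this architecture can be sketched as follows; the class and method names are hypothetical Python stand-ins for the standardized TM/RM interfaces, not the actual XA API, and logging and failure handling are omitted:
Enlistment in a centralized transaction manager (illustrative Python sketch):
class TransactionManager:
    def __init__(self):
        self.enlisted = []
    def enlist(self, resource_manager):
        # Called as the application touches each resource during the transaction.
        if resource_manager not in self.enlisted:
            self.enlisted.append(resource_manager)
    def commit(self, txn_id):
        votes = [rm.prepare(txn_id) for rm in self.enlisted]            # phase 1
        decision = "commit" if all(votes) else "abort"
        for rm in self.enlisted:                                        # phase 2
            rm.commit(txn_id) if decision == "commit" else rm.rollback(txn_id)
        return decision
class FakeRM:
    def __init__(self, name): self.name = name
    def prepare(self, txn): return True
    def commit(self, txn): print(self.name, "commit", txn)
    def rollback(self, txn): print(self.name, "rollback", txn)
tm = TransactionManager()
tm.enlist(FakeRM("orders-db"))
tm.enlist(FakeRM("payments-db"))
print(tm.commit("txn-42"))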

Presumed Abort Optimization

The presumed abort optimization modifies the standard two-phase commit protocol by assuming that a transaction has aborted unless explicit evidence of a commit exists in the coordinator's log, thereby minimizing the need for persistent logging of participant votes. In this variant, the coordinator does not force-write the individual "yes" votes received during the prepare phase and need not durably record abort outcomes; it force-writes only commit decisions, together with a completion record once participant acknowledgments arrive. This reduces the volume of stable storage operations, as routine aborts can be handled without durable records after notifying participants.

During recovery from a crash, if no log entry for a transaction is found when a participant inquires about the outcome, the coordinator presumes an abort and directs the participant to roll back, effectively narrowing the period of outcome uncertainty compared to the baseline protocol. This approach builds on the centralized coordinator model by deferring durable logging until a commit actually requires it, which is especially effective in failure-prone systems where many transactions abort. Key benefits include significantly lower storage overhead, as aborted transactions require no long-term log retention, and accelerated abort flows, where the coordinator can broadcast abort messages without awaiting full acknowledgments from all participants, streamlining recovery in high-abort scenarios. A drawback arises in commit scenarios: participants that forget their local state or experience delays may resend inquiries, forcing the coordinator to retrieve and retransmit commit decisions, which can elevate message traffic beyond the standard protocol's levels.
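
The recovery rule reduces to a default value. The sketch below (a dict standing in for the coordinator's durable log, hypothetical names) answers a participant's in-doubt inquiry by presuming abort whenever no commit record survives:
Presumed abort inquiry handling (illustrative Python sketch):
def answer_inquiry(coordinator_log, txn_id):
    # Presumed abort: with no durable commit record, the answer is "abort".
    return coordinator_log.get(txn_id, "abort")
durable_log = {"txn-7": "commit"}             # only commits leave a record behind
print(answer_inquiry(durable_log, "txn-7"))   # commit
print(answer_inquiry(durable_log, "txn-9"))   # abort (no record, presumed aborted)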

Presumed Commit Optimization

The presumed commit optimization is a variant of the two-phase commit (2PC) protocol designed to enhance efficiency in scenarios where commits are the common outcome, by reducing logging and recovery overhead at the coordinator. In this approach, the coordinator, after receiving votes to commit from all participants in the prepare phase and subsequently sending commit messages, discards its transaction logs once acknowledgments (acks) are received from participants. This optimization assumes that commits will succeed by default, allowing the system to forgo persistent storage of commit decisions after successful completion. In the classic formulation, the coordinator also force-writes a record of the transaction and its participants before starting the prepare phase, so that a crash before any decision is not later mistaken for a commit.

The protocol flow modifies the standard 2PC to minimize recovery queries along successful paths. Following the vote-to-commit phase, the coordinator sends commit directives to participants and maintains logs only until all acks confirm receipt and execution of the commits. If a participant times out without a response during recovery (e.g., after a coordinator crash and restart), it presumes the transaction committed and proceeds accordingly, querying other participants if needed to confirm the outcome. This reduces the number of messages exchanged in the common case, as the coordinator no longer needs to retain full logs for post-commit verification.

Key benefits include faster overall commit times due to decreased logging persistence and fewer recovery interactions in environments with low failure rates, where most transactions succeed without interruption. For instance, in systems with reliable networks and stable nodes, this leads to lower overhead compared to traditional 2PC, as the coordinator's storage requirements drop significantly after acks. However, the optimization introduces drawbacks, particularly in crash-prone settings, where an incomplete set of acks at the time of coordinator failure could result in lost commit information, forcing participants to perform additional queries during recovery. This makes it riskier for transactions that might require durable proof of outcome, potentially increasing uncertainty in high-failure scenarios. Presumed commit is often used complementarily with presumed abort techniques in advanced 2PC implementations, balancing efficiency for both commit and abort paths while minimizing logging overall.
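
By symmetry with the presumed abort sketch, the inquiry rule inverts the default; this simplified sketch ignores the initiation record a full presumed-commit implementation would also keep:
Presumed commit inquiry handling (illustrative Python sketch):
def answer_inquiry_presumed_commit(coordinator_log, txn_id):
    # Presumed commit: with no durable abort record, the answer is "commit".
    return coordinator_log.get(txn_id, "commit")
durable_log = {"txn-3": "abort"}              # aborts are what gets recorded durably
print(answer_inquiry_presumed_commit(durable_log, "txn-3"))   # abort
print(answer_inquiry_presumed_commit(durable_log, "txn-8"))   # commit (presumed)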

Hierarchical and Tree-based Variants

In tree-based variants of the two-phase commit (2PC) protocol, the participating nodes are organized into a tree where intermediate nodes act as sub-coordinators, enabling efficient coordination for large numbers of participants. The root coordinator broadcasts prepare messages to its immediate children, which recursively propagate the vote request down to the leaf participants; votes are then aggregated upward through acknowledgments from sub-trees before the root issues a global decision, which propagates downward similarly. This hierarchical structure limits each node's communication to its children rather than requiring direct exchanges with all nodes, reducing the communication depth and per-node load to O(log N) while keeping total message complexity at O(N) in a balanced tree of N nodes.

Hierarchical 2PC extends this model by layering multiple levels of sub-coordinators to manage distributed transactions in expansive systems, such as cloud-based databases with thousands of nodes. A top-level coordinator interacts only with regional or site-level sub-coordinators, each of which independently runs 2PC within its scope, collecting votes from local participants and reporting aggregated provisional statuses (commit-ready or abort) to its superior, before the global decision cascades back through the layers. This approach draws from nested transaction frameworks, where sub-transactions maintain provisional outcomes until parent-level resolution, allowing partial independence in decision-making.

These variants offer key benefits in scalability, supporting coordination across thousands of nodes by distributing the load and minimizing wide-area network traffic through localized aggregation, and in fault isolation, as failures in one sub-tree can be handled with independent recovery protocols without halting the entire system. However, they introduce challenges, including heightened complexity from managing transaction identifiers and status propagation across layers, as well as the risk of cascading failures if a critical sub-coordinator becomes unavailable, potentially delaying global resolution.

Tree-based and hierarchical 2PC find application in large-scale distributed systems requiring atomicity over vast participant sets. For instance, Google's Spanner integrates 2PC across Paxos-replicated groups, where group leaders serve as coordinators and participant leaders for multi-shard transactions, enhancing scalability to global datacenters while maintaining availability through automatic leader failover. Similarly, Apache Kafka employs a coordinator-driven commit protocol resembling 2PC for transactional producers (introduced in Kafka 0.11, 2017), ensuring exactly-once delivery by committing offsets and markers across multiple partitions in high-throughput streaming environments. With the implementation of KIP-939 in Kafka 4.0 (March 2025), Kafka can also participate in external 2PC protocols as a resource manager.
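
The recursive shape of the tree variant can be sketched as two traversals; the snippet below (a hypothetical in-memory Node structure, no networking, logging, or timeouts) aggregates votes up from the leaves and pushes the decision back down:
Tree-based vote aggregation (illustrative Python sketch):
class Node:
    def __init__(self, name, vote=True, children=()):
        self.name, self.vote, self.children = name, vote, list(children)
def collect_votes(node):
    # Phase 1: a node reports "yes" only if it and its entire sub-tree vote yes.
    return node.vote and all(collect_votes(c) for c in node.children)
def propagate_decision(node, decision):
    # Phase 2: each sub-coordinator relays the global decision to its children.
    print(f"{node.name}: {decision}")
    for c in node.children:
        propagate_decision(c, decision)
root = Node("root", children=[
    Node("region-A", children=[Node("p1"), Node("p2")]),
    Node("region-B", children=[Node("p3", vote=False)]),   # one leaf refuses to prepare
])
decision = "commit" if collect_votes(root) else "abort"
propagate_decision(root, decision)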
