
ACID

ACID (Atomicity, Consistency, Isolation, Durability) is a set of four key properties that ensure the reliability and correctness of database transactions by guaranteeing data validity and integrity in the face of errors, concurrent operations, and system failures. The concept originated from foundational work in transaction processing, first systematically described by Jim Gray in his 1981 paper "The Transaction Concept: Virtues and Limitations," where he outlined the essential characteristics of transactions without using the acronym. The ACID acronym itself was coined in 1983 by Andreas Reuter and Theo Härder in their paper "Principles of Transaction-Oriented Database Recovery," building directly on Gray's principles to encapsulate these properties as a standard for robust database systems. Atomicity ensures that a transaction is treated as a single, indivisible unit: either all of its operations are successfully completed (committed), or none of them take effect (aborted), preventing partial updates that could leave the database in an inconsistent state. Consistency requires that every transaction brings the database from one valid state to another, adhering to all defined rules, constraints, triggers, and protocols without violating data integrity. Isolation guarantees that concurrent transactions execute in a way that appears serial, meaning the intermediate states of one transaction are hidden from others to avoid interference and ensure predictable outcomes. Durability mandates that once a transaction has committed, its effects are permanently persisted, even in the event of power failures, crashes, or other disruptions, typically achieved through logging and recovery mechanisms. These properties form the cornerstone of relational database management systems (RDBMS) and have influenced consistency models in various distributed and NoSQL databases, balancing reliability with performance in modern data-intensive applications. While ACID compliance provides strong guarantees for financial, banking, and other systems requiring high data accuracy, it can introduce trade-offs in scalability and availability for highly distributed environments, leading to explorations of alternative models like BASE (Basically Available, Soft state, Eventually consistent).

Fundamentals

Definition and Purpose

The ACID model refers to a set of properties—Atomicity, Consistency, Isolation, and Durability—that ensure the reliability of database transactions in database management systems (DBMS). These properties were first formalized by Jim Gray in his 1981 paper "The Transaction Concept: Virtues and Limitations," which introduced the foundational concepts of atomicity, consistency, isolation (as independence), and durability to address the challenges of maintaining correctness during state transformations in transaction processing systems. The acronym ACID itself was coined in 1983 by Theo Härder and Andreas Reuter in their paper "Principles of Transaction-Oriented Database Recovery," providing a concise framework for evaluating recovery schemes in transaction-oriented databases. The primary purpose of the ACID model is to guarantee that database transactions are processed reliably, thereby preventing data loss or inconsistencies arising from system errors, hardware crashes, or concurrent user access. In essence, ACID establishes a contract between the database system and its users, ensuring that the effects of a transaction are predictable and preserved even under failure conditions. This reliability is crucial for applications where data accuracy is paramount, as it mitigates risks associated with partial updates or interference between concurrent operations. A transaction, in this context, is a logical unit of work comprising one or more read or write operations on the database that must either succeed entirely or have no effect at all, treating the sequence as an indivisible whole. By enforcing these guarantees, the ACID model maintains the overall accuracy and trustworthiness of the data, enabling support for business-critical operations such as financial transfers in banking systems, where even a single failure could lead to significant discrepancies.

Historical Development

The concept of transactions in database systems emerged in the 1970s amid efforts to ensure reliability in relational databases, particularly through IBM's System R project, where Jim Gray contributed to foundational ideas for managing concurrent operations and recovery from failures using non-volatile storage mechanisms like transaction logs. System R, developed from 1974 to 1979, introduced early transaction support to handle concurrent updates and recovery in a prototype SQL-based system, addressing the limitations of earlier systems in emerging online environments. In the early 1980s, while working on fault-tolerant systems at Tandem Computers after leaving IBM in 1980, Jim Gray formalized the core properties of transactions in his seminal 1981 paper, describing atomicity (all-or-nothing execution), consistency (preservation of system invariants), durability (survival of committed effects against failures), and isolation (protection of concurrent transactions through mechanisms like locking). This work built directly on System R's innovations and emphasized the need for robust transaction processing in fault-tolerant systems. The acronym ACID—standing for these properties—was coined two years later in 1983 by Theo Härder and Andreas Reuter in their paper on transaction-oriented database recovery, providing a concise framework to unify these virtues for database designers and implementers. The ACID model gained broader adoption through standardization efforts in the late 1980s and early 1990s, notably in the ANSI SQL-92 standard, which formalized isolation levels (such as serializable, repeatable read, read committed, and read uncommitted) to support ACID compliance in commercial relational database management systems. Concurrently, extensions to distributed systems emerged, with protocols like two-phase commit—originally proposed by Gray in the late 1970s—becoming central in the 1980s for coordinating ACID transactions across multiple nodes, ensuring atomic commitment despite partitions and failures. A key milestone was Gray and Andreas Reuter's 1993 book Transaction Processing: Concepts and Techniques, which synthesized these developments into a comprehensive tutorial on implementing ACID in high-performance, distributed environments.

Core Properties

Atomicity

Atomicity is one of the core properties of the ACID paradigm in transaction processing, ensuring that a transaction is treated as an indivisible unit of work. It guarantees that either all operations within the transaction are successfully completed or none of them take effect, preventing any intermediate states from becoming visible to the system or other transactions. This all-or-nothing semantics protects the database from partial updates that could arise due to errors, failures, or interruptions during execution. The mechanism for achieving atomicity relies on commit and rollback operations. A transaction commits only when all its actions are verified as complete, at which point the changes are made permanent and visible to concurrent transactions; conversely, a rollback aborts the transaction and reverts all changes, restoring the database to its pre-transaction state. These operations are supported through logging protocols, where each action is recorded before application, enabling precise control over the transaction's outcome. Atomicity plays a critical role in error handling by preventing scenarios where partial failures leave the database in an inconsistent state, such as in a funds transfer where one account is debited but the recipient's is not credited due to an interruption. By enforcing complete success or total failure, it safeguards against hardware faults, software bugs, or user aborts. This property is foundational for reliable transaction processing in distributed and centralized systems alike. Atomicity ties into recovery mechanisms through logging techniques that facilitate undo and redo operations during system failures. Logs capture before-and-after images of data modifications, allowing the system to undo uncommitted changes (rollback) or redo committed ones to reconstruct the correct state from a stable checkpoint. This integration ensures that even after crashes, the all-or-nothing guarantee holds, complementing durability by persisting committed effects post-recovery.
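The commit and rollback mechanism can be made concrete with a toy undo log. The following sketch is purely illustrative—the `Transaction` class and a plain dictionary standing in for the database are invented here—but it shows the core discipline: a before-image is recorded ahead of every write so that an abort can restore the exact pre-transaction state.

```python
# A toy undo log illustrating all-or-nothing semantics. Names are illustrative;
# a plain dict stands in for the database.

class Transaction:
    def __init__(self, store):
        self.store = store          # shared dict acting as the database
        self.undo_log = []          # before-images, appended prior to each write

    def write(self, key, value):
        # Log the before-image first, then apply the change.
        self.undo_log.append((key, self.store.get(key)))
        self.store[key] = value

    def commit(self):
        self.undo_log.clear()       # changes become permanent; nothing to undo

    def rollback(self):
        # Replay before-images in reverse order to restore the prior state.
        for key, old in reversed(self.undo_log):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.undo_log.clear()

db = {"A": 500, "B": 200}
txn = Transaction(db)
txn.write("A", db["A"] - 100)       # debit applied in memory
txn.rollback()                      # simulate a failure before the credit
assert db == {"A": 500, "B": 200}   # no partial update survives
```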

Consistency

In the context of ACID transactions, consistency requires that each transaction preserves the database's invariants, transforming it from one valid state to another by adhering to predefined rules and constraints. This ensures that the database remains valid and reliable after every successful transaction, as transactions must obey the legal protocols governing the system's state. The scope of consistency applies to the entire database upon transaction completion, encompassing not only the modified data but all interrelated elements that could affect overall validity. Invariants preserved by consistency include declarative constraints defined within the database schema, such as primary key constraints, which uniquely identify each row in a table and prevent duplicate or null values in the key column; foreign key constraints, which enforce referential integrity by ensuring that values in a referencing column match existing values in the referenced primary key column; and CHECK constraints, which restrict data to acceptable values through Boolean expressions, such as limiting a salary field to a range of $15,000 to $100,000. Additionally, consistency extends to application-level rules, which are business logic invariants enforced by the application code rather than the database engine, such as maintaining non-negative account balances in financial systems. Consistency is violated when a transaction breaches any invariant, resulting in an invalid database state, such as the loss of referential integrity where a foreign key points to a non-existent row, potentially creating orphaned records or broken relationships between tables. Atomicity supports this by ensuring the consistent state is applied as a complete unit, without partial updates.
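These declarative invariants can be seen in action with a small, self-contained sketch using Python's built-in sqlite3 module; the schema and values are invented for illustration. The engine rejects any statement that would violate a CHECK or foreign key constraint, keeping the database in a valid state.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.execute("""CREATE TABLE accounts (
    id      INTEGER PRIMARY KEY,
    balance INTEGER NOT NULL CHECK (balance >= 0))""")
conn.execute("""CREATE TABLE transfers (
    id         INTEGER PRIMARY KEY,
    account_id INTEGER NOT NULL REFERENCES accounts(id))""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

try:
    # Violates the CHECK constraint: the balance would become negative.
    conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

try:
    # Violates referential integrity: account 99 does not exist.
    conn.execute("INSERT INTO transfers VALUES (1, 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```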

Isolation

Isolation in the context of ACID properties ensures that concurrent transactions execute in a manner that appears sequential, as if they were performed one at a time, thereby preventing interference between them and maintaining the integrity of the database state. This property is fundamental to achieving correct concurrent access in database systems, where multiple transactions may overlap in time without producing anomalous results. The theoretical foundation of isolation lies in the concept of serializability, which defines a concurrent schedule of transactions as serializable if it is equivalent to some serial execution of those transactions in a specific order, preserving the outcome as if no concurrency occurred. To achieve isolation, database systems must guard against specific concurrency anomalies that can arise from partial visibility of changes across transactions. A dirty read occurs when one transaction reads data modified by another transaction that has not yet committed, potentially incorporating uncommitted and later rolled-back changes. Non-repeatable reads happen when a transaction rereads data previously accessed within the same transaction but obtains different values due to modifications committed by an intervening transaction. Phantom reads arise when a transaction executes a query that returns a set of rows, but a subsequent execution of the same query within the transaction yields a different set, typically due to insertions or deletions by concurrent transactions. The ANSI SQL standard defines four isolation levels to balance concurrency and consistency, each specifying which anomalies are permitted and thus trading off performance for stricter guarantees. Read Uncommitted, the least strict level, allows all anomalies including dirty reads, offering maximum concurrency but minimal isolation. Read Committed prevents dirty reads by ensuring reads only from committed data but permits non-repeatable and phantom reads, providing a common default for many systems due to its efficiency. Repeatable Read blocks dirty and non-repeatable reads by holding read locks or using versioning but may still allow phantom reads, enhancing consistency at the cost of potential deadlocks. Serializable, the strictest level, prevents all listed anomalies by enforcing full serializability, though it can reduce throughput in high-concurrency environments. These levels are implemented with varying overhead, where stricter isolation typically requires more locking or validation, impacting performance. Beyond read anomalies, isolation also mitigates write-related issues such as lost updates and write skew to ensure serializable outcomes. A lost update occurs when two transactions read the same initial value, each computes a modification independently, and then overwrites the data such that one transaction's update is discarded without awareness, violating the intended atomic change. Isolation mechanisms prevent this by serializing conflicting writes, often through exclusive locks on affected items. Write skew, a more subtle anomaly, happens when two transactions read overlapping but non-identical sets of data, each validates a condition based on those reads, and then performs writes to disjoint items that collectively violate a global constraint, such as a check involving multiple records. Serializable isolation avoids write skew by ensuring the overall execution is equivalent to a serial order, thereby upholding application invariants that span multiple transactions.
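The lost-update anomaly can be reproduced in a few lines of plain Python by fixing the interleaving by hand; there is no database here, just two read-modify-write sequences on a shared record with no isolation at all, which is exactly the interference that exclusive locks or conflict detection would prevent.

```python
# A minimal illustration of the lost-update anomaly: two interleaved
# read-modify-write sequences on a shared record, with no isolation.
# The schedule is fixed by hand to make the interleaving explicit.

store = {"stock": 2}

t1_read = store["stock"]        # T1 reads 2
t2_read = store["stock"]        # T2 also reads 2, before T1 writes

store["stock"] = t1_read - 1    # T1 decrements and writes 1
store["stock"] = t2_read - 1    # T2 overwrites with 1: T1's update is lost

print(store["stock"])           # 1: two units were sold, but stock fell by one
```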

Durability

Durability ensures that the effects of a committed transaction are permanently preserved, even in the face of failures such as crashes or power losses, thereby guaranteeing that the database state remains consistent with the committed changes. This property relies on transferring transaction outcomes from volatile memory to non-volatile storage, such as disk drives, to prevent loss of data despite interruptions. A primary mechanism for achieving durability is write-ahead logging (WAL), in which all modifications made by a transaction are recorded sequentially in a log file on durable storage before the commit operation is acknowledged to the application. Under WAL, the log entries capture the before and after images of affected data, ensuring that the commit point is only declared after the relevant log records have been flushed to disk, thus making the changes recoverable. This approach minimizes the risk of partial updates by prioritizing log writes over direct data page modifications. During system recovery after a crash, the recovery manager scans the WAL to replay (redo) operations from committed transactions that were not fully persisted to the main data structures, while reversing (undoing) any uncommitted changes to restore a consistent state. This process leverages log sequence numbers to identify the exact point of failure and apply only necessary operations efficiently. The ARIES (Algorithm for Recovery and Isolation Exploiting Semantics) recovery method exemplifies a standard approach to implementing durability, integrating WAL with support for fine-granularity locking, partial rollbacks, and semantic recovery actions to optimize performance and correctness. ARIES ensures that recovery is both atomic—treating the entire restart as an indivisible unit—and efficient, by avoiding unnecessary redo or undo steps through analysis of log records. This framework has influenced many modern database systems, providing a robust foundation for durable transaction processing.
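A minimal sketch of the WAL discipline, in Python with an invented record format: the key point is that the log record must reach stable storage (via os.fsync) before the commit is acknowledged, so the record survives a crash and recovery can redo the writes against the data pages.

```python
# A minimal write-ahead-log sketch: the commit is acknowledged only after the
# log record reaches stable storage. The JSON record format is invented.
import json
import os

def log_commit(log_path, txn_id, writes):
    record = json.dumps({"txn": txn_id, "writes": writes, "state": "commit"})
    with open(log_path, "a") as log:
        log.write(record + "\n")
        log.flush()                 # push Python's buffer to the OS
        os.fsync(log.fileno())      # force the OS to persist the bytes to disk
    # Only now is it safe to report "committed": the record survives a crash,
    # and recovery can redo these writes against the data pages.

log_commit("wal.log", 7, {"A": 400, "B": 300})
```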

Illustrative Examples

Atomicity Scenario

Consider a banking transaction where $100 is transferred from account A, with an initial balance of $500, to account B, with an initial balance of $200. This operation consists of two steps: debiting $100 from A and crediting $100 to B. In the success case, both the debit and credit operations execute fully and commit together, resulting in A having a balance of $400 and B having a balance of $300, with these updates becoming permanently visible to the system. In the failure case, if a crash occurs after the debit from A but before the credit to B, the transaction rolls back entirely, restoring A to $500 and leaving B at $200, ensuring no partial changes persist. This atomicity principle is exemplified in real-world ATM withdrawals, where a transaction debits the user's account only if cash is successfully dispensed; a power failure mid-process triggers a rollback to prevent unauthorized debits without delivery.
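This transfer scenario can be expressed directly with Python's sqlite3 module, where the connection's context manager commits on success and rolls back on any exception; the simulated crash between the debit and the credit leaves both balances untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 500), ("B", 200)])
conn.commit()

def transfer(amount, fail_midway=False):
    with conn:  # transaction: commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance - ? "
                     "WHERE name = 'A'", (amount,))
        if fail_midway:
            raise RuntimeError("simulated crash between debit and credit")
        conn.execute("UPDATE accounts SET balance = balance + ? "
                     "WHERE name = 'B'", (amount,))

try:
    transfer(100, fail_midway=True)
except RuntimeError:
    pass

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'A': 500, 'B': 200} — the partial debit did not persist
```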

Consistency Breach

In order processing systems, a typical transaction involves checking the available stock quantity for an item, deducting the ordered amount if sufficient, and updating the inventory records accordingly. This ensures that the business invariant of non-negative stock levels is maintained throughout the operation. Without the consistency property, concurrent sales transactions could lead to overselling, where multiple orders are processed against the same limited stock, resulting in negative inventory quantities that violate the system's integrity constraints. For instance, if two customers simultaneously order the last unit of a product, both transactions might read the initial stock level of one, proceed to deduct it, and commit, leaving the inventory at negative one after both updates. The consistency property prevents this breach by enforcing that each transaction validates and preserves all database invariants, such as non-negative stock, before allowing a commit. To resolve potential violations, the transaction includes a pre-commit check: if the stock would drop below zero after deduction, the entire transaction rolls back, maintaining the database in a valid state. This mechanism, integral to ACID compliance, ensures that only valid transformations from one consistent state to another are permitted. Another application of consistency involves ensuring that total sales records align precisely with revenue entries, preventing discrepancies where recorded sales exceed actual income due to partial or erroneous updates. By treating these linked operations as a single transaction, the system upholds referential integrity and business rules across related tables.
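One way to sketch the pre-commit stock guard described above is a CHECK constraint enforcing the non-negative invariant combined with a conditional UPDATE whose row count reports whether the deduction applied; sqlite3 is used for illustration, and the table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, "
             "qty INTEGER NOT NULL CHECK (qty >= 0))")
conn.execute("INSERT INTO inventory VALUES ('widget', 1)")
conn.commit()

def place_order(item, amount):
    with conn:  # transaction: commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE inventory SET qty = qty - ? WHERE item = ? AND qty >= ?",
            (amount, item, amount))
        return cur.rowcount == 1   # 0 rows touched means insufficient stock

print(place_order("widget", 1))    # True: the last unit is sold
print(place_order("widget", 1))    # False: the invariant blocks overselling
```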

Isolation Anomaly

In a hotel reservation system, consider a scenario where only one room remains available, with the database recording an availability count of 1. Two concurrent transactions, initiated by users A and B, each query the availability and read the value as 1 before proceeding to book the room. If the system operates at a low isolation level, such as Read Uncommitted or Read Committed, both transactions may update the count to 0 independently without observing each other's changes, resulting in overbooking where both reservations are confirmed despite the single room. This overbooking exemplifies isolation anomalies, including dirty reads—where a transaction reads uncommitted modifications from another transaction—and phantom reads, where a subsequent query within the same transaction detects new or altered rows (e.g., an inserted booking record) that were not present initially. These phenomena arise because lower isolation levels permit partial visibility of concurrent changes, violating the isolation property by allowing transactions to interfere as if executing out of order. To resolve such anomalies, the serializable isolation level must be employed, which enforces a serial order of transactions through mechanisms like locking or conflict detection, ensuring that user B's transaction sees the updated availability of 0 after user A's commit and fails the booking accordingly. This highest isolation level prevents all standard anomalies by guaranteeing equivalence to some serial execution history. However, serializable isolation introduces performance trade-offs, as it imposes locking overhead or requires transaction aborts and retries upon detecting conflicts, potentially reducing concurrency and throughput in high-contention environments compared to weaker levels.
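A hedged sketch of the serializable-with-retries approach, assuming a PostgreSQL server reachable through psycopg2; the DSN, table, and column names are placeholders. A serialization failure aborts the conflicting transaction, and the loop retries it, so at most one of the two concurrent bookings can take the last room.

```python
# Sketch only: requires a running PostgreSQL server and the psycopg2 package.
import psycopg2
from psycopg2 import errors

def book_room(dsn, room_type):
    conn = psycopg2.connect(dsn)                 # e.g. "dbname=hotel user=app"
    conn.set_session(isolation_level="SERIALIZABLE")
    for _ in range(5):                           # bounded retries on conflict
        try:
            with conn, conn.cursor() as cur:     # commits on normal exit
                cur.execute("SELECT available FROM rooms WHERE type = %s",
                            (room_type,))
                (available,) = cur.fetchone()
                if available == 0:
                    return False                 # booking cleanly refused
                cur.execute("UPDATE rooms SET available = available - 1 "
                            "WHERE type = %s", (room_type,))
                return True
        except errors.SerializationFailure:
            continue                             # conflict detected: retry
    return False
```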

Durability Test

In an e-commerce platform, a customer completes an order purchase, triggering a database transaction that deducts the item from inventory and records the payment details. Upon successful commit, the system confirms the order to the user, relying on the durability property to ensure these changes are permanently stored regardless of subsequent disruptions. Consider a failure scenario where a power outage strikes immediately after the transaction commits but before all data pages are fully written to disk. During recovery, the database employs write-ahead logging (WAL), a technique where changes are first appended to a sequential log file and flushed to persistent storage before the commit is acknowledged. This allows the system to replay (redo) the logged operations from the WAL, restoring the committed inventory deduction and payment record without duplication or loss, thereby preventing issues like oversold stock. In contrast, a non-durable configuration without proper flushing—such as asynchronous or delayed commit modes—risks data loss if a failure occurs before the log reaches stable storage. Here, the committed order could vanish from the database, forcing the customer to re-enter details and potentially leading to disputes over payment or inventory discrepancies. Oracle's documentation, for instance, highlights that uncommitted or unflushed changes during outages may not survive, underscoring the need for synchronous redo log writes to uphold durability. Modern cloud storage systems enhance durability beyond traditional WAL by replicating data across multiple zones or regions, achieving probabilities like 99.999999999% annual durability through erasure coding and automatic redundancy. Google Cloud Storage, for example, stores objects redundantly in at least two zones before confirming a write, ensuring committed data persists even if an entire zone fails.
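Continuing the WAL sketch from the Durability section above, redo recovery after a crash can be sketched as a scan that replays only the records marked committed; the log format is the invented one used there, so uncommitted or unflushed records are simply never applied.

```python
# Hedged sketch of redo recovery using the invented record format from the
# earlier WAL example: replay committed writes, skip everything else.
import json
import os

def recover(log_path):
    state = {}
    if not os.path.exists(log_path):
        return state                             # nothing logged: empty state
    with open(log_path) as log:
        for line in log:
            record = json.loads(line)
            if record["state"] == "commit":
                state.update(record["writes"])   # redo committed writes only
    return state

print(recover("wal.log"))   # e.g. {'A': 400, 'B': 300} after the earlier commit
```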

Implementation Approaches

Concurrency Control Techniques

Concurrency control techniques in single-node database systems are essential for ensuring the isolation and consistency properties of ACID transactions by managing simultaneous access to shared data. These methods prevent conflicts that could lead to inconsistent views or lost updates, allowing multiple transactions to execute concurrently without violating serializability. Two primary approaches dominate: locking-based protocols, which restrict access to prevent conflicts, and multiversion concurrency control (MVCC), which maintains multiple data versions to enable non-blocking reads. Locking mechanisms form the foundation of pessimistic concurrency control, where transactions acquire locks on data items before accessing them to avoid potential conflicts. Shared locks (also called read locks) permit multiple transactions to read the same data item simultaneously but block writes, while exclusive locks (write locks) grant sole access to a transaction for both reading and writing, preventing any concurrent access. These lock types ensure that conflicting operations—such as two writes or a read and write on the same item—do not overlap. To achieve serializability, the two-phase locking (2PL) protocol structures lock acquisition into a growing phase, where locks are obtained as needed, followed by a shrinking phase where locks are released only after the transaction commits or aborts, with no new locks acquired in between. This protocol guarantees conflict serializability, meaning the concurrent execution produces results equivalent to some serial order of transactions. A key challenge with locking is the potential for deadlock, where two or more transactions each hold locks the other needs, forming a circular wait. Deadlock detection typically involves constructing a wait-for graph, where nodes represent transactions and directed edges indicate one transaction waiting for a lock held by another; cycles in this graph signal a deadlock. Detection algorithms, such as depth-first search on the graph, identify cycles periodically or on lock requests, after which the system resolves the deadlock by aborting one or more involved transactions, often selecting the youngest or least costly one based on priority. Prevention strategies, like deadlock avoidance via timestamp-based schemes, are less common due to their overhead but can be used in low-contention environments. Multiversion concurrency control (MVCC) addresses locking's limitations by maintaining multiple versions of each data item, each tagged with a timestamp or transaction identifier, allowing readers to access a consistent snapshot without blocking writers. When a transaction updates a data item, it creates a new version rather than overwriting the existing one; readers then select the appropriate version based on their start time, ensuring they see only committed changes from prior transactions. This approach provides snapshot isolation, a weaker but practical form of isolation where each transaction operates on a consistent view of the database as of its starting point, avoiding many anomalies while permitting higher concurrency than strict two-phase locking. PostgreSQL implements MVCC by appending new row versions to the table on updates and using visibility rules based on transaction IDs to hide uncommitted or aborted versions from other transactions, with a vacuum process periodically cleaning up obsolete versions to manage storage. In comparison, locking protocols are pessimistic, assuming conflicts are likely and thus blocking access to serialize operations, which can reduce throughput in read-heavy workloads due to contention on locks.
MVCC, conversely, adopts an optimistic stance by allowing concurrent reads and writes without immediate conflict checks, deferring validation to commit time via version selection, which minimizes blocking but may lead to aborts if write skews occur under snapshot isolation. These techniques support isolation by preventing dirty reads and non-repeatable reads, while contributing to consistency by enforcing rules that maintain database invariants across concurrent executions. The overhead of these methods impacts throughput significantly. Locking incurs costs from lock management and wait times, reducing throughput under high contention due to frequent aborts and retries from lock thrashing. MVCC trades version storage and garbage collection overhead for improved read concurrency, achieving higher throughput in mixed read-write workloads compared to locking, though write-heavy scenarios may suffer from version proliferation. Both ensure ACID isolation and consistency but require tuning, such as adjusting isolation levels, to balance performance and correctness.
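Deadlock detection on a wait-for graph reduces to cycle detection, as noted above. The following self-contained sketch uses depth-first search over an adjacency list, with transactions as nodes and "waits for" edges; the returned cycle is the set of candidate victims to abort.

```python
# Deadlock detection on a wait-for graph: an edge T1 -> T2 means T1 waits for
# a lock that T2 holds, and any cycle means deadlock. DFS-based sketch.

def find_deadlock(wait_for):
    visiting, done = set(), set()

    def dfs(txn, path):
        visiting.add(txn)
        path.append(txn)
        for held_by in wait_for.get(txn, []):
            if held_by in visiting:              # back edge: cycle found
                return path[path.index(held_by):]
            if held_by not in done:
                cycle = dfs(held_by, path)
                if cycle:
                    return cycle
        visiting.discard(txn)
        done.add(txn)
        path.pop()
        return None

    for txn in wait_for:
        if txn not in done:
            cycle = dfs(txn, [])
            if cycle:
                return cycle                     # candidate victims to abort
    return None

# T1 waits on T2 and T2 waits on T1: a classic two-transaction deadlock.
print(find_deadlock({"T1": ["T2"], "T2": ["T1"]}))   # ['T1', 'T2']
```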

Distributed Transaction Protocols

Distributed transaction protocols ensure that ACID properties are maintained across multiple independent database systems or nodes, coordinating actions to achieve atomicity, consistency, isolation, and durability in a networked environment. These protocols address the challenges of coordinating resource managers (RMs) through a transaction manager (TM), preventing partial commits that could lead to inconsistent states. Seminal approaches focus on atomic commitment, where all participants either commit or abort a transaction collectively. The two-phase commit (2PC) protocol is a foundational method for atomic commitment in distributed systems, first described by Jim Gray in 1978. It operates in two distinct phases orchestrated by a coordinator, which acts as the central decision-maker among participating nodes. In the first phase, known as the prepare or voting phase, the coordinator sends a prepare request to all participants, prompting each to execute its local transaction portion, write prepare logs, and vote yes (ready to commit) or no (must abort). If any participant votes no or fails to respond, the coordinator decides to abort; otherwise, upon unanimous yes votes, it proceeds to the second phase. In the commit phase, the coordinator broadcasts a commit directive, and participants finalize by releasing locks and confirming completion, or perform an abort if needed. This ensures atomicity by guaranteeing all-or-nothing outcomes, with the coordinator logging its decision for recovery. However, 2PC is blocking: if the coordinator fails during the commit phase, participants remain in an uncertain prepared state, unable to proceed unilaterally until recovery, potentially leading to indefinite waits. To mitigate 2PC's blocking issues, the three-phase commit (3PC) protocol, proposed by Dale Skeen in 1981, introduces an additional preparation phase for non-blocking termination under site failures. Building on 2PC, 3PC adds a pre-commit phase after the prepare phase but before the final decision: if all participants are prepared, the coordinator sends a pre-commit message, allowing participants to enter a ready-to-commit state without yet committing. Only then does the commit phase occur, with aborts handled similarly but without blocking operational sites. This structure ensures that no participant is left in a state where it must wait indefinitely for a failed or partitioned coordinator, as operational sites can resolve the outcome via a termination protocol involving backup coordinators or majority decisions. 3PC requires more message exchanges—typically three rounds—making it more resilient to single-point failures but still vulnerable to network partitions where communication is severed. Distributed transaction protocols face inherent challenges, particularly from network partitions and latency. Network partitions, where subsets of nodes lose connectivity, exacerbate blocking in 2PC, as partitioned components cannot confirm the global decision, reducing system availability until repair. Latency arises from the synchronous message rounds—2PC requires at least two round trips across potentially unreliable networks—amplifying delays in high-latency environments like wide-area networks, where even minor failures propagate system-wide stalls. These issues trade off availability for consistency, often necessitating optimizations like presumed-abort variants. In enterprise systems, the XA standard, specified by X/Open in 1991, provides a standardized interface for implementing 2PC-based distributed transactions across heterogeneous resources.
XA defines APIs for transaction managers to coordinate resource managers (e.g., databases), enabling functions like xa_prepare for the voting phase and xa_commit for finalization, ensuring atomic updates in environments like Java EE. Widely adopted in systems such as JTA, XA supports recovery and interoperability, though it inherits 2PC's latency and blocking concerns in distributed setups.
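A compact sketch of 2PC from the coordinator's perspective: participants are modeled here as local objects with prepare/commit/abort methods, standing in for networked resource managers that would also durably log each step. The all-or-nothing decision rule is the essential part.

```python
# Illustrative two-phase commit coordinator. In a real system the calls below
# would be network messages to resource managers, with logging at each step.

def two_phase_commit(participants):
    # Phase 1 (voting): every participant must vote yes to proceed.
    votes = []
    for p in participants:
        try:
            votes.append(p.prepare())    # participant logs its state and votes
        except Exception:
            votes.append(False)          # no response counts as a "no" vote

    # Phase 2 (completion): unanimous yes -> commit, otherwise abort all.
    if all(votes):
        for p in participants:
            p.commit()                   # finalize and release locks
        return "committed"
    for p in participants:
        p.abort()                        # undo any prepared work
    return "aborted"

class Participant:
    def __init__(self, name, ok=True):
        self.name, self.ok = name, ok
    def prepare(self):  return self.ok
    def commit(self):   print(self.name, "commit")
    def abort(self):    print(self.name, "abort")

print(two_phase_commit([Participant("db1"), Participant("db2", ok=False)]))
# db1 abort / db2 abort -> "aborted": one "no" vote aborts everyone
```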

Broader Context

ACID versus BASE Paradigm

The BASE paradigm emerged as an alternative to the ACID model, emphasizing availability in distributed systems where strict consistency is often sacrificed for scalability. BASE stands for Basically Available, meaning the system remains operational even under high load or partial failures; Soft state, indicating that data may change over time without explicit updates; and Eventually consistent, where replicas converge to a consistent state after some period, rather than immediately. This approach aligns with the CAP theorem, which posits that in the presence of network partitions, a distributed system can guarantee at most two of consistency, availability, and partition tolerance, leading BASE systems to prioritize availability and partition tolerance over consistency. In contrast, ACID transactions enforce strong guarantees of atomicity, consistency, isolation, and durability, making them ideal for domains like financial systems where data accuracy and immediate correctness are paramount to prevent errors such as incorrect balances or lost records. BASE, however, suits high-availability web applications, as exemplified by Amazon's Dynamo, which uses quorum-based replication to ensure reads and writes succeed despite node failures, trading immediate consistency for throughput and availability. These differences highlight BASE's relaxation of ACID's core properties, particularly consistency and isolation, to enable horizontal scaling without the overhead of two-phase commits or locking mechanisms. The trade-offs between ACID and BASE revolve around rigidity versus flexibility: ACID's stringent rules can introduce bottlenecks in large-scale, partitioned environments by requiring synchronous coordination across all nodes, potentially reducing availability during failures. BASE mitigates this through eventual consistency models, employing techniques like gossip protocols for efficient coordination and anti-entropy mechanisms to resolve discrepancies asynchronously, thereby supporting massive scalability at the cost of temporary data staleness. This flexibility proves advantageous in scenarios where the application demands uninterrupted access over perfect consistency. Hybrid approaches bridge these paradigms by offering tunable consistency, allowing applications to adjust guarantees per operation. For instance, Apache Cassandra enables developers to specify consistency levels—such as ONE for availability or ALL for strong consistency—balancing ACID-like precision with BASE's resilience in distributed clusters. This configurability accommodates diverse workloads, from applications requiring occasional strong reads to those tolerating eventual convergence.
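The quorum arithmetic behind Dynamo-style tunable consistency is simple to state: with N replicas, a read quorum of R and a write quorum of W are guaranteed to overlap in at least one replica whenever R + W > N, so a read quorum always intersects the most recent write quorum. A one-line check:

```python
# Dynamo-style quorum rule: R + W > N guarantees that every read quorum
# intersects the latest write quorum in at least one replica.

def quorum_overlaps(n, r, w):
    return r + w > n

print(quorum_overlaps(3, 2, 2))   # True: read-your-writes configuration
print(quorum_overlaps(3, 1, 1))   # False: fast, but only eventually consistent
```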

ACID in Contemporary Systems

In modern NoSQL databases, ACID properties have been increasingly supported to address the needs of applications requiring transactional guarantees without sacrificing flexibility. MongoDB introduced multi-document ACID transactions in version 4.0, released in 2018, allowing operations across multiple documents, collections, and even databases within a single cluster, while providing snapshot isolation for reads. This feature enables developers to handle complex transactional logic in a document-oriented model, reducing the need for application-level compensation mechanisms. Similarly, Google Cloud Spanner leverages the TrueTime API, a globally synchronized clock with bounded uncertainty derived from GPS and atomic clocks, to achieve external consistency and full ACID compliance at planetary scale, ensuring that transactions appear to occur in a total order despite geographical distribution. In cloud computing environments, serverless architectures integrate ACID transactions seamlessly with auto-scaling capabilities. Amazon Aurora Serverless v2, for instance, combines relational ACID guarantees—such as atomic commits and durable writes—with on-demand scaling from 0 to 256 Aurora Capacity Units (ACUs) and automatic pause at 0 when inactive, as of August 2025, allowing integration with AWS Lambda for event-driven applications that process thousands of transactions per second without manual provisioning. This setup maintains full SQL compatibility and high availability across multiple Availability Zones, supporting workloads like e-commerce order processing where consistency is paramount amid variable demand. Adapting ACID properties in contemporary systems often involves trade-offs with horizontal scaling, where distributing data across nodes can introduce latency and coordination overhead, potentially compromising availability or performance. Solutions include sharding with two-phase commit protocols for short transactions and the saga pattern for long-running ones, which decomposes distributed operations into a sequence of local ACID transactions, each followed by a compensating action if subsequent steps fail, thus approximating atomicity without global locks. Originating from early work on fault-tolerant workflows, sagas enable scalability in microservices architectures by avoiding the performance bottlenecks of traditional distributed transactions. Looking ahead, blockchain technologies are being integrated with traditional databases to provide ACID-like guarantees in distributed ledgers, particularly for applications requiring tamper-proof auditability and global consistency. Recent developments, such as hybrid blockchain-database systems, use consensus mechanisms like proof-of-stake to ensure atomicity and durability across untrusted nodes, while sidechains or oracles bridge with ACID-compliant backends for interoperability in scenarios like supply-chain tracking. These integrations aim to extend ACID properties to decentralized environments, though challenges like transaction finality delays persist.
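A minimal sketch of the saga pattern described above: each local step is paired with a compensating action, and a failure triggers the compensations in reverse order. The step names here are illustrative, not from any particular framework.

```python
# Illustrative saga: a sequence of local transactions, each paired with a
# compensating action that is run in reverse order if a later step fails.

def run_saga(steps):
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):   # undo earlier local commits
            compensate()
        return "compensated"
    return "completed"

def reserve_stock():   print("stock reserved")
def release_stock():   print("stock released")
def charge_card():     raise RuntimeError("payment declined")
def refund_card():     print("card refunded")

print(run_saga([(reserve_stock, release_stock),
                (charge_card, refund_card)]))
# prints "stock reserved" then "stock released"; returns "compensated"
```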

References

  1. [1]
    Jim Gray Additional Materials - A.M. Turing Award Winner
    Durability: Once a transaction commits, the changes it made (writes and messages sent) survive any system failures. These are referred to as the ACID properties ...
  2. [2]
    [PDF] Jim Gray - The Transaction Concept: Virtues and Limitations
    The transaction concept emerges with the following properties: Consistency: the transaction must obey legal protocols. Atomicity: it either happens or it ...
  3. [3]
    (PDF) Principles of Transaction-Oriented Database Recovery
On Jan 1, 1983, T. Haerder and others published Principles of Transaction-Oriented Database Recovery.
  4. [4]
    Why ACID transactions matter in an eventually consistent world
    Aug 9, 2016 · In 1983, Andreas Reuter and Theo Härder coined the term ACID to describe the properties of a reliable transactional system.
  5. [5]
    [PDF] Principles of Transaction-Oriented Database Recovery
    In this paper, a terminological framework is provided for describing different transaction- oriented recovery schemes for database systems in a conceptual ...
  6. [6]
    Jim Gray at IBM: The transaction processing revolution
    Jun 1, 2008 · Jim Gray defined and developed the fundamental concepts and techniques that underlie on-line transaction processing systems. Jim Gray's ...
  7. [7]
  8. [8]
    [PDF] ARIES: A Transaction Recovery Method Supporting Fine-Granularity ...
In this paper we introduce a new recovery method, called ARIES (Algorithm for Recovery and Isolation Exploiting Semantics), which fares very well with ...
  9. [9]
    Primary and foreign key constraints - SQL Server - Microsoft Learn
Feb 4, 2025 · Primary keys and foreign keys are two types of constraints that can be used to enforce data integrity in SQL Server tables.
  10. [10]
    Unique constraints and check constraints - SQL - Microsoft Learn
    Feb 4, 2025 · UNIQUE constraints and CHECK constraints are two types of constraints that can be used to enforce data integrity in SQL Server tables.
  11. [11]
    [PDF] The Serializability of Concurrent Database Updates
    Serializability means a sequence of updates is as if users took turns, executing their entire transactions individually. This ensures the overall system is ...
  12. [12]
    Documentation: 18: 13.2. Transaction Isolation - PostgreSQL
    dirty read. A transaction reads data written by a concurrent uncommitted transaction. · nonrepeatable read. A transaction re-reads data it has previously read ...
  13. [13]
    [PDF] A Critique of ANSI SQL Isolation Levels - Microsoft
Abstract: ANSI SQL-92 [MS, ANSI] defines Isolation Levels in terms of phenomena: Dirty Reads, Non-Repeatable Reads, and Phantoms. This paper shows that ...
  14. [14]
    [PDF] adya-phd.pdf - Programming Methodology Group
Mar 18, 1999 · Current commercial databases allow application programmers to trade off consistency for performance. However, existing definitions of weak ...
  15. [15]
    ARIES: a transaction recovery method supporting fine-granularity ...
    ARIES: a transaction recovery method supporting fine-granularity locking and partial rollbacks using write-ahead logging. Editor: Gio Wiederhold.
  16. [16]
    What is a Transaction? - Win32 apps - Microsoft Learn
    Jan 7, 2021 · For example, a bank transfer must be an atomic set of two operations: a debit from one account and a credit to another account.
  17. [17]
    Transactions
    A transaction is a logical, atomic unit of work that contains one or more SQL statements. A transaction groups SQL statements so that they are either all ...
  18. [18]
    Properties of transactions - IBM
    Transactions provide the ACID properties: For example, consider a transaction that transfers money from one account to another.
  19. [19]
    Transactions (MFC Data Access) - Microsoft Learn
    Aug 2, 2021 · Rollback allows a recovery from the changes and returns the database to the pretransaction state. For example, in an automated banking ...
  20. [20]
    What is Transaction Management? - IBM
Returning to the ATM example, atomicity prevents a transaction from debiting money from a user's bank account before dispensing the actual cash.
  21. [21]
    [PDF] ACID Properties of Transactions - Stony Brook Computer Science
Banking Example (cont'd). Global consistency: sum of all account balances at bank branches = total assets recorded at main office.
  22. [22]
    Everything you always wanted to know about SQL isolation levels ...
    Feb 8, 2024 · Transactions are completely isolated from each other, effectively serializing access to the database to prevent dirty reads, non-repeatable ...
  23. [23]
    Exploring Read Committed and Repeatable Read Isolation Levels
    Jul 17, 2024 · Dirty Reads: Occur when a transaction reads data that has been modified by another transaction but not yet committed. · Non-Repeatable Reads: ...
  24. [24]
    ACID Properties in DBMS - GeeksforGeeks
Sep 8, 2025 · Transactions are fundamental operations that allow us to modify and retrieve data. · ACID stands for Atomicity, Consistency, Isolation, and ...
  25. [25]
    Documentation: 18: 28.3. Write-Ahead Logging (WAL) - PostgreSQL
    Write-Ahead Logging (WAL) is a standard method for ensuring data integrity. A detailed description can be found in most (if not all) books about transaction ...
  26. [26]
    Database Concepts
Summary of durability, non-durable transactions, and data loss examples from the Oracle Database 19c transactions documentation.
  27. [27]
    Data availability and durability  |  Cloud Storage  |  Google Cloud
Summary of Cloud Storage durability and replication.
  28. [28]
    The notions of consistency and predicate locks in a database system
The notions of consistency and predicate locks in a database system. K. P. Eswaran, J. N. Gray, R. A. Lorie, I. L. Traiger.
  29. [29]
    Multiversion concurrency control—theory and algorithms
    This paper presents a theory for analyzing the correctness of concurrency control algorithms for multiversion database systems.
  30. [30]
    [PDF] Revisiting optimistic and pessimistic concurrency control - HPE Labs
    May 26, 2016 · Abstract: Optimistic concurrency control relies on end-of-transaction validation rather than lock acquisition prior to data accesses.
  31. [31]
    [PDF] The Transaction Concept: Virtues And Limitations
The Transaction Concept: Virtues and Limitations. Jim Gray. Tandem Computers Incorporated. 19333 Vallco Parkway. Cupertino Ca. ABSTRACT: A transaction ...
  32. [32]
    Site optimal termination protocols for a distributed database under ...
commit protocol is a blocking protocol [SKEE-81b]. It is this blocking property that degrades the performance of the two-phase commit protocol in the pres- ...
  33. [33]
    [PDF] Nonblocking Commit Protocols* - UT Computer Science
We presented two such nonblocking protocols: the three phase central site and the three phase distributed commit protocols. The three phase protocols were ...
  34. [34]
    [PDF] Technical Standard Distributed Transaction Processing: The XA ...
    This document specifies the bidirectional interface between a transaction manager and resource manager (the XA interface). This document is a CAE specification ...
  35. [35]
    BASE: An Acid Alternative - ACM Queue
Jul 28, 2008 · ACID database transactions greatly simplify the job of the application developer. As signified by the acronym, ACID transactions provide the following ...
  36. [36]
    [PDF] Brewer's Conjecture and the Feasibility of
    Seth Gilbert*. Nancy Lynch*. Abstract. When designing distributed web services, there are three properties that are commonly desired: consistency, avail ...
  37. [37]
    [PDF] Dynamo: Amazon's Highly Available Key-value Store
    This paper presents the design and implementation of Dynamo, another highly available and scalable distributed data store built for Amazon's platform. Dynamo is ...
  38. [38]
    Dynamo | Apache Cassandra Documentation
Tunable Consistency. Cassandra supports a per-operation tradeoff between consistency and availability through Consistency Levels.
  39. [39]
    Transactions - Database Manual - MongoDB Docs
MongoDB supports distributed transactions. With distributed transactions, transactions can be used across multiple operations, collections, databases, ...
  40. [40]
    MongoDB ACID Transactions Whitepaper
    Support for multi-document ACID transactions debuted in the MongoDB 4.0 release in 2018, and were extended in 2019 with MongoDB 4.2 enabling Distributed ...
  41. [41]
    [PDF] Spanner: Google's Globally-Distributed Database
    Spanner is the first system to provide such guarantees at global scale. The key enabler of these properties is a new TrueTime API and its implementation. The ...
  42. [42]
    Spanner: TrueTime and external consistency
    TrueTime is a highly available, distributed clock that is provided to applications on all Google servers. TrueTime enables applications to generate ...
  43. [43]
    Amazon Aurora Serverless - AWS
Amazon Aurora Serverless is an on-demand, autoscaling configuration for Amazon Aurora. It automatically starts up, shuts down, and scales capacity up or down ...
  44. [44]
    Using Aurora Serverless v2 - AWS Documentation
Aurora Serverless v2 helps to automate the processes of monitoring the workload and adjusting the capacity for your databases. Capacity is adjusted ...
  45. [45]
    a Hybrid-Optimistic Inter-Blockchain Communication Protocol
    Jul 16, 2025 · The transactions are kept in a distributed ledger as a linked list of signed blocks. ... properties, but also considering the ACID properties of ...
  46. [46]
    Pattern: Saga - Microservices.io
A saga is a sequence of local transactions that update databases and trigger the next transaction, with compensating transactions if needed.
  47. [47]
    A Survey on the Integration of Blockchains and Databases - PMC
    Apr 24, 2023 · In this survey, we discuss the use of blockchain technology in the data management field and focus on the fusion system of blockchains and databases.
  48. [48]
    Recent developments and challenges using blockchain techniques ...
Dec 7, 2024 · IP Protection: Blockchain can be used to register and protect intellectual property rights, as seen in projects like Ascribe and Provenance.