
Database transaction

A database transaction is a logical unit of work in a database that encompasses a sequence of read and write operations, executed as an indivisible whole to ensure correctness and reliability across concurrent user activities. This concept, fundamental to relational and many non-relational databases, originated in the 1970s as an extension of single-user processing ideas to support multi-user environments, allowing multiple transactions to proceed simultaneously without interfering with one another. Transactions are typically delimited by begin and commit (or abort) commands, treating the enclosed operations as a single atomic action that either fully succeeds or is entirely rolled back in case of failure.

The reliability of database transactions is defined by the ACID properties—Atomicity, Consistency, Isolation, and Durability—which collectively guarantee valid data states despite errors, concurrency, or system crashes. Atomicity ensures that a transaction is treated as a single, indivisible unit: all operations complete successfully, or none take effect, preventing partial updates that could corrupt data. Consistency requires that a transaction brings the database from one valid state to another, preserving all defined integrity constraints such as keys, triggers, and business rules. Isolation provides the illusion that transactions execute serially, even when running concurrently, by managing locks and versions to avoid interference like dirty reads or lost updates. Durability mandates that once a transaction commits, its effects are permanently stored and survive subsequent failures, often achieved through write-ahead logging and checkpointing mechanisms.

These properties, first formalized as the ACID acronym in 1983 by Theo Härder and Andreas Reuter building on earlier work by Jim Gray, enable robust transaction processing in applications ranging from banking systems to e-commerce platforms, where data accuracy and availability are paramount. Transaction management involves techniques like two-phase commit for distributed systems and recovery protocols to handle failures, ensuring scalability in modern cloud-based databases while adhering to ACID guarantees.

Fundamentals

Definition and Purpose

A database transaction is defined as a sequence of one or more operations, such as reads and writes, performed on a database and treated as a single logical unit of work. This unit ensures that either all operations complete successfully, in which case the changes are permanently applied, or none are applied if any part fails, thereby maintaining the database in a consistent state. The term "logical unit" underscores that the transaction represents an indivisible block of work from the perspective of the application, abstracting away the underlying physical storage and access mechanisms.

The primary purpose of database transactions is to safeguard data reliability in the face of system failures and concurrent access by multiple users. By enabling recovery mechanisms, transactions prevent partial updates that could leave the database in an inconsistent or corrupted state, such as during crashes or power losses. Additionally, they provide isolation, allowing concurrent transactions to execute without interfering with one another, which is essential for multi-user environments where simultaneous operations are common. Overall, these features ensure data integrity, meaning the database remains accurate and trustworthy even under adverse conditions. Transactions achieve these goals through properties collectively known as ACID, which guarantee atomicity, consistency, isolation, and durability.

The concept of database transactions emerged in the 1970s amid the development of relational database management systems, particularly with IBM's System R project, initiated around 1974 at the IBM San Jose Research Laboratory. System R demonstrated the feasibility of a relational database with built-in transaction support, addressing the need for atomic operations to handle concurrency in production multi-user settings. This innovation was crucial as early databases transitioned from single-user to interactive, shared environments, where partial failures could otherwise compromise data reliability. An illustrative analogy is double-entry bookkeeping in financial records, where every entry must balance across accounts to preserve overall integrity, much like a transaction ensures balanced database changes.
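To make the unit-of-work idea concrete, the following minimal sketch uses Python's built-in sqlite3 module; the table name and schema are illustrative, not drawn from any particular system.

```python
import sqlite3

# Illustrative schema: a single accounts table with two rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

try:
    # The two updates below form one logical unit of work.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    conn.commit()    # both changes become permanent together
except sqlite3.Error:
    conn.rollback()  # if either update failed, neither takes effect
```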

ACID Properties

The ACID properties represent a set of fundamental guarantees that ensure the reliability and correctness of database transactions in the face of errors, failures, or concurrent access. Coined as an acronym in the early 1980s, ACID stands for Atomicity, Consistency, Isolation, and Durability, providing a framework for transaction processing that has become a cornerstone of relational database management systems (RDBMS). These properties were formalized to address the challenges of maintaining data integrity in multi-user environments, where transactions must behave as indivisible units while preserving the overall state of the database.

Atomicity ensures that a transaction is treated as an indivisible unit: either all of its operations are successfully completed, or none of them take effect, effectively rolling back any partial changes in case of failure. This property prevents databases from entering inconsistent states due to interruptions, such as system crashes or errors during execution, by leveraging mechanisms like transaction logs to undo uncommitted operations. For instance, in a bank transfer involving debiting one account and crediting another, atomicity guarantees that both actions occur together or not at all, avoiding scenarios where funds are deducted without being added elsewhere.

Consistency requires that a transaction bring the database from one valid state to another, enforcing all predefined rules, constraints, and conditions, such as primary keys, foreign keys, and check constraints. Before and after the transaction, the database must satisfy these invariants; if a transaction would violate them, it must be aborted to maintain semantic correctness. This property relies on application logic and declared constraints to define validity, ensuring that transactions do not corrupt the data—for example, preventing negative balances in an inventory system if business rules prohibit it.

Isolation ensures that concurrent transactions do not interfere with each other, making each transaction appear to execute serially even when running simultaneously. This prevents anomalies like dirty reads (reading uncommitted data), non-repeatable reads, or phantom reads, with the strongest level being serializability, where the outcome matches some sequential execution order. Isolation is achieved through concurrency control protocols, allowing multiple transactions to proceed without observing each other's intermediate states, thus preserving the illusion of atomic execution.

Durability guarantees that once a transaction has been committed, its changes are permanently persisted in the database, surviving any subsequent system failures, power losses, or crashes. This is typically implemented via write-ahead logging (WAL), where changes are first recorded in a durable log before being applied to the main data structures, ensuring recovery mechanisms can reconstruct the committed state. For example, after a commit completes, the effects remain even if the system reboots, providing the reliability needed for critical applications like financial systems.
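As a hedged illustration of atomicity and consistency together, the sketch below extends the earlier sqlite3 example with a CHECK constraint encoding a "no negative balances" rule; an overdraft attempt aborts the whole transfer. Names are again illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CHECK constraint is the consistency rule the database enforces.
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

def transfer(src, dst, amount):
    """Debit and credit succeed together or not at all (atomicity)."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()  # overdraft violated the CHECK constraint

transfer(1, 2, 500)  # fails: rolled back, neither balance changes
```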

Transaction Management

Lifecycle and Operations

A database transaction follows a defined lifecycle that ensures the reliability of modifications, consisting of initiation, execution, termination through commit or rollback, and associated support operations. The process begins when the database management system (DBMS) explicitly or implicitly starts a transaction, assigning it an identifier and allocating resources such as logs to track potential reversals. During execution, the transaction performs a series of read and write operations on database objects, where reads retrieve data without modification and writes update records, often involving temporary locks on affected resources to maintain consistency. These operations are buffered in memory where possible, with changes logged to persistent storage for recovery purposes.

Key operations during the lifecycle include resource locking to prevent conflicting concurrent access, change logging to enable recovery from failures, and the use of savepoints as intermediate markers allowing partial rollback without aborting the entire transaction. Locking mechanisms, such as shared locks for reads and exclusive locks for writes, are acquired dynamically to serialize access to data items. Logging records all modifications in a redo log or write-ahead log (WAL), ensuring that committed changes can be replayed during system recovery to uphold durability. Savepoints divide the transaction into nested subunits, permitting rollback to a prior point if an error occurs in a later segment while preserving earlier work.

The lifecycle concludes with either a commit, which makes all changes permanent, releases locks, and updates the database's consistent view, or a rollback, which undoes all modifications using stored undo data to restore the pre-transaction state. Error handling is integral, as any failure—such as a constraint violation, deadlock, or system crash—triggers an automatic rollback to prevent partial updates, with recovery processes using logs to reconstruct the database to a known consistent state.

For illustration, consider a simple banking transfer scenario: the transaction begins by reading the balances of two accounts; if sufficient funds exist, it writes a debit to the source account and a credit to the destination, acquiring exclusive locks on both; upon successful verification, a commit finalizes the transfer, releasing locks and persisting the changes; however, if funds are insufficient or an error occurs, a rollback restores the original balances, ensuring no money is lost or duplicated.
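The savepoint mechanics can be sketched with SQLite, which supports SAVEPOINT and ROLLBACK TO natively; passing isolation_level=None simply disables the Python driver's implicit transaction handling so the statements can be issued explicitly (the schema is illustrative).

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")

conn.execute("BEGIN")
conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
conn.execute("SAVEPOINT after_debit")  # intermediate marker
try:
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    # Undo only the work done since the savepoint; the debit survives
    # until the enclosing transaction commits or rolls back.
    conn.execute("ROLLBACK TO after_debit")
conn.execute("COMMIT")
```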

Isolation Levels and Concurrency Control

Database transactions require mechanisms to manage concurrency, ensuring that multiple transactions can execute simultaneously without compromising data integrity. Isolation levels define the degree to which one transaction must be isolated from the effects of other concurrent transactions, balancing performance against potential anomalies such as dirty reads, non-repeatable reads, and phantom reads. The ANSI SQL standard specifies four isolation levels—READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE—each permitting progressively fewer anomalies to achieve stronger guarantees.

At the READ UNCOMMITTED level, transactions may read uncommitted changes from other transactions, allowing dirty reads where a transaction observes temporary data that may later be rolled back. READ COMMITTED prevents dirty reads by ensuring reads only access committed data but permits non-repeatable reads, where a transaction may see different values for the same row upon repeated reads due to commits by other transactions. REPEATABLE READ avoids both dirty and non-repeatable reads by locking read rows, yet it allows phantom reads, where new rows satisfying a query condition appear mid-transaction due to inserts by others. SERIALIZABLE provides the strictest isolation, equivalent to executing transactions serially, preventing all three anomalies through techniques that ensure the outcome matches some serial order.

The following table summarizes the ANSI SQL isolation levels and the anomalies they prevent:
Isolation Level  | Dirty Reads | Non-Repeatable Reads | Phantom Reads
READ UNCOMMITTED | Allowed     | Allowed              | Allowed
READ COMMITTED   | Prevented   | Allowed              | Allowed
REPEATABLE READ  | Prevented   | Prevented            | Allowed
SERIALIZABLE     | Prevented   | Prevented            | Prevented
Concurrency control techniques enforce these isolation levels by coordinating access to shared data. Two-phase locking (2PL) is a pessimistic approach in which transactions acquire locks in a growing phase and release them in a shrinking phase, ensuring serializability by preventing cycles in the serialization graph. Timestamp ordering assigns unique timestamps to transactions and orders operations based on these timestamps, aborting those that would violate the order to maintain serializability without locks. Optimistic concurrency control, in contrast, allows transactions to proceed without locks, performing reads and writes locally, then validating at commit time against concurrent changes; conflicts lead to aborts and restarts.

These mechanisms involve trade-offs in performance: stricter isolation reduces concurrency and throughput but enhances correctness. For instance, SERIALIZABLE often incurs higher lock contention and abort rates than READ COMMITTED, which supports greater parallelism at the cost of potential anomalies and can deliver higher throughput in high-contention workloads. Lower isolation levels thus enable better performance in read-heavy environments by minimizing blocking.

Recent developments extend these concepts for modern systems, such as snapshot isolation, which serves reads from a consistent snapshot while allowing concurrent writes, preventing more anomalies than READ COMMITTED but not guaranteeing full serializability. In PostgreSQL, serializable snapshot isolation integrates snapshot isolation with conflict detection to achieve SERIALIZABLE guarantees efficiently, with performance close to plain snapshot isolation in benchmarks and serialization failure rates under 1% in evaluated workloads. This approach suits cloud databases by leveraging versioning to boost throughput in distributed settings.
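The validate-at-commit idea behind optimistic concurrency control can be sketched as a toy in-memory store; the class and method names here are hypothetical, and a real engine would validate full read/write sets per transaction rather than a single key.

```python
import threading

class OptimisticStore:
    """Toy optimistic concurrency control: reads return a version, and a
    write commits only if that version is still current (validation)."""
    def __init__(self):
        self._lock = threading.Lock()  # protects the validate-then-write step
        self._values = {}              # key -> value
        self._versions = {}            # key -> version counter

    def read(self, key):
        with self._lock:
            return self._values.get(key), self._versions.get(key, 0)

    def commit(self, key, new_value, read_version):
        with self._lock:
            if self._versions.get(key, 0) != read_version:
                return False           # conflict: abort, caller restarts
            self._values[key] = new_value
            self._versions[key] = read_version + 1
            return True

store = OptimisticStore()
value, version = store.read("x")
while not store.commit("x", (value or 0) + 1, version):  # retry on conflict
    value, version = store.read("x")
```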

Database Implementations

In Relational Databases

In relational databases, transaction management is standardized through SQL, which provides explicit commands to initiate, commit, or abort transactions, ensuring atomicity and consistency across data manipulation language (DML) and data definition language (DDL) operations. The SQL standard specifies START TRANSACTION (or equivalently BEGIN TRANSACTION in some implementations) to mark the beginning of a transaction, COMMIT to permanently apply changes, and ROLLBACK to undo them, allowing partial rollbacks via SAVEPOINT for nested recovery points within a transaction. These commands integrate seamlessly with DML statements like INSERT, UPDATE, and DELETE, as well as DDL such as CREATE or ALTER TABLE in systems that support transactional DDL, where transactions ensure that schema changes are atomic and reversible if needed.

SQL also defines mechanisms to control transaction isolation, mitigating concurrency issues like dirty reads or phantom reads through the SET TRANSACTION ISOLATION LEVEL statement, which supports four standard levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. This command must be issued at the start of a transaction to enforce the desired level, balancing consistency with performance; for instance, READ COMMITTED prevents dirty reads but allows non-repeatable reads, as per the SQL:1992 specification.

Prominent relational database systems exemplify these standards with engine-specific optimizations. In MySQL, the InnoDB storage engine provides full ACID-compliant transaction support, including row-level locking and crash recovery, and has been the default engine since version 5.5 in 2010, with enhancements in version 8.0 such as improved parallel query execution; as of November 2025, the current long-term support release is 8.4, maintaining these ACID guarantees with further performance optimizations. PostgreSQL implements transactions using Multi-Version Concurrency Control (MVCC), which keeps multiple versions of rows to allow concurrent reads without blocking writes, supporting all SQL isolation levels while minimizing lock contention through visibility rules based on transaction IDs and snapshots. Historically, transaction support in relational databases evolved from early SQL implementations in the 1980s, with Oracle introducing commit/rollback operations in Version 3 (1983) and read consistency in Version 4 (1984) to handle concurrent access reliably.
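The statement sequence defined by the standard can be driven from application code; the sketch below uses a generic Python DB-API connection, where the connection itself is assumed to exist, and the exact placement of SET TRANSACTION ISOLATION LEVEL relative to the transaction start (and the parameter placeholder style) varies by engine and driver.

```python
def transfer_with_isolation(conn, amount):
    """Sketch of the standard SQL command sequence over a DB-API
    connection; statement support and syntax vary by engine/driver."""
    cur = conn.cursor()
    cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
    cur.execute("START TRANSACTION")
    try:
        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = 1",
                    (amount,))
        cur.execute("SAVEPOINT after_debit")  # nested recovery point
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = 2",
                    (amount,))
        cur.execute("COMMIT")
    except Exception:
        cur.execute("ROLLBACK")
        raise
```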

In NoSQL and Object Databases

NoSQL databases often prioritize scalability and availability over strict adherence to ACID properties, adopting instead the BASE model, standing for Basically Available, Soft state, and Eventual consistency. This approach ensures the system remains operational even during network partitions or failures, with data states that may temporarily diverge but converge over time through replication and conflict resolution mechanisms. Unlike relational systems, BASE enables horizontal scaling across distributed nodes without the overhead of immediate consistency guarantees, making it suitable for high-throughput applications like social media feeds or real-time analytics.

In specific NoSQL implementations, transaction support varies to balance these trade-offs. MongoDB introduced multi-document transactions in version 4.0 (released in 2018), allowing atomic operations across multiple documents, collections, and even databases within a single cluster. These transactions leverage snapshot isolation to provide ACID guarantees and have supported sharded deployments since version 4.2; MongoDB 8.0 (released in 2024) further refines these capabilities for more efficient distributed transactions. Apache Cassandra, a wide-column store, offers lightweight transactions (LWTs) using a compare-and-set mechanism based on the Paxos consensus protocol, enabling conditional updates like "insert if not exists" with linearizable consistency for specific operations. However, LWTs are optimized for low-contention scenarios and incur higher latency due to coordination across replicas.

Object databases handle transactions by directly managing object graphs, preserving object identity, encapsulation, and relationships without the need for object-relational mappings. Systems like db4o support commits and rollbacks for entire object hierarchies, treating persistent objects as native extensions of in-memory ones during transactions. Similarly, Versant Object Database (now Actian NoSQL) provides full transactions for complex object structures, including nested references and methods, often integrated via object-database mapping (ODM) tools to simplify persistence in object-oriented languages like Java or C++. This contrasts with relational databases, where complex data types require decomposition into tables, joins, and impedance mismatch resolution, potentially leading to performance bottlenecks in graph-like queries.

A key challenge in NoSQL and object databases is balancing consistency with distribution: strong ACID guarantees can introduce coordination overhead that hinders scalability in partitioned environments, often resulting in eventual consistency trade-offs to maintain availability. Recent advances address this; for instance, Amazon DynamoDB added support for ACID transactions in 2018, enabling atomic operations across multiple items and tables while preserving its serverless, globally distributed architecture. These enhancements use optimistic concurrency control to minimize conflicts, allowing developers to handle complex workflows like inventory updates without custom reconciliation logic.
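For instance, DynamoDB's transactional write API groups conditional operations so they commit atomically or not at all. In the sketch below, the table names, keys, and values are hypothetical, and AWS credentials plus pre-existing tables are assumed; it decrements stock and records an order in one transaction.

```python
import boto3

client = boto3.client("dynamodb")

# Both operations commit atomically, or the whole transaction is canceled.
client.transact_write_items(
    TransactItems=[
        {"Update": {
            "TableName": "inventory",
            "Key": {"sku": {"S": "widget-1"}},
            "UpdateExpression": "SET stock = stock - :n",
            "ConditionExpression": "stock >= :n",  # prevent overselling
            "ExpressionAttributeValues": {":n": {"N": "1"}},
        }},
        {"Put": {
            "TableName": "orders",
            "Item": {"order_id": {"S": "o-123"}, "sku": {"S": "widget-1"}},
        }},
    ]
)
```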

Advanced Systems

Distributed Transactions

Distributed transactions involve coordinating atomic operations across multiple independent database nodes or systems, ensuring that either all participants commit their changes or none do, thereby maintaining the ACID properties in a networked environment. This coordination is essential in scenarios such as multi-site enterprise applications or cloud-based services where data is replicated or sharded across geographically dispersed locations. The primary challenge lies in achieving atomicity despite potential failures, network partitions, and unreliable communication channels.

The two-phase commit (2PC) protocol is a foundational mechanism for atomic commitment in distributed transactions, consisting of a prepare (voting) phase followed by a commit or abort phase. In the prepare phase, a coordinator (transaction manager) sends a prepare request to all participating resource managers (e.g., database nodes), which vote "yes" if they can commit locally or "no" otherwise, often logging their state durably. If all votes are affirmative, the coordinator proceeds to the commit phase, instructing all participants to commit; otherwise, it issues an abort directive. This ensures agreement but can block if the coordinator fails after the prepare phase.

To address blocking issues in 2PC, particularly during coordinator failures, the three-phase commit (3PC) protocol introduces an additional pre-commit phase for enhanced fault tolerance. After the prepare phase (where participants confirm readiness), the coordinator sends a pre-commit message to all prepared nodes, allowing them to acknowledge without yet committing. Only then does the commit phase occur, enabling participants to recover decisions independently if the coordinator fails, as long as no more than a minority of nodes are faulty. However, 3PC relies on timeouts and bounded message delays for failure detection and is more message-intensive, making it suitable for systems prioritizing non-blocking behavior over performance.

The XA protocol, standardized by the X/Open group, provides an interface for implementing distributed transactions in SQL environments, integrating 2PC with resource managers such as databases. It defines functions for transaction managers to enlist resource managers (via xa_open/xa_close), start transaction branches (xa_start), prepare votes (xa_prepare), and commit or roll back (xa_commit/xa_rollback), ensuring atomicity across heterogeneous systems. Despite its robustness, XA faces challenges from network partitions, where communication failures can lead to indefinite blocking or inconsistent states, requiring timeouts and resolution mechanisms for orphaned transactions.

In microservices architectures, where services often manage their own databases, the saga pattern serves as a flexible alternative to 2PC, decomposing long-running distributed transactions into a sequence of local transactions, each with compensating actions to undo partial failures. Originating from work on process models for extended transactions, sagas avoid global locking by coordinating via choreography (event-driven) or orchestration (a central coordinator), trading strict isolation for availability in high-availability scenarios like order processing.

Blockchain-inspired distributed ledgers extend transaction coordination through consensus mechanisms like proof-of-work or proof-of-stake, enabling trustless agreement across untrusted nodes without a central coordinator, as seen in systems such as Bitcoin, where transactions are validated and appended to an immutable chain. Recent cloud-native developments, such as Google Spanner's TrueTime API, leverage synchronized clocks (via GPS and atomic clocks) to assign bounded-uncertainty timestamps to transactions, facilitating externally consistent global reads and writes without traditional 2PC overhead across datacenters. TrueTime exposes a time interval [earliest, latest] with uncertainty ε (typically a few milliseconds), allowing Spanner to order commits globally while tolerating partitions through Paxos-based replication, achieving low-latency transactions at planetary scale.
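A minimal single-process sketch of the 2PC control flow follows; durable vote logging, participant recovery, and message transport are omitted, and all names are illustrative.

```python
class Participant:
    """Toy resource manager: votes in phase one, obeys the decision."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: a real participant would durably log its vote here.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1 (voting): every participant must vote yes.
    if all(p.prepare() for p in participants):
        for p in participants:  # Phase 2: commit everywhere
            p.commit()
        return "committed"
    for p in participants:      # any "no" vote aborts everywhere
        p.abort()
    return "aborted"

nodes = [Participant("db1"), Participant("db2", can_commit=False)]
print(two_phase_commit(nodes))  # -> "aborted"
```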

Transactional File Systems

Transactional file systems apply principles of database transactions to file operations, enabling atomicity for actions such as creating, modifying, or deleting multiple files as a single unit, ensuring that either all changes succeed or none are applied. This approach leverages mechanisms like journaling or copy-on-write to maintain consistency and recoverability, similar to ACID durability in databases but tailored to storage I/O layers.

The evolution of transactional file systems began in the late 1980s and early 1990s with operating systems research in which log-structured file systems (LFS) were extended with embedded transaction support for fault-tolerant file operations. Early implementations focused on embedding transaction managers within LFS to handle atomic updates and recovery from crashes, laying the groundwork for broader adoption in production systems during the 1990s and 2000s. Since then, these concepts have advanced into modern designs that integrate with user-space applications and distributed environments.

A prominent example is Microsoft's NTFS, which incorporates journaling through its $LogFile to record metadata changes, ensuring recoverability after failures, and extends this with Transactional NTFS (TxF) for explicit user-level transactions on file operations like renames or deletions across files. TxF uses the Kernel Transaction Manager to provide atomicity, allowing applications to group operations and roll back on errors, though it introduces some overhead due to additional logging. However, Microsoft has indicated that TxF may be deprecated in future Windows versions, advising developers to explore alternatives like the ReplaceFile API or database solutions.

Reiser4, developed for Linux as a successor to ReiserFS, introduces advanced transactional capabilities with support for user-defined transaction models, enabling atomic operations across file boundaries via a redo-only write-ahead log and plugin-based extensibility. Development stalled following the 2008 conviction of its creator, Hans Reiser, for second-degree murder, which dissolved Namesys and prevented kernel integration. Despite its innovative design for handling complex, multi-file updates efficiently, Reiser4 remains an out-of-tree file system with limited mainstream adoption due to integration challenges with the mainline Linux kernel.

ZFS, originally from Sun Microsystems, with open-source development maintained by the OpenZFS community and a proprietary version by Oracle, operates as a transactional file system using copy-on-write semantics to ensure all modifications are atomic, preventing partial updates during crashes. Its snapshots function as pseudo-transactions by capturing consistent point-in-time views of the file system with minimal overhead, supporting versioning and rollback while consuming space only for diverged data.

These systems offer key benefits, including robust crash recovery for interrupted file operations and built-in versioning to preserve historical states, which enhances data resilience in environments prone to failures. However, they often incur performance limitations from the overhead of journaling or copy-on-write, potentially reducing throughput for high-frequency small-file workloads compared to non-transactional alternatives. In contemporary research, transactional principles continue to evolve, as in TxFS on Linux, which builds on the journaling machinery of file systems such as ext4 to provide ACID guarantees for application-level file transactions. This progression supports scalable, fault-tolerant storage in cloud environments, extending early OS-level innovations to handle networked and elastic workloads.
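While multi-file transactions require system support such as TxF, applications commonly approximate single-file atomicity with the write-temp-then-rename pattern, sketched below; os.replace performs an atomic rename on POSIX systems, and Windows offers comparable semantics (e.g., the ReplaceFile API mentioned above). File names are illustrative.

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Crash-safe single-file update: write a temp file, flush and fsync
    it, then atomically rename it over the target. After a crash the
    file holds either the old or the new contents, never a torn mix."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data to stable storage
        os.replace(tmp, path)     # atomic rename over the destination
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write("config.json", b'{"version": 2}')
```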

    Jan 7, 2021 · Transactional NTFS (TxF) integrates transactions into the NTFS file system, which makes it easier for application developers and administrators ...Missing: journaling | Show results with:journaling