
Multi-master replication

Multi-master replication is a method of database replication in distributed systems where multiple nodes, known as master sites, can independently perform read and write operations on shared data, with updates propagated asynchronously or synchronously to all participating nodes to achieve eventual or immediate consistency. This approach contrasts with single-master replication, where only one designated node accepts writes while others serve as read-only replicas. In a multi-master setup, each master maintains a change log or queue of pending changes, which are then forwarded to other masters via database links or replication protocols, ensuring that modifications such as inserts, updates, and deletes are synchronized across the group. Asynchronous modes, in many systems, defer propagation using store-and-forward mechanisms, allowing temporary divergence that is resolved through conflict detection at the row or transaction level. Synchronous modes require changes to be acknowledged by all nodes before committing the original transaction, often using two-phase commit protocols to enforce strict consistency.

The primary benefits of multi-master replication include enhanced availability through automatic failover, load balancing of write operations across geographically distributed sites, and support for disconnected environments such as mobile or remote access scenarios. For example, EDB Replication Server enables on-demand or scheduled synchronization among primary nodes, facilitating scalable data sharing in enterprise environments.

However, multi-master replication introduces significant challenges, particularly in conflict resolution, where concurrent updates on different masters—such as update conflicts, uniqueness violations, or delete conflicts—must be handled using methods like site priority, timestamp-based last-writer-wins, or custom row-level rules. Asynchronous propagation can lead to loose consistency and potential data divergence during network partitions, while synchronous approaches demand robust, low-latency networks to avoid performance bottlenecks from locking and commit delays. These complexities often require careful configuration, such as defining a primary definition node for administrative operations, along with tools to track replication errors.

Core Concepts

Definition and Overview

Multi-master replication is a replication strategy in distributed systems where multiple nodes, designated as masters, can independently accept both read and write operations, with subsequent changes propagated to all participating nodes to maintain consistency across the system. This approach enables a write-anywhere architecture, allowing clients to direct writes to any master without routing through a central authority, in contrast to setups involving read-only replicas that cannot process updates. Propagation of changes can occur asynchronously, where updates are sent periodically or on a schedule to minimize write latency, or synchronously, ensuring immediate consistency but potentially at the cost of higher coordination overhead.

The concept emerged amid growing demands for high availability in enterprise environments, evolving from earlier single-master models that relied on a single authoritative node for writes, to better support geo-distributed workloads and fault tolerance. As networks became more reliable and applications required higher availability, multi-master designs addressed limitations of centralized replication by distributing write responsibilities, facilitating scenarios like global content delivery and collaborative databases. Understanding multi-master replication builds on the broader principle of data replication in distributed systems, where copies of data are maintained across multiple nodes to enhance reliability and performance, though it specifically emphasizes bidirectional synchronization among equals rather than hierarchical structures. This foundational model underpins many modern databases and cloud services, enabling scalable operations in environments prone to partitions or failures.

Comparison with Other Replication Models

Multi-master replication differs from single-master replication, also known as master-slave or primary-secondary replication, in which a single designated node handles all write operations while multiple read-only replicas receive asynchronous or synchronous updates from the primary. This topology avoids write conflicts by funneling updates through a single point, but it introduces failover complexity, as promoting a replica to primary requires manual or automated intervention, potentially leading to brief downtime during leader elections. In contrast, multi-master allows writes on any master node, distributing write load and enabling automatic failover without designated primaries, though it demands robust conflict resolution to maintain consistency across concurrent updates.

Compared to leaderless replication, as exemplified in Amazon's Dynamo system, multi-master replication designates specific nodes as writable masters, whereas leaderless approaches permit writes to any node in the cluster without a fixed leader. Leaderless systems rely on quorum-based protocols—requiring a majority of nodes to acknowledge writes and reads—to balance availability and durability, often favoring eventual consistency over strong guarantees to handle partitions gracefully. Multi-master, however, typically enforces stronger consistency models within the master set through synchronized propagation, but it can suffer higher coordination overhead than leaderless quorums, making the latter more suitable for high-throughput, partition-tolerant workloads.

Hybrid models blend elements of these topologies, evolving from single-master setups by incorporating multi-master capabilities in select scenarios to optimize for both consistency and availability; for instance, a primary region may use single-master replication for local efficiency, while cross-region links employ multi-master replication for global resilience. This evolution addresses limitations in pure single-master systems, such as geographic latency, by selectively enabling multi-master writes in distributed environments without fully decentralizing control.

Multi-master replication excels in use cases requiring low-latency writes across geographies, such as globally distributed or collaborative applications, where distributing masters reduces delays compared to centralized single-master topologies. Conversely, single-master suffices for centralized online transaction processing (OLTP) workloads, like regional banking systems, where simpler consistency and administration outweigh the need for write scaling. Leaderless models, meanwhile, suit highly available key-value stores with tunable consistency, but multi-master provides a middle ground for relational databases needing predictable synchronization in multi-region deployments.

Technical Mechanisms

Synchronization Processes

In multi-master replication, synchronization processes primarily revolve around propagating updates across multiple master nodes to maintain data consistency. These processes typically employ protocols that capture, transmit, and apply changes in a coordinated manner, balancing reliability with operational efficiency. Propagation can occur asynchronously or synchronously. Asynchronous propagation is non-blocking: the originating master acknowledges the client write immediately after logging the change locally, without waiting for remote confirmations; this enables high throughput and eventual consistency but risks temporary data divergence during failures. Synchronous propagation, conversely, blocks the write operation until a specified number of remote masters confirm receipt and application of the update, ensuring stronger immediate consistency at the expense of higher response times. The choice between these modes depends on workload demands, with asynchronous suiting high-write scenarios and synchronous prioritizing durability.

To capture changes for propagation, log-based mechanisms are fundamental. Write-ahead logging (WAL) records all modifications to the database prior to their commitment to disk, providing a durable sequence of operations that can be streamed or shipped to other nodes for replay. Similarly, binary logs maintain a record of executed statements or row-level alterations, facilitating efficient transmission of updates in a serialized format. Change data capture (CDC) builds on these logs by extracting only incremental modifications—such as inserts, updates, and deletes—and formatting them for targeted replication, minimizing the data volume transferred compared to full snapshots.

Synchronization topologies define how changes flow between masters, influencing resilience and overhead. In a ring topology, nodes connect sequentially in a circular fashion, with each passing updates to its neighbors until all receive them; this distributes load evenly but requires careful management to avoid bottlenecks during propagation loops. A full mesh topology connects every master directly to every other, enabling rapid direct dissemination but scaling poorly with node count due to quadratic connection growth. Hub-and-spoke topologies route changes through a central hub that reconciles and forwards them to peripheral spokes, simplifying management in hierarchical environments while concentrating load on the hub. These structures must handle network partitions—temporary disconnections dividing the cluster—often by pausing non-quorum writes or queuing updates for resynchronization upon reconnection to prevent permanent splits.

Performance in these processes is shaped by latency, bandwidth, and failure recovery. Asynchronous modes introduce replication lag, typically measured in seconds for cross-region setups, allowing reads from any master but potentially serving stale data briefly. Bandwidth usage scales with change volume, as log shipping transmits only deltas, though full resyncs after prolonged outages can consume significant resources. Retry logic addresses failed propagations through mechanisms such as exponential backoff and persistent queuing, ensuring delivery without overwhelming the network, though excessive retries can amplify load in unstable conditions. Concurrent writes across nodes may introduce brief conflict risks during propagation, necessitating downstream detection.
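
The following Python sketch illustrates, under simplified assumptions, how asynchronous log-based propagation of this kind can work: each master appends committed changes to an ordered local log, and peers pull only the entries past their last applied position, retrying with exponential backoff on transient failures. Names such as ChangeLog and pull_from are illustrative and not drawn from any particular system.

```python
import time

class ChangeLog:
    """Ordered log of committed changes on one master (a stand-in for WAL/binlog/CDC output)."""
    def __init__(self):
        self.entries = []                      # list of (position, operation) tuples

    def append(self, operation):
        position = len(self.entries) + 1       # monotonically increasing log position
        self.entries.append((position, operation))
        return position

    def since(self, position):
        """Return only the incremental changes past a peer's last applied position."""
        return [entry for entry in self.entries if entry[0] > position]

class Master:
    def __init__(self, name):
        self.name = name
        self.log = ChangeLog()
        self.applied = {}                      # peer name -> last applied log position

    def write(self, operation):
        # Asynchronous mode: acknowledge the client as soon as the change is logged locally.
        return self.log.append(operation)

    def pull_from(self, peer, max_retries=5):
        last = self.applied.get(peer.name, 0)
        for attempt in range(max_retries):
            try:
                for position, operation in peer.log.since(last):
                    # Apply the remote change locally (application of `operation` omitted).
                    self.applied[peer.name] = position
                return
            except OSError:                    # e.g. a transient network failure
                time.sleep(2 ** attempt)       # exponential backoff before retrying

a, b = Master("a"), Master("b")
a.write("INSERT ...")
b.pull_from(a)        # b catches up to a's log asynchronously
```

In a real deployment the apply step would also run the conflict-detection logic described in the next subsection before installing remote changes.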

Conflict Detection and Resolution

In multi-master replication, conflicts occur when divergent updates to the same data item propagate asynchronously across replicas, leading to inconsistencies that must be detected during synchronization. Detection methods rely on metadata attached to data versions to identify such divergences. Version vectors, consisting of counters for each replica's updates, enable efficient detection of mutual inconsistencies by comparing whether one vector dominates another or whether they indicate concurrent modifications. Similarly, vector clocks assign a vector of logical timestamps to events, capturing causal dependencies; incomparable vectors signal concurrent, potentially conflicting operations. Timestamps, often derived from physical clocks or hybrid logical clocks, provide a simpler mechanism to flag conflicting updates by recording apparent occurrence times, though they risk inaccuracies due to clock skew.

Once detected, conflicts require resolution strategies to reconcile divergent states and restore convergence. The last-write-wins (LWW) approach selects the update with the latest timestamp, discarding others, which is straightforward but may lead to lost updates if timestamps are imprecise. Alternatives include first-write-wins policies, which prioritize the earliest timestamped update, or custom merge functions tailored to application semantics, such as combining sets or averaging numerical values. Manual intervention, where human operators or application logic resolve ambiguities, offers flexibility but increases operational overhead. Conflict-Free Replicated Data Types (CRDTs) address these issues proactively by designing structures—such as counters (e.g., G-Counters), sets (e.g., OR-Sets), or sequences (e.g., RGAs)—whose operations are commutative or form a join semilattice, allowing automatic merging via least upper bounds without explicit conflict handling.

Quorum-based techniques mitigate conflicts during operations rather than after detection, by enforcing read quorums R and write quorums W such that W + R > N (where N is the total number of replicas), ensuring that any read quorum overlaps the most recent write quorum in at least one replica and preventing unobserved divergences. This approach, as implemented in systems like Dynamo, trades some availability for higher consistency guarantees during normal operation. These detection and resolution mechanisms embody trade-offs dictated by the CAP theorem, which demonstrates that distributed systems cannot simultaneously provide consistency, availability, and partition tolerance; multi-master setups typically favor availability and partition tolerance, accepting eventual consistency over strict consistency during network partitions.
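
As a concrete illustration of these detection and merge ideas, the sketch below compares two version vectors to classify updates as ordered or concurrent, and shows a grow-only counter (G-Counter) CRDT whose merge is simply the element-wise maximum. The replica identifiers are arbitrary placeholders, not references to any specific system.

```python
def compare_version_vectors(a, b):
    """Classify two version vectors as equal, ordered, or concurrent (a conflict)."""
    replicas = set(a) | set(b)
    a_dominates = all(a.get(r, 0) >= b.get(r, 0) for r in replicas)
    b_dominates = all(b.get(r, 0) >= a.get(r, 0) for r in replicas)
    if a_dominates and b_dominates:
        return "equal"
    if a_dominates:
        return "a supersedes b"
    if b_dominates:
        return "b supersedes a"
    return "concurrent"          # neither dominates: concurrent writes, resolution needed

class GCounter:
    """Grow-only counter CRDT: each replica increments its own slot; merge takes per-slot maxima."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, amount=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def merge(self, other):
        for replica, count in other.counts.items():
            self.counts[replica] = max(self.counts.get(replica, 0), count)   # least upper bound

    def value(self):
        return sum(self.counts.values())

# Concurrent increments on two masters converge after merging in either order.
a, b = GCounter("node-a"), GCounter("node-b")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
print(compare_version_vectors({"node-a": 1}, {"node-b": 1}))   # -> "concurrent"
```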

Benefits

High Availability and Fault Tolerance

Multi-master replication enhances availability by distributing write capabilities across multiple master nodes, eliminating the single point of failure inherent in single-master architectures and allowing the system to continue operations even if individual nodes fail. In this setup, each master can independently process transactions, and the surviving nodes maintain service continuity without requiring manual intervention for basic operations. This decentralized approach ensures that read and write requests can be routed to any available master, providing seamless redundancy against hardware, software, or network disruptions.

Failover dynamics in multi-master systems rely on automatic mechanisms where, upon detecting a node failure, the remaining masters continue to accept and process workloads without promotion steps, as all masters are inherently active. For instance, in distributed databases like Amazon DynamoDB Global Tables, applications can redirect traffic to unaffected regions almost instantaneously, avoiding the downtime associated with electing a new primary. This contrasts with master-slave models by enabling true multi-active redundancy, where failover is often transparent and limited to reconfiguring client connections rather than halting the entire system.

Redundancy benefits are amplified through geo-replication, where data is synchronously or asynchronously mirrored across geographically dispersed data centers to support disaster recovery and minimize downtime from regional outages or network partitions. By maintaining identical datasets on multiple masters in different locations, systems can achieve rapid recovery, often with zero data loss when synchronous protocols are used, thereby reducing recovery time objectives to seconds or minutes. This geo-distributed redundancy not only tolerates node or site failures but also protects against broader events such as natural disasters, ensuring business continuity. Multi-site deployments leveraging multi-master replication can attain five-nines availability (99.999%), equating to less than 5.26 minutes of unplanned downtime per year, as demonstrated in services like DynamoDB Global Tables. In real-world applications, this resilience has enabled platforms using multi-primary setups to sustain operations during server outages. Such fault tolerance synergizes with load distribution to support peak traffic without compromising uptime.
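
The downtime figure above follows directly from the availability percentage; the short calculation below makes the relationship explicit.

```python
# Annual downtime implied by a given availability level (assuming a 365-day year).
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
for availability in (0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * minutes_per_year
    print(f"{availability:.3%} available -> {downtime:.2f} minutes of downtime per year")
# 99.999% ("five nines") yields roughly 5.26 minutes per year.
```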

Scalability and Load Distribution

Multi-master replication facilitates horizontal scaling by enabling the addition of multiple master nodes to distribute write operations across geographically dispersed locations or data shards, thereby accommodating growth in data volume and query loads without centralizing updates on a single node. In systems like the Anna key-value store, this approach leverages multi-master selective replication to achieve linear throughput increases as nodes are added, supporting workloads with varying access patterns such as those in distributed caches. By partitioning data and assigning mastership dynamically, replication overhead is minimized, allowing clusters to expand to dozens of nodes while maintaining performance.

Unlike single-master models, which limit write scaling to the primary node and rely on read replicas for query distribution, multi-master setups permit all nodes to process both reads and writes, enabling balanced load distribution and preventing bottlenecks in write-intensive applications. For instance, MySQL Group Replication demonstrates this by routing transactions across all members, achieving up to 84% of asynchronous replication throughput on mixed read-write benchmarks with nine nodes, compared to single-master constraints. This read-write symmetry supports elastic environments where traffic spikes can be handled by redirecting operations to underutilized masters.

Elasticity in multi-master replication is enhanced through mechanisms for dynamic node addition or removal with minimal reconfiguration, allowing systems to adapt to fluctuating demands without downtime. The DynaMast framework, for example, uses adaptive remastering to transfer data ownership based on learned access patterns, enabling seamless scaling and up to 1.6 times throughput gains under changing workloads. In high-traffic scenarios akin to social media feeds, such elasticity has been shown to boost overall system throughput; Anna's multi-master design, for instance, delivers 350 million operations per second under contention-heavy loads, outperforming traditional stores by 10 times in distributed settings with hot-key access patterns common in feed generation.
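
A minimal sketch of the write-routing idea behind such partitioned mastership is shown below: keys are hashed to whichever master currently owns that slice of the keyspace, and "remastering" amounts to changing the ownership map. The node names and mapping scheme are illustrative only, not taken from Anna or DynaMast.

```python
import hashlib

# Hypothetical ownership map: each master owns a slice of the hash-partitioned keyspace.
ownership = {0: "master-us", 1: "master-eu", 2: "master-ap"}

def owning_master(key):
    slice_id = int(hashlib.sha1(key.encode()).hexdigest(), 16) % len(ownership)
    return ownership[slice_id]          # writes for this key are routed to the owning master

# Remastering a hot slice is just an update to the ownership map, e.g. after load shifts:
ownership[1] = "master-us"
print(owning_master("user:1842"))
```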

Challenges

Consistency Trade-offs

Multi-master replication inherently involves trade-offs in data consistency to achieve higher availability and partition tolerance in distributed environments. Unlike strongly consistent models, where all reads reflect the most recent write across all nodes, multi-master systems often adopt eventual consistency, ensuring that updates propagate asynchronously and replicas converge to the same state over time if no further writes occur. This approach allows writes on any master but introduces delays in synchronization, leading to temporary inconsistencies where nodes may operate with divergent data views. For instance, in Amazon's Dynamo, eventual consistency is implemented using vector clocks to track versions, enabling conflict detection but permitting reads to return outdated values until reconciliation.

The CAP theorem underscores these compromises, stating that in the presence of network partitions, a distributed system cannot simultaneously guarantee both consistency and availability. Multi-master replication typically prioritizes availability and partition tolerance (AP systems), accepting reduced consistency during partitions to prevent unavailability; for example, writes may succeed on isolated nodes, but reads from other nodes could reflect stale data until reconnection and propagation. This design choice, as articulated by Brewer, means that strong consistency would require blocking operations during periods of uncertainty, which undermines the scalability benefits of multi-master setups. In practice, systems like Dynamo employ "sloppy quorums" to maintain availability, where reads and writes proceed with relaxed quorum requirements, further emphasizing the availability-consistency trade-off.

Stale reads and writes exacerbate these issues, manifesting as anomalies such as lost updates and write skews with significant business implications. A lost update occurs when two transactions read the same value from different masters, each modifying it locally before propagation, resulting in one update overwriting the other and losing data—for example, in inventory systems, this could lead to overselling items. Write skews arise when concurrent transactions read overlapping but non-conflicting data, then perform writes that violate application-level constraints upon propagation, such as two reservations booking the last two seats in a venue based on a stale total availability count, potentially causing overcommitment in ticketing or financial applications. These anomalies, common under snapshot isolation in replicated environments, can erode trust and require manual intervention, highlighting the operational risks of relaxed consistency.

To mitigate these trade-offs, multi-master systems tune quorum parameters—such as requiring writes to a subset of nodes (W) and reads from another subset (R) where W + R > N (the total number of replicas)—to bound staleness while preserving availability; Dynamo, for example, commonly uses N=3, R=2, W=2 for tunable consistency. Additionally, multi-version concurrency control (MVCC) allows readers to access consistent snapshots without blocking writers, reducing contention and supporting snapshot-based isolation in distributed settings, as seen in systems providing versioned data to handle concurrent updates. Conflict detection and resolution strategies, such as timestamp-based merging, can further address anomalies after propagation.
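
The sketch below reproduces the lost-update anomaly described above under a simplified last-writer-wins merge: two masters concurrently decrement the same inventory counter, and the merge keeps only one of the writes. The data structures are illustrative and not taken from any specific product.

```python
import time

def lww_merge(version_a, version_b):
    """Last-writer-wins: keep whichever replicated version carries the later timestamp."""
    return version_a if version_a["ts"] >= version_b["ts"] else version_b

# Both masters start from the same replicated value: 10 items in stock.
shared = {"value": 10, "ts": time.time()}

# Concurrently, each master sells one item and records its own new version.
on_master_a = {"value": shared["value"] - 1, "ts": time.time()}
on_master_b = {"value": shared["value"] - 1, "ts": time.time()}

merged = lww_merge(on_master_a, on_master_b)
print(merged["value"])   # 9, although two items were sold: one update was silently lost
```

A quorum configuration with W + R > N, or a conflict-aware merge (for example, applying both decrements as deltas), avoids this silent loss.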

Management and Complexity Issues

Managing multi-master replication systems introduces significant administrative overhead, primarily due to the need for continuous monitoring of replication processes and node health. Administrators must employ specialized tools to track replication lag, which can result in extended delays during failover or recovery events if updates are applied serially to replicas. Conflict detection mechanisms are essential to identify concurrent writes that could lead to transaction aborts and throughput limitations, since multiple masters processing overlapping transactions increase the risk of inconsistencies. Additionally, monitoring node health is critical in environments prone to frequent failures, such as one failure per day across 200 processors, to ensure timely recovery and prevent data divergence.

Configuration challenges further complicate operations in multi-master setups, requiring meticulous alignment of database schemas across all nodes to avoid errors in distributed queries that span multiple replicas. Security management adds complexity, as middleware layers often interfere with authentication protocols, making it difficult to enforce consistent access controls without compromising replication efficiency. Handling schema changes demands careful coordination, such as updating triggers for writeset extraction or using two-phase commits for DDL operations, to maintain replication integrity without disrupting ongoing synchronization. These tasks are particularly arduous in asynchronous models, where temporary inconsistencies from delayed updates heighten the risk of misalignment.

The cost implications of multi-master replication are substantial, stemming from elevated resource consumption across multiple writable nodes and the need for advanced infrastructure. For example, certain replication strategies, such as executing every statement on all replicas, lead to inefficient resource utilization by redundantly processing read operations, thereby increasing hardware and operational expenses. Troubleshooting efforts are resource-intensive, often necessitating full system recovery due to the lack of standardized interfaces for querying replication states, which amplifies costs in large-scale deployments. These factors contribute to higher overall infrastructure demands compared to single-master architectures.

Operating multi-master systems demands specialized expertise in distributed systems, far exceeding the skills required for simpler single-master operations, including proficiency in tuning group communication protocols and managing wide-area network latencies. Administrators must navigate complex recovery protocols and coordination strategies, such as centralized sequencers or hybrid models that dynamically adjust to workload conditions, to mitigate operational risks. This elevated knowledge barrier often exacerbates management challenges, particularly when consistency trade-offs from asynchronous replication introduce additional layers of oversight.

Implementations in Directory Services

Microsoft Active Directory

Microsoft Active Directory (AD) implements multi-master replication to enable multiple domain controllers to accept and propagate directory updates independently, ensuring data consistency across distributed environments. This model, introduced with Windows 2000 Server, allows changes made on any domain controller to replicate to all others, supporting global enterprises by facilitating scalable, fault-tolerant directory services without a single master for most operations. The replication topology is managed by the Knowledge Consistency Checker (KCC), an automated process running on each domain controller that generates and maintains connection objects for efficient data flow. The KCC creates intrasite topologies as bidirectional rings for rapid convergence and intersite topologies as spanning trees to minimize WAN traffic, dynamically adjusting for additions or failures of domain controllers. Changes are tracked using update sequence numbers (USNs), monotonically increasing integers assigned to each update on a domain controller, which allow replicas to identify and request only new or modified data during synchronization.

Synchronization occurs in a pull-based manner among domain controllers, with intrasite replication happening frequently via RPC over IP using uncompressed transfers to ensure low-latency updates within local networks. Intersite replication, optimized for wide-area networks, uses site links—logical connections between sites configured with costs, schedules, and replication intervals—to compress data and route changes through designated bridgehead servers, reducing bandwidth usage in geographically dispersed deployments. By default, site links are transitive, enabling indirect paths for replication across multiple sites without manual configuration of every pair.

Conflicts arising from simultaneous updates are resolved at the attribute level using a combination of USNs, timestamps, and precedence rules defined in the schema. When replicating, the version with the highest USN prevails; if USNs tie, the latest timestamp determines the winner, while schema-based precedence ensures critical attributes (e.g., security identifiers) override others in multi-valued scenarios. This mechanism promotes eventual consistency, where all domain controllers converge to the same state after propagation completes.
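
The following simplified Python sketch illustrates the USN high-watermark idea: each domain controller tracks the highest USN it has already received from each partner and pulls only newer changes. It is a conceptual illustration rather than Microsoft code, and the class and field names are invented for clarity.

```python
class DomainController:
    def __init__(self, name):
        self.name = name
        self.usn = 0                 # highest USN issued locally
        self.changes = []            # (usn, distinguished_name, attribute, value)
        self.watermarks = {}         # partner name -> highest partner USN already applied

    def local_update(self, dn, attribute, value):
        self.usn += 1                                # USNs increase monotonically per DC
        self.changes.append((self.usn, dn, attribute, value))

    def pull_from(self, partner):
        """Pull-based replication: request only changes above the stored high watermark."""
        watermark = self.watermarks.get(partner.name, 0)
        for usn, dn, attribute, value in partner.changes:
            if usn > watermark:
                self.local_update(dn, attribute, value)   # replicated change gets a local USN
                self.watermarks[partner.name] = usn

dc1, dc2 = DomainController("DC1"), DomainController("DC2")
dc1.local_update("CN=Alice", "telephoneNumber", "555-0100")
dc2.pull_from(dc1)               # DC2 receives only the changes it has not yet seen
```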

OpenLDAP

OpenLDAP implements multi-master replication through the syncprov overlay, which enables multiple directory servers to act as providers that accept write operations and synchronize changes among themselves. This overlay is applied to backends such as back-mdb or back-bdb, configuring the server to track and propagate modifications using the syncrepl protocol. The syncrepl protocol facilitates both pull-based synchronization, where consumers request updates from providers, and push-like notifications in persistent modes, ensuring changes are replicated across the cluster.

The synchronization process follows a provider-consumer model, where each master server functions as both a provider of its local changes and a consumer of updates from peers. In refreshOnly mode, consumers periodically poll providers for changes using synchronization cookies to resume from the last known state, typically configured with intervals such as 24 hours for initial setups. For real-time replication, refreshAndPersist mode combines an initial refresh with ongoing persistent searches, allowing providers to notify consumers of modifications as they occur without requiring consumer-initiated pulls. This approach supports various topologies, including N-way multi-master configurations, and relies on LDAP Sync informational messages to convey add, modify, and delete operations.

Conflict handling in multi-master replication occurs at the entry level, leveraging entryUUIDs for unique identification and contextCSN (change sequence number) values for ordering updates based on timestamps with sub-second precision. When concurrent modifications to the same entry arrive, the system reconciles them by retaining the version with the most recent CSN, ensuring convergence across replicas; deletes are handled via a two-phase present/delete mechanism to avoid premature removal. While the core reconciliation logic is built in, administrators can extend conflict handling through custom overlays or scripts for domain-specific logic, though standard setups prioritize timestamp-based reconciliation to minimize administrative overhead.

Multi-master support evolved significantly in OpenLDAP 2.4 and later versions, replacing the deprecated slurpd daemon with the integrated syncrepl engine for self-synchronizing, order-independent updates that enhance scalability in large directories. Features like configurable checkpoints (e.g., every 100 operations or 10 minutes) and session logs (e.g., buffering 100 operations) optimize performance by reducing network overhead and enabling efficient resumption after failures. These improvements allow OpenLDAP to handle enterprise-scale deployments with thousands of entries without requiring provider restarts for new replicas, marking a shift toward more robust, high-availability directory services.
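
Conceptually, the newest-CSN-wins rule can be expressed as a small comparison, sketched below in Python; the CSN layout shown is simplified and the entry structure is hypothetical, so this is only a model of the behaviour described above.

```python
def reconcile(local_entry, remote_entry):
    """Keep the version of an entry carrying the most recent change sequence number.
    CSNs embed a zero-padded UTC timestamp (e.g. '20250101120000.000001Z#...'),
    so lexicographic comparison orders them chronologically in this simplified model."""
    assert local_entry["entryUUID"] == remote_entry["entryUUID"]   # same logical entry
    if remote_entry["entryCSN"] > local_entry["entryCSN"]:
        return remote_entry
    return local_entry

local  = {"entryUUID": "u1", "entryCSN": "20250101120000.000001Z#000000#001#000000",
          "mail": "a@example.com"}
remote = {"entryUUID": "u1", "entryCSN": "20250101120005.000002Z#000000#002#000000",
          "mail": "b@example.com"}
print(reconcile(local, remote)["mail"])   # the later write wins -> b@example.com
```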

Implementations in Relational Databases

MySQL and MariaDB

Multi-master replication in MySQL and MariaDB is primarily implemented through Galera Cluster, a synchronous solution that enables writes to any node in the cluster while ensuring data consistency across all members. Galera Cluster integrates with both MySQL (starting from version 5.5) and MariaDB via the wsrep provider library, transforming the standard single-primary replication into a virtually synchronous multi-master setup. This implementation relies on certification-based replication, where transactions are executed locally on a node and then certified for replication to the group, allowing for active-active topologies without manual failover procedures.

The synchronization process in Galera Cluster uses the wsrep (write-set replication) API, a generic plugin interface between the database server and the Galera replication library. When a transaction commits on one node, its write set is broadcast to the cluster via a group communication system (GCS) framework, which ensures total ordering of transactions using a communication backend over TCP or UDP multicast for reliable delivery. This enables parallel applying of certified transactions on receiving nodes, configurable via the wsrep_slave_threads parameter, which can be set to match the number of CPU cores to optimize throughput while maintaining consistency. The process supports flow control to prevent overload, throttling replication if a node's apply queue exceeds thresholds.

Conflict resolution employs an optimistic locking mechanism, where transactions proceed without initial locking across nodes, relying on certification at commit time. Each transaction is assigned a sequence number (seqno) by the group communication layer to enforce a global order, and the write set is certified against the current database state using row hashing or keyset comparisons. If a conflict is detected—such as concurrent modifications to the same rows—the certification fails, and the transaction is aborted with a deadlock error (error 1213), requiring application-level retries. This approach minimizes locking overhead but can lead to higher retry rates under high contention, and it assumes primary keys on all tables for efficient certification.

Galera Cluster enhances failover capabilities with support for global transaction identifiers (GTIDs), introduced in MySQL 5.6 and fully integrated in later versions through the wsrep_gtid_mode variable, which ensures consistent GTID assignment across the cluster for seamless recovery and replication tracking. This feature simplifies administration by allowing positionless replication setups and is particularly useful for integrating Galera with asynchronous replicas. Widely adopted since the 5.5 era for its stability in production environments, Galera provides quorum-based membership to avoid split-brain scenarios, requiring a majority of nodes (N/2 + 1) for write operations.
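
A toy model of the certification step is sketched below: a write set is checked against write sets certified after the transaction's snapshot, and any key overlap causes the transaction to be rejected (which surfaces to the client as a deadlock error). This is a conceptual illustration of certification-based replication, not Galera source code.

```python
class CertificationIndex:
    def __init__(self):
        self.next_seqno = 1
        self.certified = []          # (seqno, frozenset of modified row keys)

    def certify(self, write_keys, last_seen_seqno):
        """Certify a transaction's write set against everything certified after its snapshot."""
        for seqno, keys in self.certified:
            if seqno > last_seen_seqno and keys & write_keys:
                return None          # conflict: abort the transaction (deadlock error 1213)
        seqno = self.next_seqno
        self.next_seqno += 1
        self.certified.append((seqno, frozenset(write_keys)))
        return seqno                 # globally ordered commit position

index = CertificationIndex()
t1 = index.certify({"accounts:42"}, last_seen_seqno=0)   # certified and committed
t2 = index.certify({"accounts:42"}, last_seen_seqno=0)   # started from the same snapshot
print(t1, t2)   # 1 None -> the second concurrent update fails certification and must retry
```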

PostgreSQL

PostgreSQL does not provide native multi-master replication but achieves it through extensions built on its core replication features, including physical streaming replication via write-ahead log (WAL) shipping and logical replication using a publish-subscribe model. Physical streaming replication primarily supports primary-standby setups for high availability, where changes are synchronously or asynchronously applied from a primary to standbys, but extensions enable bidirectional flows for multi-master scenarios. Tools like pgEdge and EDB BDR (Bi-Directional Replication) extend these capabilities to support true multi-master replication across multiple nodes.

pgEdge leverages logical replication through the Spock extension, enabling bi-directional, real-time data synchronization across distributed nodes, where each node can act as both publisher and subscriber. In pgEdge, changes are decoded from the WAL into logical format and applied via a delta-apply mechanism, supporting granular replication at the row, column, or table level. EDB BDR provides synchronous and asynchronous multi-master replication using logical replication, allowing writes on any node with automatic propagation and support for up to 48 nodes in active-active configurations. It coordinates commit ordering across nodes and integrates with PostgreSQL's WAL for change capture.

Conflict resolution in multi-master setups is handled by these tools, as core logical replication assumes read-only subscribers to avoid conflicts. pgEdge uses timestamp-based resolution to prioritize the most recent update and logs conflicts in a dedicated table for auditing, with the delta-apply method minimizing conflicts by applying incremental changes rather than full row replacements. EDB BDR employs flexible resolution strategies, including last-update-wins, custom functions, or session-based rules, ensuring convergence while supporting application-level handling for complex cases. These approaches allow flexible handling but require careful design to prevent divergence, typically favoring application-level strategies over strict transactional guarantees across nodes.

Recent developments include pgEdge's evolution since 2023, with the platform achieving full open-source status in 2025 and introducing enhancements like the Constellation release (v24.7, 2024) for improved throughput via parallel logical replication, and support for AI workloads at the edge through low-latency, multi-region replication. This enables distributed clusters to handle real-time inference data by scaling active-active nodes across locations while maintaining data residency compliance.
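
The delta-apply idea for numeric columns can be illustrated in a few lines of Python: instead of overwriting the local value with the remote row image, the replicator applies the remote change as an increment, so concurrent additions on different nodes both survive. This is a conceptual sketch of the technique, not pgEdge code.

```python
def overwrite_apply(local_value, remote_old, remote_new):
    # Conventional row replacement: the remote image clobbers concurrent local changes.
    return remote_new

def delta_apply(local_value, remote_old, remote_new):
    # Delta apply: replay the remote increment (new - old) on top of the local value.
    return local_value + (remote_new - remote_old)

# Both nodes start with a balance of 100; node A adds 10 locally, node B adds 5.
local_after_a = 110                               # state on node A after its own deposit
print(overwrite_apply(local_after_a, 100, 105))   # 105 -> node A's deposit is lost
print(delta_apply(local_after_a, 100, 105))       # 115 -> both deposits are preserved
```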

Oracle Database

Oracle Database provides multi-master replication primarily through the standalone Oracle GoldenGate product, enabling update-anywhere scenarios across multiple master sites for high availability and load distribution in enterprise environments; the legacy Advanced Replication feature (deprecated since Oracle Database 12c and desupported in 12.2) is no longer recommended. Oracle GoldenGate excels in heterogeneous multi-master replication, allowing bi-directional synchronization between Oracle databases and non-Oracle systems such as MySQL or SQL Server, and remains the preferred method as of Oracle AI Database 26ai (2025).

Oracle GoldenGate implements multi-master replication via Extract processes for change capture and Replicat processes for apply, using trail files to store and transport transactional data (DML and DDL) in a bi-directional manner across sites. In its Microservices Architecture, it supports fine-grained replication with system-managed sharding, where all shards remain writable and partially replicable within replication groups, making it suitable for active-active configurations. Conflict detection and resolution can be configured through the RESOLVECONFLICT clause in MAP parameters, employing user-defined procedural routines for custom logic or automatic discard of conflicting operations; automatic conflict detection and resolution (CDR) is available for Oracle-to-Oracle setups, using before images and timestamps to resolve update conflicts non-invasively.

GoldenGate facilitates global deployments with secure, low-latency replication over wide-area networks and enables zero-downtime upgrades by maintaining synchronization during database migrations or version transitions, such as from 19c to 26ai, without interrupting application access. Oracle recommends GoldenGate for all new multi-master implementations due to its flexibility and support for modern cloud and hybrid environments.

Microsoft SQL Server

Multi-master replication in Microsoft SQL Server is primarily implemented through peer-to-peer transactional replication, a feature introduced in SQL Server 2005 that enables multiple instances to act as both publishers and subscribers, allowing concurrent writes across nodes while maintaining data consistency. This approach builds on the foundation of transactional replication to provide scale-out and high-availability solutions, particularly suited for distributed environments where load balancing and redundancy are essential, assuming non-overlapping updates to avoid conflicts. Additionally, Always On availability groups, available since SQL Server 2012, support active-active configurations in read-scale scenarios, where secondary replicas can handle read workloads, though writes are directed to the primary for synchronization.

The synchronization process in peer-to-peer replication relies on transactional mechanisms, where changes are captured from the transaction log and propagated by the replication agents, which move transactions through the distributor to each peer node in near real time. For initial synchronization, a snapshot is generated by the Snapshot Agent, which creates a copy of the published schema and data, applied to new peers via the Distribution Agent to ensure all nodes start from the same state before ongoing transactional updates begin. This snapshot-based initialization minimizes setup time and supports topologies with up to 20 nodes, though it requires identical schemas across participants to avoid conflicts during replication.

Conflict detection in peer-to-peer setups (available since SQL Server 2008) identifies concurrent modifications across nodes and logs them for manual or application-level resolution, without built-in automatic resolution; it is designed for partitioned data where updates do not overlap, and detected conflicts generate alerts that may pause replication until addressed. For scenarios requiring automatic conflict resolution with overlapping updates, merge replication can be used instead, employing priority-based logic by default, where the higher-priority site wins, or custom resolvers via COM components or business logic handlers using criteria such as timestamps or data merging. The Merge Agent detects conflicts during synchronization by comparing row versions and applies resolutions immediately, logging unresolved cases for manual intervention if configured.

Key features of SQL Server's multi-master replication include support for bidirectional synchronization in cloud environments, such as integrating on-premises instances with Azure SQL Database via transactional replication, enabling seamless data flow across hybrid setups. Peer-to-peer topologies require all nodes to run SQL Server Enterprise Edition and use unique originator IDs for conflict detection, optimizing performance in scenarios like global data distribution while enforcing partition-aware publications to prevent cross-node conflicts.

Implementations in NoSQL Databases

Apache CouchDB

Apache CouchDB implements multi-master replication as a core feature through its HTTP-based Couch Replication Protocol (CRP), which enables bidirectional synchronization between multiple nodes without a central master. The protocol, part of the project since before CouchDB became an Apache top-level project in 2008, allows databases to replicate changes incrementally in either direction, supporting setups where any node can accept writes and propagate them to others. The replication process is initiated via HTTP requests, such as a POST to /_replicate, specifying source and target databases, and can be configured as one-way or mutual for multi-master scenarios.

The mechanism relies on a filter-based, optionally continuous replication model that uses update sequence identifiers (update_seq) to track changes efficiently. CouchDB employs changes feeds to detect modifications since the last checkpoint, batching and transferring only the latest revisions of documents, including deletions, while skipping unchanged or already-present items. Filters, defined via JavaScript functions or selectors, allow selective replication of documents based on criteria like document type or user roles, ensuring targeted sync in large-scale deployments. Continuous mode keeps replication active, polling for new changes at intervals, which facilitates near-real-time propagation across networks.

Conflict resolution in CouchDB leverages automatic revision trees, where each document maintains a directed acyclic graph (DAG) of revisions to preserve history and detect divergences during replication. When concurrent updates occur on different nodes, replication preserves all conflicting branches rather than overwriting them, keeping the losing branches accessible via the _conflicts field when documents are fetched with the conflicts=true query parameter. By default, CouchDB selects a deterministic winner based on revision ordering, but applications handle merging at the document level by fetching all open revisions with ?open_revs=all, resolving conflicts logically (e.g., via timestamps or custom rules), and submitting the merged version through _bulk_docs while pruning losing branches. This approach avoids silent data loss but requires developer intervention for complex merges, supporting CouchDB's emphasis on eventual consistency in distributed environments.

Designed primarily for offline-first applications, CouchDB's replication enables seamless data flow from servers to edge devices, such as mobile apps using companion libraries like PouchDB, allowing local writes during disconnection and automatic synchronization upon reconnection. This P2P-capable architecture scales horizontally across clusters, accommodating high-availability setups without single points of failure, and has been foundational for use cases like collaborative tools and data syncing since 2008.
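
The hedged sketch below shows what application-level conflict resolution against CouchDB's HTTP API can look like, using the conflicts=true and rev parameters mentioned above (individual PUT/DELETE calls are used here instead of _bulk_docs for simplicity). The server URL, database, document ID, and merge rule (a union of an items list) are placeholders for illustration.

```python
import requests

BASE = "http://localhost:5984/orders"      # assumed CouchDB server and database
DOC_ID = "order-1001"                      # hypothetical document

# Fetch the current winning revision along with any conflicting revisions.
doc = requests.get(f"{BASE}/{DOC_ID}", params={"conflicts": "true"}).json()

for losing_rev in doc.get("_conflicts", []):
    loser = requests.get(f"{BASE}/{DOC_ID}", params={"rev": losing_rev}).json()
    # Application-specific merge rule: here, union the 'items' lists from both branches.
    doc["items"] = sorted(set(doc.get("items", [])) | set(loser.get("items", [])))
    # Prune the losing branch so it no longer shows up as a conflict.
    requests.delete(f"{BASE}/{DOC_ID}", params={"rev": losing_rev})

# Store the merged document as the new winning revision (doc still carries its _rev).
requests.put(f"{BASE}/{DOC_ID}", json=doc)
```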

Amazon DynamoDB

Amazon DynamoDB implements multi-master replication through its global tables feature, which enables active-active replication across multiple AWS Regions. This setup allows applications to perform read and write operations on any replica table in the chosen Regions without requiring manual failover or application modifications, leveraging the standard DynamoDB API for seamless integration. Global tables ensure that data is automatically synchronized between replicas, providing low-latency access for globally distributed users while maintaining high durability.

The synchronization process in DynamoDB global tables relies on DynamoDB Streams to capture item-level changes made to a replica table in one Region and propagate them to replicas in other Regions. When an update occurs on any replica, the stream records the change, and DynamoDB's managed replication service applies it to all other replicas, typically within seconds, ensuring eventual consistency across the table. This stream-based approach supports multi-active writes, where each Region operates independently but converges to the same data state over time, monitored via CloudWatch metrics like ReplicationLatency. Developers can enable streams on the base table before converting it to a global table, facilitating efficient change propagation without additional infrastructure.

For conflict resolution, DynamoDB global tables employ a last-writer-wins strategy by default, where concurrent updates to the same item across Regions are reconciled based on the latest internal timestamp, ensuring all replicas eventually agree on a single version. This method prioritizes simplicity and performance in multi-master scenarios, though it may overwrite valid changes in high-contention cases. For more sophisticated needs, developers can implement application-level custom resolution logic using DynamoDB Streams to trigger AWS Lambda functions that process and merge conflicting updates before replication.

Launched in November 2017, the global tables feature has become a standard building block for resilient, multi-Region applications on DynamoDB, offering up to 99.999% availability across supported Regions to support business continuity and disaster recovery with minimal recovery point objectives. This serverless, managed replication eliminates the need for custom provisioning, allowing teams to focus on application logic while scaling to handle global workloads efficiently.
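
As a hedged sketch of how such a table might be set up programmatically with the AWS SDK for Python (boto3), the snippet below creates a table with streams enabled and then adds a replica Region; the table name, key schema, and Regions are illustrative, and IAM permissions and error handling are omitted.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Create the base table with streams enabled (new and old images are needed for replication).
ddb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="Orders")

# Add a replica in another Region, turning the table into a global table.
ddb.update_table(
    TableName="Orders",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```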

MongoDB

MongoDB implements replication through replica sets, which provide high availability and data redundancy in a primary-secondary architecture. In a replica set, one primary node handles all write operations, while secondary nodes asynchronously replicate data from the primary to maintain identical copies. Primary election occurs automatically using a consensus protocol when the current primary becomes unavailable, ensuring failover within seconds. This leader-follower model avoids true multi-master writes within a single replica set, as only the primary accepts writes to prevent conflicts.

To achieve scalability resembling multi-master replication, MongoDB extends replica sets via sharding, distributing data across multiple shards where each operates as an independent replica set with its own primary. Query routers (mongos) direct writes to the appropriate shard based on a shard key, allowing concurrent writes to different data partitions across multiple primaries. This setup enables horizontal scaling for large datasets and high-throughput workloads, with each shard maintaining its own replication for fault tolerance. Sharded clusters require config servers (also deployed as a replica set) to manage cluster metadata and a balancer to evenly distribute chunks of data.

Data synchronization in replica sets relies on the oplog (operations log), a capped collection that records all write operations performed on the primary. Secondary nodes tail the oplog of a source member—typically the primary or another secondary—applying operations in batches using multiple threads while preserving the original order. This asynchronous process minimizes replication lag but can result in temporary inconsistencies during reads from secondaries. Initial sync for new members involves copying the full dataset and applying oplog changes, using either a logical or file-copy-based method.

Conflict resolution is inherently limited in MongoDB's design, as the single-primary model per replica set (or per shard in sharded setups) prevents concurrent writes to the same document. Instead, MongoDB uses write concern configurations to ensure durability and reduce the risk of lost writes; for example, { w: "majority" } requires acknowledgment from a majority of data-bearing voting members before considering the operation successful, providing resilience against minority failures. There is no built-in automatic merging of conflicting updates, as the system relies on application-level handling for any rollbacks during failover or network partitions. In sharded environments, transactions across shards (supported since version 4.2) use two-phase commits to maintain atomicity, further mitigating potential inconsistencies.

MongoDB supports multi-data center deployments by distributing replica set members across geographic locations, enhancing resilience against regional outages. For instance, a three-member replica set can place the primary in one data center with secondaries in others, using member priority settings to influence elections and tags for read preferences. Since version 3.6 (released in 2017), change streams provide a real-time interface for applications to subscribe to data changes via the oplog, facilitating event-driven architectures in distributed setups without polling. This feature leverages the replication infrastructure to deliver ordered events for inserts, updates, and deletes across the cluster.
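
A hedged PyMongo sketch of the majority write concern and change streams discussed above is shown below; the connection string, database, and collection names are placeholders.

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

# Placeholder connection string for a three-member replica set named rs0.
client = MongoClient("mongodb://host1,host2,host3/?replicaSet=rs0")

# Writes are acknowledged only after a majority of data-bearing voting members apply them.
orders = client.appdb.get_collection(
    "orders", write_concern=WriteConcern(w="majority", wtimeout=5000)
)
orders.insert_one({"order_id": 1001, "status": "created"})

# Change streams: subscribe to ordered insert/update/delete events via the oplog.
with orders.watch() as stream:          # blocks until a change event arrives
    for change in stream:
        print(change["operationType"], change.get("documentKey"))
        break                           # stop after the first event in this example
```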

CockroachDB

CockroachDB, launched in 2015, is a cloud-native distributed SQL database that implements multi-master replication through native multi-active clusters, enabling all nodes to handle reads and writes simultaneously while maintaining strong consistency. This design leverages the Raft consensus algorithm to replicate data across regions: the database is divided into key ranges, each forming a Raft group with a replication factor typically set to three for fault tolerance. Each range elects a leader via Raft to coordinate writes, requiring a quorum (e.g., two out of three replicas) to commit changes and tolerating up to (replication factor − 1)/2 failures. The system supports horizontal scaling by automatically rebalancing ranges across nodes using efficient snapshots transferred over the network when clusters expand or contract.

Synchronization in CockroachDB occurs through distributed transactions that achieve serializable isolation using multi-version concurrency control (MVCC). Transactions are timestamped and coordinated by a leaseholder (often the Raft leader), which proposes writes to the Raft log; these logs are then replayed on follower replicas to maintain consistency. MVCC allows multiple versions of data to coexist, enabling reads without blocking writes and supporting atomic commits across ranges via a two-phase commit protocol integrated with Raft. This process ensures that all replicas remain synchronized, with nodes recovering by replaying committed log entries upon rejoining the cluster.

Conflicts are resolved automatically through transaction ordering and aborts, eliminating the need for manual intervention or merges. Timestamps enforce a total order on transactions, detecting serialization anomalies during the commit phase; conflicting transactions are aborted and retried, while MVCC write intents track ongoing updates to prevent dirty reads. This leader-coordinated approach, combined with Raft's consensus, prioritizes consistency (CP in CAP terms) over availability during network partitions, ensuring no divergent states across masters.

CockroachDB emphasizes horizontal scalability and compatibility with the PostgreSQL wire protocol (version 3.0), allowing seamless integration with existing PostgreSQL tools and drivers without major application rewrites. In 2025, version 25.2 introduced enhancements such as vector indexing for AI workloads and over 41% performance gains in distributed queries, supporting scalable data pipelines for AI object storage and metadata management.
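
Because CockroachDB speaks the PostgreSQL wire protocol and surfaces serialization conflicts as retryable errors (SQLSTATE 40001), a standard PostgreSQL driver can be used with a retry loop, as in the hedged Python sketch below; the connection string, table, and column names are placeholders.

```python
import psycopg2
from psycopg2 import errors

# Placeholder connection string; CockroachDB listens on port 26257 by default.
conn = psycopg2.connect("postgresql://app@localhost:26257/bank")

def transfer(src, dst, amount, max_retries=5):
    """Run a serializable transaction, retrying when it is aborted due to contention."""
    for attempt in range(max_retries):
        try:
            with conn:                      # commits on success, rolls back on exception
                with conn.cursor() as cur:
                    cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                                (amount, src))
                    cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                                (amount, dst))
            return
        except errors.SerializationFailure:  # SQLSTATE 40001: safe to retry the transaction
            continue
    raise RuntimeError("transfer did not commit after retries")
```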


  53. [53]
  54. [54]
    Manual Conflict Detection and Resolution - Oracle Help Center
    Conflict detection and resolution is required in active-active configurations, where Oracle GoldenGate must maintain data synchronization among multiple ...Missing: master | Show results with:master
  55. [55]
    [PDF] Maximizing Availability with Oracle Database
    This paper introduces the concepts of Oracle Maximum Availability Architecture, including the high availability features and disaster recovery features utilized ...
  56. [56]
    Oracle Database 19c - High Availability
    This guide includes several database scenarios such as creating, recovering ... Provides dynamic load balancing, failover, and centralized service management for ...
  57. [57]
    Peer-to-Peer Transactional Replication - SQL Server | Microsoft Learn
    Aug 21, 2025 · Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server instances, also referred to ...Peer-to-Peer Topologies · Configuring Peer-to-Peer...
  58. [58]
    What is an Always On availability group? - SQL Server Always On
    Oct 10, 2025 · Two types of availability replicas exist: a single primary replica, which hosts the primary databases, and one to eight secondary replicas, each ...Availability Modes · Basic · Prerequisites, Restrictions...
  59. [59]
    Transactional Replication - SQL Server - Microsoft Learn
    Aug 21, 2025 · The Snapshot Agent prepares snapshot files containing schema and data of published tables and database objects, stores the files in the snapshot ...Overview · Configure Tls 1.3 Encryption · Modifying Data And The Log...
  60. [60]
    Advanced conflict detection & resolution (Merge) - SQL Server
    Sep 27, 2024 · Merge replication offers a variety of methods to detect and resolve conflicts. For most applications, the default method is appropriate.Conflict Detection · Conflict Resolution
  61. [61]
    How Merge Replication Detects and Resolves Conflicts - SQL Server
    Jul 21, 2025 · Conflicts are resolved automatically and immediately by the Merge Agent unless you have chosen interactive conflict resolution for the article.
  62. [62]
    Replication to Azure SQL Database - Microsoft Learn
    May 9, 2025 · This article describes the use of transactional replication to push data to Azure SQL Database or Fabric SQL database.
  63. [63]
    Apache CouchDB
    The Couch Replication Protocol lets your data flow seamlessly between server clusters to mobile phones and web browsers, enabling a compelling offline-first ...CouchDB Bylaws · Code of Conduct · Fauxton Visual Guide · Current Releases
  64. [64]
    2.1. Introduction to Replication - CouchDB docs
    Replication involves a source and a destination database, which can be on the same or on different CouchDB instances.
  65. [65]
    2.3. Replication and conflict model - CouchDB docs
    When working on a single node, CouchDB will avoid creating conflicting revisions by returning a 409 Conflict error. This is because, when you PUT a new version ...
  66. [66]
  67. [67]
    Global tables - multi-active, multi-Region replication
    DynamoDB global tables provide multi-Region, multi-active database replication for fast, localized performance and high availability in global applications.
  68. [68]
    Global tables: How it works - Amazon DynamoDB
    A replica table (or replica, for short) is a single DynamoDB table that functions as a part of a global table. Each replica stores the same set of data items.
  69. [69]
    How DynamoDB global tables work - AWS Documentation
    An MREC global table can have a replica in any Region where DynamoDB is available, and can have as many replicas as there are Regions in the AWS partition.
  70. [70]
    AWS Launches Amazon DynamoDB Global Tables
    Nov 29, 2017 · Global Tables is available today in five regions: US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland, and EU (Frankfurt). To ...
  71. [71]
  72. [72]
  73. [73]
    Replica Set Members - Database Manual - MongoDB Docs
    Understand the roles and configurations of primary, secondary, and arbiter members in a MongoDB replica set for redundancy and high availability.
  74. [74]
    Sharding - Database Manual - MongoDB Docs
    Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high throughput ...Hashed Sharding · Ranged Sharding · Sharding Reference · Glossary
  75. [75]
  76. [76]
    Replica Set Data Synchronization - Database Manual - MongoDB
    Understand how secondary members in a replica set synchronize data from a source member using initial sync and ongoing replication processes.
  77. [77]
  78. [78]
    Write Concern - Database Manual - MongoDB Docs
    ### Summary of Write Concern in MongoDB
  79. [79]
    Replica Sets Distributed Across Two or More Data Centers - MongoDB
    Distribute replica set members across multiple data centers to enhance redundancy and fault tolerance against data center failures.
  80. [80]
    Change Streams - Database Manual - MongoDB Docs
    Change Streams with Document Pre- and Post-Images. Starting in MongoDB 6.0, you can use change stream events to output the version of a document before and ...
  81. [81]
    Exclusive: IBM tightens partnership with Cockroach Labs to fuel ...
    Oct 7, 2025 · Cockroach Labs was formed in 2015. PostgreSQL compatibility enables developers to run applications without significant rewrites.Missing: pipelines | Show results with:pipelines
  82. [82]
    Multi-Active Availability - CockroachDB
    Multi-active availability is CockroachDB's version of high availability (keeping your application online in the face of partial failures).
  83. [83]
    Replication Layer - CockroachDB
    The replication layer of CockroachDB's architecture copies data between nodes and ensures consistency between these copies by implementing our consensus ...
  84. [84]
    [PDF] Leader or Majority: Why have one when you can have both ...
    The Replica layer proposes Raft commands for replica- tion. The MVCC and the RocksDB layer manage the underlying data, as key-value pairs. CockroachDB uses ...<|control11|><|separator|>
  85. [85]
    PostgreSQL Compatibility - CockroachDB
    CockroachDB is compatible with version 3.0 of the PostgreSQL wire protocol (pgwire) and works with the majority of PostgreSQL database tools such as DBeaver, ...Missing: 2015 2025 AI pipelines
  86. [86]
    CockroachDB 25.2: A Decade of Innovation Continues with Major ...
    Jun 3, 2025 · Delivers >41% performance gains, vector indexing for AI, enhanced security, and compliance-ready features for multi-cloud scale.
  87. [87]
    How to Build Scalable Metadata Management for AI Object Storage
    Oct 2, 2025 · Learn how CockroachDB powers scalable, consistent, and compliant metadata management for AI object storage—supporting high-concurrency, ...Missing: master replication