
Oracle RAC

Oracle Real Application Clusters (RAC) is an option for the Oracle Database that enables multiple server instances to access a single shared database simultaneously, allowing the database to function as a single logical unit while distributing workload across clustered nodes for enhanced scalability and reliability. This architecture contrasts with traditional single-instance Oracle Databases, where only one instance manages the database at a time, by leveraging shared storage and interconnect technology to ensure data consistency and high availability. At its core, Oracle RAC relies on Oracle Clusterware as the foundational infrastructure to manage the cluster, binding multiple servers so they operate as a unified system, along with Oracle Automatic Storage Management (ASM) for efficient shared storage handling. Key mechanisms include the Global Cache Service (GCS) and Global Enqueue Service (GES), which facilitate Cache Fusion to transfer data blocks between instances over a private interconnect, maintaining cache coherency without constant disk access. This setup eliminates single points of failure, supports mission-critical applications without requiring application code modifications, and provides scalability by adding or removing nodes dynamically. Oracle RAC delivers high availability through features like automatic failover, transparent application continuity, and zero-downtime rolling maintenance, protecting against hardware failures, software issues, and planned outages. It scales horizontally for diverse workloads, including online transaction processing (OLTP), analytics, AI, and complex enterprise applications such as SAP, Oracle Fusion Applications, and Salesforce, achieving high throughput and fast response times. Widely adopted in sectors such as banking, telecommunications, and retail, Oracle RAC ensures 24/7 operation for demanding, always-on environments.

Introduction

Definition and Core Concepts

Oracle Real Application Clusters (RAC) is Oracle's shared-everything clustering solution for the Oracle Database, enabling multiple server instances to operate against a single physical database for enhanced scalability and availability. Introduced in 2001 with Oracle Database 9i Release 1, RAC replaced the prior Oracle Parallel Server (OPS), which depended on disk-based locking mechanisms that limited performance in multi-node environments. This architecture allows a cluster of independent servers, or nodes, to function as a unified system, sharing access to all database resources without partitioning data across nodes. The core structure of Oracle RAC centers on shared storage for critical database components, including data files, control files, and redo logs, which are accessible to every node via cluster-aware file systems or storage area networks. Each node runs its own database instance, complete with dedicated memory structures such as the System Global Area (SGA) for caching data blocks and the Program Global Area (PGA) for session-specific operations, along with background processes for local management. Instances coordinate and synchronize through a private, high-speed interconnect network, which facilitates communication for cache coherency and data consistency. In contrast to a single-instance deployment, where a solitary instance handles all database operations, RAC supports active-active scaling across nodes, distributing workload dynamically to improve throughput and availability without necessitating application code changes. This setup eliminates single points of failure, as the database remains operational on surviving nodes if one fails, with synchronization primarily managed via the Cache Fusion protocol for efficient block transfers between caches.

Primary Objectives and Benefits

Oracle Real Application Clusters (RAC) primarily aims to deliver high availability by tolerating node failures without interrupting database operations, enabling horizontal scalability through the addition of cluster nodes, and enhancing performance via parallel query processing across multiple instances. This architecture allows organizations to maintain continuous access to data even during hardware or software failures, supporting mission-critical applications that require uninterrupted service. Key benefits include fault isolation, where a failure on one node does not impact others, ensuring that workloads continue seamlessly on surviving instances; load balancing across nodes to optimize resource utilization and prevent bottlenecks; and integration with Oracle Data Guard for robust disaster recovery, allowing rapid failover to a standby site while maintaining data consistency. These benefits rest on shared storage for concurrent data access and Cache Fusion for low-latency block transfers between nodes, further minimizing disruptions. In modern versions, such as 23ai (released 2024), Oracle RAC supports clusters with over 100 nodes, achieving near-linear scalability without application modifications, and reduces downtime to under 15 seconds through fast failover mechanisms. It is particularly suited for online transaction processing (OLTP) systems demanding 24/7 uptime, such as banking and e-commerce platforms, as well as large-scale analytics workloads that benefit from parallel execution without data partitioning.

Prerequisites

Hardware and Software Requirements

Oracle Real Application Clusters (RAC) requires robust hardware configurations to ensure availability and scalability across multiple nodes. Each node must utilize symmetric multiprocessing (SMP) servers or systems capable of supporting the cluster's workload, with a minimum of 8 GB of RAM for Oracle Grid Infrastructure installations and at least 1 GB (recommended 2 GB or more) for the Oracle Database software. Shared storage is essential, typically implemented via a storage area network (SAN) or network-attached storage (NAS) with multipath I/O for redundancy and performance, where Oracle Automatic Storage Management (ASM) is recommended for managing database files, control files, SPFILEs, redo logs, and recovery files. Networking demands at least one public network interface card (NIC) per node for client access and a dedicated private interconnect using high-speed Ethernet, with a minimum of 1 Gbps and recommendations for 10 Gbps or faster to handle Cache Fusion traffic efficiently. On the software side, Oracle RAC necessitates the Enterprise Edition of Oracle Database with the RAC option licensed, alongside Oracle Grid Infrastructure for cluster management, which must be installed in a separate Oracle home from the database software. For Oracle Database 21c and later (including the current 23ai release as of 2025), the multitenant architecture with a container database (CDB) is mandatory. Compatible operating systems include certified versions of Linux (such as Oracle Linux 8 and later, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server), Microsoft Windows Server (e.g., 2019, 2022, 2025), and UNIX systems such as IBM AIX and Oracle Solaris, ensuring identical OS configurations across all nodes to avoid compatibility issues. All hardware and software must comply with Oracle's certification matrices to guarantee stability and supportability. The Oracle Hardware Compatibility List (via My Oracle Support) verifies compatible servers, storage, and network adapters from major hardware vendors, while the software certification matrix outlines supported OS patches and versions. Virtualization environments are supported, including Oracle VM, KVM, and Microsoft Hyper-V, allowing RAC deployment on certified virtual machines with performance considerations for shared resources. The minimum cluster configuration consists of 2 nodes for basic high availability, with certified configurations extending to 16 or more physical nodes depending on the storage protocol—such as up to 30 nodes with iSCSI over a private network—though practical limits are influenced by interconnect bandwidth and workload characteristics.
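As an illustration of how such prerequisites are typically validated before installation, the following is a minimal sketch using the Cluster Verification Utility shipped with the Grid Infrastructure media; the node names node1 and node2 are invented placeholders, not values from this article.

```bash
#!/bin/bash
# Hypothetical pre-installation checks, run as the grid software owner from
# the staged Grid Infrastructure media. Node names are placeholders.

# Verify OS version, kernel parameters, packages, memory, and SSH user
# equivalence on all candidate nodes before installing Grid Infrastructure.
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

# Confirm every candidate node is reachable over the public network.
./runcluvfy.sh comp nodereach -n node1,node2 -verbose
```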

Cluster Setup Essentials

Setting up an Oracle Real Application Clusters (RAC) environment requires meticulous planning to ensure high availability, scalability, and performance. Initial planning steps involve conducting a capacity assessment to evaluate workload requirements, including I/O operations per second (IOPS) and throughput needs for the shared storage subsystem, which directly impacts database performance under concurrent access from multiple nodes. Redundancy must be incorporated into the shared storage design, typically using RAID 1+0 configurations to provide both mirroring for redundancy and striping for balanced load distribution across disks. Additionally, integrating a backup strategy from the outset is essential, leveraging tools like Recovery Manager (RMAN) to support consistent backups of the shared database while accounting for cluster-wide synchronization. Storage in Oracle RAC centers on Automatic Storage Management (ASM), which simplifies the management of shared disks by creating disk groups that span multiple disks for pooled storage resources. ASM disk groups are used to store database files, redo logs, and control files, ensuring uniform access and automatic load balancing. Critical components include voting disks, which maintain cluster membership by recording node status—Oracle recommends configuring 3 to 5 voting disks in an ASM disk group with normal or high redundancy to tolerate failures without loss of quorum. The Oracle Cluster Registry (OCR) stores cluster configuration data and must be placed in a separate disk group or shared file system, with redundancy provided through mirroring to prevent single points of failure. Note that starting with Oracle Database 21c, third-party clusterware is no longer supported. Network zoning is fundamental to isolate traffic types and enhance security and performance in Oracle RAC. The public network handles client connections and administrative access, requiring Gigabit Ethernet or faster links with low latency to support SQL*Net traffic. In contrast, the private interconnect is dedicated to internal communication, such as Cache Fusion block transfers between instances, and should use a separate high-bandwidth, low-latency network such as 10 Gigabit Ethernet or InfiniBand to minimize contention. The private interconnect uses UDP ports (e.g., dynamically assigned high ports for Cache Fusion, fixed ports like 1638 for Cluster Synchronization Services). To avoid interference, VLANs are employed for separation, ensuring the private interconnect operates in an isolated segment that prevents public traffic from impacting heartbeat or Cache Fusion messaging. Security basics in cluster setup focus on node authentication and controlled access to prevent unauthorized participation. The Oracle Advanced Security Option (ASO) enables strong authentication mechanisms, such as Kerberos or PKI-based certificates, to verify node identities during cluster join operations. Firewall configurations must allow specific ports: port 1521 for SQL*Net listener connections from clients, and ensure the private interconnect's ports are open between nodes while blocking external access.
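As a quick illustration of inspecting the shared cluster components described above, the following is a minimal sketch; it assumes the Grid Infrastructure environment is already set for the grid user and that the cluster is up, both assumptions for this example.

```bash
#!/bin/bash
# Hypothetical post-setup checks of voting disks, OCR, and ASM disk groups,
# run as the grid software owner on any cluster node.

# List the voting disks and the ASM disk group that holds them.
crsctl query css votedisk

# Verify the integrity and storage location of the Oracle Cluster Registry.
ocrcheck

# Show ASM disk groups (e.g., a group holding database files) along with
# their redundancy type and free space.
asmcmd lsdg
```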

Architecture

Key Components

Oracle Clusterware serves as the foundational cluster management software for Oracle Real Application Clusters (RAC), providing a portable infrastructure that binds multiple servers to operate as a single logical system. It manages essential cluster resources, including nodes, virtual IP (VIP) addresses, database services, and listeners, ensuring high availability and automated failover. Key subcomponents include Cluster Ready Services (CRS), which oversees the lifecycle of cluster resources stored in the Oracle Cluster Registry (OCR); Cluster Synchronization Services (CSS), which maintains node membership and synchronizes cluster operations to prevent split-brain scenarios; and Event Management (EVM), which monitors and propagates cluster events for proactive management. These elements collectively enable seamless resource allocation and recovery across the cluster. Shared storage resources form the core storage layer in RAC, allowing multiple database instances to access the same physical database simultaneously. These include data files, control files, parameter files (SPFILEs), and redo log files, all residing on cluster-aware shared disks that meet prerequisites such as high-speed access from all nodes. Automatic Storage Management (ASM) instances manage this shared storage, dynamically allocating disk groups and optimizing I/O performance for RAC environments. Oracle Net listeners, configured per node or as shared listeners, facilitate client connections, while the Single Client Access Name (SCAN) enhances scalability by resolving to three IP addresses for connection load balancing and failover across instances. Oracle RAC introduces specialized background processes to handle distributed operations, distinguishing it from single-instance databases. Multiple Lock Manager Server (LMS) processes per instance manage global cache coordination, including the transfer of blocks between instances, which is critical to Cache Fusion's operation. The Lock Manager Daemon (LMD) process oversees distributed lock management, processing remote enqueue requests, detecting deadlocks, and ensuring resource consistency across the cluster. Additional processes like the Lock Monitor (LMON) support global enqueue monitoring and recovery, while the lock process (LCK) handles instance-level locks and cross-instance calls. These processes operate alongside standard database background processes to maintain coherency and performance in a multi-node setup. Oracle Grid Infrastructure integrates Oracle Clusterware with Oracle ASM, creating a unified platform for automated cluster and storage management in RAC deployments. Installed on each node, it simplifies administration by combining cluster management with disk provisioning, supporting up to 128 instances per database and enabling features like dynamic volume management. This integration reduces operational complexity, allowing administrators to treat the cluster as a single entity for provisioning and maintenance tasks.
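For a concrete view of these components on a running cluster, the following is a minimal sketch; the database name orcl is an assumed placeholder, and the exact process names vary by release.

```bash
#!/bin/bash
# Hypothetical inspection of Clusterware-managed resources and RAC-specific
# background processes on one node of a running cluster.

# Tabular status of all cluster resources (VIPs, listeners, ASM, databases).
crsctl stat res -t

# Show the SCAN name and its VIPs registered with the cluster.
srvctl config scan

# Show node applications (node VIPs, Oracle Net listeners, ONS) per node.
srvctl status nodeapps

# List RAC-specific background processes (LMS, LMD, LMON, LCK) for an
# instance of the assumed database "orcl" running on this node.
ps -ef | egrep 'ora_(lms|lmd|lmon|lck)[0-9]*_orcl' | grep -v grep
```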

Cache Fusion Protocol

Cache Fusion is the core protocol in Oracle Real Application Clusters (RAC) that enables efficient synchronization of data blocks across multiple database instances by transferring them directly between buffer caches over the private cluster interconnect, thereby bypassing disk I/O and minimizing latency. This mechanism logically merges the buffer caches of all instances into a single, shared global cache, allowing instances to access data as if it were local without frequent disk contention. By shipping blocks in memory rather than reading them from shared disk, Cache Fusion significantly reduces reader/writer conflicts and overall system overhead, enhancing scalability for high-throughput workloads. The protocol operates in two primary modes to handle read and write requests while maintaining data consistency. For read operations, Cache Fusion transfers consistent read (CR) blocks, which are versions of data blocks constructed to reflect a consistent view at the time of the request, avoiding the need for rollback application on the requesting instance. In write scenarios, it transfers current (dirty) blocks—those modified but not yet written to disk—accompanied by lock mode conversions, such as upgrading from shared (S) to exclusive (X) mode to ensure only one instance can modify the block at a time. These conversions prevent concurrent modifications and preserve block integrity across the cluster. The Global Cache Service (GCS), a key component integrated with the Global Resource Directory (GRD), manages the modes and locations of data blocks to enforce cache coherency. GCS tracks blocks in modes including null (no holder), shared (multiple readers), and exclusive (single modifier), along with past images of dirty blocks retained for recovery and undo reconstruction. It coordinates access privileges by monitoring resource states and facilitating block transfers via Lock Manager Server (LMS) processes, which handle inter-instance messaging. Additionally, GCS employs monitoring mechanisms through these processes to continuously track resource availability and instance liveness, ensuring prompt detection of failures and reallocation of resources. Performance of Cache Fusion is evaluated using key metrics such as gc cr blocks received and gc cr blocks sent for consistent read transfers, and gc current blocks received and gc current blocks sent for current block transfers, which indicate the volume of inter-instance block traffic. These statistics, available in views like V$SYSSTAT, help identify bottlenecks in global cache activity. Fusion efficiency, often expressed as the ratio of CR blocks to total global cache blocks multiplied by 100, provides a measure of how effectively the cluster avoids disk access, with higher percentages signaling optimal cache utilization. Cache Fusion was introduced in Oracle Database 9i as a groundbreaking feature to enable true shared-cache architecture in RAC, replacing earlier disk-based coordination methods. Subsequent enhancements, particularly from Oracle Database 19c onward, have incorporated support for Remote Direct Memory Access (RDMA) over protocols like RoCE, allowing direct memory-to-memory transfers with reduced CPU involvement and further latency improvements on compatible hardware.
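To make these metrics concrete, here is a minimal sketch of querying global cache statistics from SQL*Plus on one instance; statistic names vary slightly across releases (older versions report "sent" where newer ones report "served"), which is why a wildcard pattern is used.

```bash
#!/bin/bash
# Hypothetical check of Cache Fusion block-transfer statistics on one instance.
sqlplus -s / as sysdba <<'SQL'
SET PAGESIZE 50 LINESIZE 120
COLUMN name FORMAT A45

-- Volume of consistent-read and current blocks moved over the interconnect.
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'gc cr blocks%'
   OR  name LIKE 'gc current blocks%'
ORDER  BY name;
SQL
```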

Networking and Connectivity

Interconnect Design

The interconnect in Oracle Real Application Clusters (RAC) serves as a dedicated private network that provides a high-bandwidth, low-latency communication pathway between nodes, enabling efficient inter-instance data transfers via Cache Fusion, heartbeats for cluster membership monitoring, and other essential traffic. This infrastructure is crucial for maintaining data consistency and coordination across multiple database instances, treating the cluster as a unified system. Common implementations utilize 10, 25, or 100 Gigabit Ethernet (GbE) or InfiniBand to meet these performance demands. To ensure high availability, Oracle RAC employs redundancy through dual or multiple interconnect interfaces per node, configured without traditional bonding but leveraging High Availability IP (HAIP) for automatic failover and load balancing. HAIP dynamically assigns virtual IP addresses across available interfaces on different subnets, allowing seamless traffic redirection if a physical link or switch fails, thus preventing single points of failure. The protocol stack primarily uses UDP for its efficiency in handling the bursty, high-volume traffic patterns, with support for Reliable Datagram Sockets (RDS) over InfiniBand as an alternative. Bandwidth requirements for the interconnect start at a minimum of 1 Gb/s, but Oracle recommends 10 Gb/s or higher to support intensive workloads without bottlenecks. Latency must be kept low, ideally under 1 ms round-trip, to optimize Cache Fusion performance and minimize global cache waits. These specifications ensure that the interconnect functions effectively as a shared-memory-channel equivalent for the distributed database environment. Configuration of the interconnect involves assigning a private IP address range exclusive to cluster communication, separate from public networks, to isolate and prioritize internal traffic. Enabling jumbo frames with a maximum transmission unit (MTU) of 9000 bytes is recommended across all components, including host adapters, drivers, and switches, to reduce overhead and improve throughput for large block transfers. Starting with Oracle Database 12c, integration with RDMA over Converged Ethernet (RoCE) enhances performance by enabling kernel-bypass communication, particularly in environments supporting RDMA protocols like RDSv3.
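A minimal sketch of inspecting the interconnect configuration on an existing cluster follows; the interface name eth1 and the peer hostname node2-priv are assumed placeholders for whatever NIC and address carry the private network.

```bash
#!/bin/bash
# Hypothetical inspection of the private interconnect configuration.

# List network interfaces registered with Oracle Clusterware, their subnets,
# and their role (public vs. cluster_interconnect).
oifcfg getif

# Confirm jumbo frames: the private NIC (placeholder name eth1) should report
# an MTU of 9000 if jumbo frames are enabled end to end.
ip link show eth1 | grep -o 'mtu [0-9]*'

# Quick latency sample against the peer node's private address.
ping -c 10 node2-priv
```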

Name Resolution and VIPs

In Oracle Real Application Clusters (RAC), name resolution for client connections occurs over the public network, where clients access database instances without needing to know specific node details. This setup relies on virtual IP addresses (VIPs) and Domain Name System (DNS) configurations to ensure seamless connectivity and high availability. Node VIPs are floating IP addresses hosted by Oracle Clusterware on the public network interface of each cluster node, allowing clients to connect directly to an instance via these addresses. By using VIPs, Oracle RAC achieves sub-second failover for client connections; if a node fails, the VIP migrates to another available node, preventing the typical TCP timeout delays of up to several minutes that would occur with static IP binding. To simplify client access and enable load balancing across the cluster, Oracle RAC introduced the Single Client Access Name (SCAN) in release 11g Release 2. SCAN is a DNS-based alias that resolves to three IP addresses, each associated with a SCAN VIP and listener, distributing incoming connections evenly without requiring clients to specify individual VIPs. These SCAN VIPs function similarly to node VIPs but can relocate to any node in the cluster, providing a stable, single endpoint for clients regardless of node additions or failures. The SCAN enhances availability and scalability, as it supports up to three addresses for redundancy and load distribution, and integrates with the cluster's public network zoning to maintain consistent access. For automated management of VIPs and hostnames in larger or dynamic environments, Oracle RAC introduced the Grid Naming Service (GNS) in release 11g Release 2 (11.2). GNS operates as a cluster-managed DNS service, using a static GNS VIP delegated by the external DNS to handle dynamic resolution of cluster resources, including node VIPs and SCAN addresses. This integration reduces administrative overhead by automatically updating DNS records for VIP migrations during node additions, removals, or failures, ensuring clients always resolve to active resources without manual intervention. Client applications connect to Oracle RAC using standard protocols like JDBC or ODBC, configured to target the SCAN name for transparent failover and load balancing. For instance, a JDBC URL might specify the SCAN name and database service name, allowing the driver to select an optimal instance based on load-balancing preferences while receiving Fast Application Notification (FAN) events for real-time updates on cluster changes. FAN, part of the Oracle RAC framework, notifies applications via APIs or callbacks about service relocations or instance failures, enabling proactive reconnection without full session loss. This configuration supports ODBC drivers similarly, promoting workload balancing and rapid recovery in enterprise deployments.
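As an illustration of SCAN-based connectivity, the following is a minimal sketch; the SCAN alias rac-scan.example.com, the service name sales_svc, and the SYS_PASSWORD environment variable are invented placeholders, not values from this article.

```bash
#!/bin/bash
# Hypothetical client-side connectivity check against a RAC SCAN endpoint.

SCAN_HOST="rac-scan.example.com"   # placeholder SCAN alias resolving to 3 VIPs
SERVICE="sales_svc"                # placeholder database service name

# A thin-driver JDBC URL targeting the SCAN would take the form:
#   jdbc:oracle:thin:@//rac-scan.example.com:1521/sales_svc

# Confirm the SCAN resolves to multiple addresses in DNS.
nslookup "$SCAN_HOST"

# Connect with SQL*Plus easy connect; any instance offering the service can
# accept the session. SYS_PASSWORD is a placeholder credential variable.
sqlplus -s "system/${SYS_PASSWORD}@//${SCAN_HOST}:1521/${SERVICE}" <<'SQL'
SELECT instance_name, host_name FROM v$instance;
SQL
```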

Implementation and Operations

Deployment Process

The deployment of Oracle Real Application Clusters (RAC) begins with preparing the operating system on all cluster nodes to meet the necessary prerequisites, such as installing required packages, configuring user accounts like the grid and oracle software owners, setting kernel parameters for shared memory and semaphores, and ensuring SSH equivalence for passwordless communication between nodes. These steps ensure compatibility and secure inter-node operations before proceeding to software installation. Next, Oracle Grid Infrastructure is deployed using the Oracle Universal Installer (OUI), which is run on the first node in graphical or silent mode to install Oracle Clusterware and Oracle Automatic Storage Management (ASM) across the cluster. During the installation, OUI prompts for cluster node selection, virtual IP addresses, and storage options; it then copies the software to the remote nodes and requires execution of the root.sh script on each node as the root user to configure system-level components like the Cluster Ready Services (CRS). Following Grid Infrastructure installation, ASM disk groups are created using the ASM Configuration Assistant (ASMCA) or through OUI options to provision shared storage for the database files, voting disks, and OCR. The database software is then installed in RAC mode using OUI in a "software only" configuration, selecting the Enterprise Edition and specifying the cluster nodes, with Oracle base and home directories distinct from the Grid Infrastructure home. After software installation, the root.sh script is run on all nodes if not automated during OUI. To create the RAC database, the Database Configuration Assistant (DBCA) is invoked from the database home, choosing the "Oracle Real Application Clusters database" template, specifying instances per node, and configuring parameters like memory allocation and file placement in ASM disk groups. DBCA automates the creation of database instances, services, and per-instance redo and undo structures across nodes. Cluster resources, including databases and instances, are managed post-deployment using the Server Control Utility (SRVCTL), which allows starting, stopping, and monitoring RAC components from any node. For maintenance, Oracle RAC supports rolling upgrades through a two-stage process: first, upgrading the Grid Infrastructure software node-by-node while maintaining availability, followed by applying patches or database upgrades in batches to minimize downtime. This approach leverages quorum-based voting disk management, where a majority of voting disks must remain accessible to sustain cluster operations during the upgrade. Verification of the deployment ensures all components are operational; the Cluster Verification Utility (CLUVFY) is run with commands like cluvfy stage -post crsinst -n all to check the post-installation status of Clusterware and the interconnect. Additionally, crsctl check crs confirms the status of the Cluster Ready Services, CSS, and EVM daemons on each node, while srvctl status database -d db_name verifies database instance availability across the cluster. These tools provide comprehensive diagnostics to validate the RAC environment before production use.
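Tying these verification steps together, here is a minimal post-deployment check script; the database name orcl is an assumed placeholder.

```bash
#!/bin/bash
# Hypothetical post-deployment validation of a RAC cluster.
set -e

DB_NAME="orcl"   # placeholder unique database name

# Post-installation checks of Clusterware, OCR, voting disks, and interconnect.
cluvfy stage -post crsinst -n all -verbose

# Confirm the CRS, CSS, and EVM daemons are online on this node.
crsctl check crs

# Overall Clusterware stack status across every node.
crsctl check cluster -all

# Verify that all database instances are running on their assigned nodes.
srvctl status database -d "$DB_NAME"
```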

Management and Diagnostics

Oracle Real Application Clusters (RAC) management involves a suite of command-line utilities and repositories designed to administer cluster resources, instances, and services efficiently. The Server Control Utility (SRVCTL) serves as the primary interface for managing Oracle RAC databases, enabling administrators to start, stop, and relocate database instances, services, and other cluster resources from a centralized command line. For instance, commands like srvctl start database -d db_name initiate all instances of a specified database across the cluster, while srvctl stop instance -d db_name -i instance_name halts a specific instance without affecting others. Complementing SRVCTL, the Oracle Clusterware Control (CRSCTL) utility provides control over Oracle Clusterware components, allowing operations such as checking cluster status with crsctl check cluster -all, starting or stopping the entire cluster stack via crsctl start crs, or managing high-availability resources. These tools ensure seamless administration of multi-node environments by abstracting complex cluster interactions into simple, scriptable commands. The Grid Infrastructure Management Repository (GIMR) enhances diagnostics by storing cluster-wide operational data, including performance metrics, alerts, and historical logs from Oracle Clusterware and RAC components, in a dedicated multitenant pluggable database (PDB) per cluster. Administrators access GIMR data through tools like the Cluster Health Monitor (OCLUMON) or SQL queries to diagnose issues such as resource failures or configuration drift, facilitating proactive maintenance without relying on manual log analysis. Monitoring in Oracle RAC leverages Automatic Workload Repository (AWR) and Automatic Database Diagnostic Monitor (ADDM) to capture and analyze cluster-specific performance data, with a focus on global cache (gc) wait events that indicate inter-node block transfer delays. AWR snapshots, generated hourly by default, collect statistics such as gc cr block receive time (waits for consistent read blocks) and gc buffer busy waits (contention for cached blocks), enabling identification of interconnect bottlenecks or load imbalances across instances. ADDM processes these AWR snapshots to produce actionable recommendations, such as optimizing SQL statements contributing to excessive gc waits or adjusting instance parameters for better Cache Fusion efficiency. For real-time visualization, Oracle Enterprise Manager (OEM) offers dashboards that monitor RAC clusters at the node, instance, and service levels, displaying metrics like CPU usage, I/O throughput, and cluster interconnect health through graphical interfaces and alerts. OEM integrates with Clusterware to provide end-to-end views, including proactive notifications for threshold breaches. Diagnostics in RAC emphasize automated collection and analysis to expedite issue resolution. The Trace File Analyzer (TFA) automates the gathering of diagnostic logs, traces, and system information from Grid Infrastructure, RAC databases, and supporting OS components, supporting proactive detection of problems like memory leaks or network faults via its collector daemon. Administrators invoke TFA with commands such as tfactl diagcollect to bundle relevant files for support analysis. The Oracle ORAchk utility (the successor to RACcheck) performs comprehensive health checks, scanning configurations, patches, and best-practices compliance across the RAC stack to identify vulnerabilities or misconfigurations before they impact availability. It generates reports with severity ratings and remediation steps, often run periodically or on-demand via orachk -runallchecks. Event handling is streamlined through Fast Application Notification (FAN) and Fast Connection Failover (FCF), where FAN notifies applications of cluster events like node failures or service relocations, and FCF enables connection pools to automatically redirect to surviving instances in under a second, minimizing disruption without application code changes. Common issues in Oracle RAC, such as split-brain scenarios where network partitions lead to multiple nodes attempting concurrent database control, are resolved through Oracle Clusterware's fencing mechanisms, which evict errant nodes via node kill or power-off actions to maintain data integrity and prevent corruption. Interconnect latency, often manifesting as elevated gc waits, is diagnosed using standard OS utilities like ping for round-trip times and traceroute for path analysis on the private interconnect, helping isolate hardware or configuration problems.
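As a small illustration of these diagnostic steps, the sketch below combines a TFA collection, an interconnect latency sample, and a query of global cache wait events; the peer hostname node2-priv and the top-ten cutoff are illustrative choices rather than prescribed procedure.

```bash
#!/bin/bash
# Hypothetical diagnostic pass for elevated global cache (gc) waits.

# Bundle recent Grid Infrastructure and database logs and traces for analysis.
tfactl diagcollect

# Sample interconnect round-trip latency to a peer's private address
# (node2-priv is a placeholder hostname).
ping -c 20 node2-priv

# Rank global cache wait events across all instances to spot hot spots.
sqlplus -s / as sysdba <<'SQL'
SET PAGESIZE 50 LINESIZE 140
COLUMN event FORMAT A40
SELECT inst_id, event, total_waits,
       ROUND(time_waited_micro / 1e6, 1) AS seconds_waited
FROM   gv$system_event
WHERE  event LIKE 'gc%'
ORDER  BY time_waited_micro DESC
FETCH FIRST 10 ROWS ONLY;
SQL
```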

History and Development

Evolutionary Milestones

The development of Oracle Real Application Clusters originated from its predecessor, Oracle Parallel Server (OPS), which was introduced with Oracle 7 (1992) and enhanced in Oracle 7.3 (1996) and Oracle 8 (1998). OPS utilized a shared-disk architecture with disk-based locking through the Distributed Lock Manager, requiring frequent writes to a central lock file on shared storage to manage block access across instances. This approach led to I/O bottlenecks, particularly "pinging," where data blocks were repeatedly flushed to and read from disk during inter-instance transfers, limiting scalability in high-concurrency environments. In 2001, Oracle9i introduced RAC as the successor to OPS, featuring Cache Fusion as a pivotal innovation for cache coherency. Cache Fusion shifted block transfers from disk I/O to direct memory-to-memory communication over a low-latency interconnect, using the Global Cache Service (GCS) and Global Enqueue Service (GES) to maintain consistent data access without the performance degradation of pinging. This enabled near-linear scalability for read-write workloads, marking RAC's transition to a truly shared-cache model. Oracle Database 10g, released in 2003, advanced RAC's infrastructure with the debut of Oracle Clusterware, a vendor-neutral clustering framework that provided essential services like node membership, failover, and resource management independent of proprietary clusterware such as Digital's TruCluster. Previously limited to specific platforms, this shift broadened RAC's applicability across diverse operating systems. Concurrently, integration with Oracle Enterprise Manager 10g Grid Control introduced unified monitoring and provisioning tools for RAC clusters, simplifying administration in grid environments. The 2007 release of Oracle Database 11g supported up to 100 nodes per cluster, continuing the limit introduced in 10g Release 2 while enhancing horizontal growth for large-scale deployments through optimized interconnect protocols. In 11g Release 2 (11.2), the Single Client Access Name (SCAN) was introduced, offering a single name for client connections that resolved to multiple addresses for load balancing and failover across nodes, reducing client configuration complexity. Further enhancements, including policy-based cluster management and workload isolation, improved response times and throughput in mixed-workload scenarios. From Oracle Database 12c onward, starting in 2013, RAC incorporated multitenant architecture support, enabling container databases (CDBs) and pluggable databases (PDBs) to distribute workloads across multiple instances for consolidated yet isolated deployments. Oracle Flex ASM emerged as a key feature, allowing a reduced number of ASM instances to manage disk groups for numerous databases, minimizing overhead and improving resilience in large clusters. In the 19c release (2019), RAC evolved with cloud-native adaptations, including seamless integration with Oracle Cloud Infrastructure for cloud-based clusters, automated provisioning via Exadata Cloud@Customer, and support for container orchestration in Kubernetes environments to facilitate hybrid and public cloud migrations.

Versions and Release Updates

Oracle Real Application Clusters (RAC) has evolved through several key database releases, with 19c serving as the primary long-term support (LTS) version since its general availability in 2019. Premier Support for 19c extends until December 31, 2029, followed by Extended Support through December 31, 2032, providing a stable foundation for RAC deployments with ongoing security patches, bug fixes, and enhancements. Notable RAC improvements in 19c include architectural optimizations for better scalability and availability, such as enhanced Cache Fusion performance, alongside general features like automatic indexing that automatically creates and manages indexes to improve query performance in multi-node environments. Innovation releases have introduced advanced capabilities with shorter support windows. Oracle Database 21c, released in August 2021, emphasized new features for RAC, including improved connection management and load balancing across nodes to minimize downtime. Premier Support for 21c ends in July 2027, with no Extended Support available, marking the conclusion of its active innovation phase originally slated for 2023 but extended to accommodate ongoing adoption. Oracle Database 23ai, available since May 2024, integrates AI-driven features like AI Vector Search for handling unstructured data alongside relational workloads, with specific RAC enhancements for hybrid workloads, such as improved multi-node coordination for distributed operations. In 2025, Oracle AI Database 26ai was introduced via Release Update 23.26.0, replacing 23ai without requiring database upgrades or application re-certification. This long-term support release extends until December 2031 and includes RAC enhancements such as globally distributed databases using Raft-based replication for multi-master active-active configurations, enabling failover in under 3 seconds with zero data loss. In 2025, Release Updates (RUs) for 23ai addressed critical RAC needs. The January 2025 RU 23.7 included security patches via the Critical Patch Update and fixes for RAC rolling upgrades, enabling two-stage patching for features like vector support in external tables without full cluster downtime. The April 2025 RU 23.8 introduced performance optimizations for multi-node RAC, including sparse vector support and enhanced hybrid vector indexes, applied through RAC two-stage rolling updates to boost efficiency in large-scale deployments. Support for older versions has concluded, with Oracle Database 12c reaching the end of Extended Support in July 2022, after which no further patches are provided. Oracle recommends migration paths from 12c to 19c or 23ai/26ai using tools like the Database Upgrade Assistant to preserve RAC configurations while adopting modern features. Cloud adaptations of RAC extend its reach beyond on-premises setups. Oracle Cloud Infrastructure (OCI) supports full RAC deployments in the public cloud, offering scalable multi-node clusters with automated management. Exadata Cloud@Customer provides an on-premises option, delivering complete RAC functionality on dedicated Exadata hardware within customer data centers, managed via OCI for seamless integration with cloud services.

Competitive Landscape

Shared-Everything Architectures

In shared-everything architectures, all nodes in a database cluster have concurrent access to the same shared resources, including disk-based data files, control files, and redo logs, while also maintaining coherency in their in-memory caches through specialized protocols. This design eliminates the need for data partitioning or sharding, allowing all nodes to participate actively in processing workloads simultaneously, which supports high-throughput online transaction processing (OLTP) and other active-active scenarios without application modifications. Oracle Real Application Clusters (RAC) represents a prominent example of this architecture, where multiple database instances share a single logical database on cluster-aware storage. Central to RAC's functionality is Cache Fusion, a technology that enables direct inter-instance transfer of modified data blocks via a high-speed private interconnect, effectively creating a unified global buffer cache and reducing reliance on slower disk I/O. This contrasts with traditional failover-only clusters, where inactive nodes merely provide redundancy rather than contributing to ongoing workload processing; in RAC, all nodes remain active, coordinating changes across buffer caches using the Global Cache Service (GCS) and Global Enqueue Service (GES) to ensure data consistency. Other notable examples include IBM Db2 pureScale, a shared-disk clustering solution that uses a dedicated cluster caching facility (CF) to manage global locking and buffer pool sharing across members, facilitating similar cache coherency without data redistribution. An earlier system, IBM's Parallel Sysplex for Db2 on the mainframe, introduced in 1994, also employs a shared-everything model through coupling facility structures for lock management and group buffer pools, allowing multiple Db2 subsystems to access the same database concurrently with high-performance data sharing. These architectures offer advantages such as simplified data management, as there is no requirement for complex data partitioning or application-level sharding, enabling transparent scaling by adding nodes to handle increased loads. RAC, for instance, supports on-demand scaling across commodity hardware while maintaining balance through integrated quality-of-service features. However, a key disadvantage is the potential for contention in the shared storage subsystem, which necessitates robust, cluster-aware storage solutions like Oracle ASM or IBM GPFS to mitigate risks. In terms of availability, RAC deployments have demonstrated 99.999% uptime in production environments by leveraging node addition and automatic failover to redistribute workloads seamlessly upon instance failure.

Shared-Nothing and Hybrid Alternatives

Shared-nothing architectures represent a fundamental alternative to Oracle RAC's shared-everything model, where each node operates independently with its own dedicated storage and memory, and data is partitioned across nodes to enable horizontal scaling without resource contention. In this design, there is no single point of failure from shared components, as nodes do not share disk or memory, allowing for massive parallel processing in distributed environments. For instance, Google Cloud Spanner employs a shared-nothing approach with range-based sharding to distribute data, supporting high scalability for globally distributed applications while maintaining strong consistency through synchronous replication. Similarly, Microsoft's SQL Server Analytics Platform System (formerly Parallel Data Warehouse) uses shared-nothing partitioning to handle large-scale data warehousing, where data movement between nodes is minimized to optimize query performance. PostgreSQL with the Citus extension exemplifies open-source shared-nothing scaling, transforming a single PostgreSQL instance into a distributed cluster by sharding tables across worker nodes coordinated by a central node. Data is distributed based on a hash of a chosen column value, enabling parallel query execution for workloads like multi-tenant applications or real-time analytics. Hybrid models blend elements of shared-nothing and shared-everything to balance scalability with accessibility, often incorporating a shared storage layer alongside local node resources. Amazon Aurora adopts this by using a shared cluster storage volume—essentially a distributed ledger replicated across availability zones—for persistent data, while each database instance maintains local caches for temporary operations like sorting or indexing, avoiding the full independence of pure shared-nothing systems. This design allows multiple instances to access the same data without partitioning, supporting up to 256 TiB of storage with automatic replication for high availability. Snowflake further hybridizes the approach with a central shared-data repository using micro-partitions for efficient access, combined with independent massively parallel processing (MPP) compute clusters that process local data portions in a shared-nothing manner, facilitating seamless scaling without manual data redistribution. In comparisons, Oracle RAC's shared-everything model, with its Cache Fusion for inter-instance block transfers, provides low-latency access ideal for online transaction processing (OLTP) workloads requiring frequent cross-node data sharing, whereas shared-nothing systems like Spanner or Citus excel in massive analytics and horizontal scale-out by eliminating global locks and enabling independent node parallelism, though performance hinges on effective data partitioning. For example, shared-nothing avoids the overhead of shared storage I/O but demands upfront partitioning to align with query patterns, contrasting RAC's transparent access to unified data. In the modern landscape as of 2025, serverless offerings like Snowflake's shared-data architecture diminish the need for manual cluster configurations akin to RAC, as compute resources auto-scale independently of storage via virtual warehouses, simplifying operations for analytics-heavy use cases. Cost considerations favor open-source alternatives; Oracle RAC incurs significant licensing fees—often exceeding $47,500 per processor for Enterprise Edition plus add-ons—while PostgreSQL with Citus remains free, with optional commercial support available from vendors, making it attractive for cost-sensitive scaling.
A key drawback of shared-nothing and hybrid alternatives is the requirement for application modifications to accommodate partitioning, such as selecting distribution columns in Citus or schema adjustments for sharding, which can introduce complexity and limit transparency compared to RAC's seamless, application-agnostic access across nodes. In complex environments with thousands of tables, like ERP systems, shared-nothing partitioning may necessitate frequent data reorganization to maintain balance, potentially impacting operational efficiency.
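To make the partitioning requirement concrete, here is a minimal sketch of sharding a table with the Citus extension; the table definition, distribution column, and connection string are invented for illustration.

```bash
#!/bin/bash
# Hypothetical example: distributing a table across Citus worker nodes by a
# chosen column, the kind of schema decision shared-nothing systems require.
psql "host=citus-coordinator.example.com dbname=appdb user=app" <<'SQL'
CREATE EXTENSION IF NOT EXISTS citus;

CREATE TABLE events (
    tenant_id  bigint      NOT NULL,
    event_id   bigserial,
    payload    jsonb,
    created_at timestamptz DEFAULT now()
);

-- Shard rows across worker nodes by hashing tenant_id; queries filtering on
-- tenant_id route to a single shard, while others fan out in parallel.
SELECT create_distributed_table('events', 'tenant_id');
SQL
```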
