
MySQL Cluster

MySQL NDB Cluster, commonly referred to as MySQL Cluster, is a high-availability, distributed database system that integrates the standard MySQL server with an in-memory, shared-nothing storage engine known as NDB (Network Database). This enables processing across multiple nodes without a single point of failure, supporting automatic data partitioning, replication, and recovery to ensure continuous availability even during hardware failures or node restarts. First released as part of MySQL 4.1 in 2004, it has evolved to provide scalable solutions for mission-critical applications requiring low-latency access and fault tolerance.

The core of NDB Cluster lies in its multi-node setup, which includes data nodes responsible for storing and managing data in memory (with optional disk persistence), SQL nodes that run mysqld instances to handle queries and transactions, and management nodes that oversee cluster configuration and monitoring. Communication between nodes occurs over TCP/IP, allowing the system to scale horizontally by adding nodes to increase capacity and performance without downtime. Key features include support for ACID-compliant transactions, online backups and restores using native NDB tools, and rolling restarts for upgrades and maintenance, minimizing disruptions.

MySQL NDB Cluster is designed for demanding workloads such as web, telecommunications, and online gaming services, where it delivers sub-millisecond query response times and linear scalability on commodity hardware. By replicating data across nodes and enabling automatic failover, it ensures continuous availability with minimal data loss—typically limited to uncommitted transactions during failures. As of recent releases like NDB 8.4 (an LTS version from 2024), it aligns closely with MySQL Server 8.4, incorporating enhancements in security, performance, and integration with modern deployment tools.

Introduction

Overview

MySQL NDB Cluster, also known as MySQL Cluster, is a technology that enables shared-nothing clustering for the MySQL database management system using the NDB storage engine. It integrates standard MySQL SQL servers with a cluster of data nodes to provide in-memory or disk-based storage, designed for high-availability environments with no single points of failure. This architecture allows the system to operate across inexpensive commodity hardware while maintaining data consistency and real-time access. As of November 2025, the latest innovation release is NDB Cluster 9.5, building on previous LTS versions like 8.4.

The primary use cases for NDB involve mission-critical applications in telecommunications, financial services, and web systems that demand real-time processing, such as mobile networks, e-commerce platforms, and online gaming infrastructures serving billions of users daily. It excels in scenarios requiring 99.999% uptime, linear scalability, and fault tolerance, where downtime or data loss could have significant impacts.

In distinction from standard MySQL, which relies on storage engines like InnoDB for single-instance or basic replication setups, MySQL NDB Cluster employs a fully distributed model that separates SQL processing from data storage across multiple nodes. Its basic operational model automatically shards data using hashing algorithms and replicates it across nodes—typically up to four replicas per partition—to achieve horizontal scaling and resilience against failures. This enables seamless addition of nodes for increased capacity without interrupting operations.

Key Features and Benefits

MySQL NDB Cluster delivers high availability through its shared-nothing architecture, which eliminates single points of failure by distributing data and processing across multiple independent nodes. Automatic failover occurs in under one second using heartbeating mechanisms to detect node failures, enabling self-healing with automatic restarts and resynchronization without manual intervention. This design supports 99.999% uptime, making it suitable for mission-critical applications requiring continuous operation.

The system provides auto-sharding for horizontal scalability, automatically partitioning data across nodes to handle growing workloads efficiently. Nodes can be added online without downtime, allowing seamless expansion while maintaining performance through partitioning that distributes writes across the cluster. This linear scalability ensures that throughput increases proportionally with additional hardware resources.

In in-memory mode, MySQL NDB Cluster achieves consistency with low latency, typically under 5 milliseconds for SQL reads and writes with synchronous replication, providing immediate visibility of changes across all connected SQL nodes. It enforces ACID-compliant transactions, including support for foreign key constraints, ensuring reliability for online transaction processing (OLTP) workloads. Additionally, NDB Cluster supports Disk Data tables, which store table data on disk while maintaining indexes in memory, enabling the handling of larger datasets for both transactional and analytical workloads without compromising ACID compliance.

MySQL NDB Cluster enhances cost efficiency by running on commodity hardware without the need for expensive shared storage solutions such as SANs. This approach leverages inexpensive servers in a distributed setup, reducing infrastructure costs while supporting deployment in cloud, virtualized, or on-premises environments. Synchronous replication across nodes further bolsters fault tolerance without adding significant overhead.

Architecture

Core Components

MySQL Cluster, also known as NDB Cluster, consists of several fundamental components that work together to provide a distributed, high-availability database system. These include management nodes, data nodes, SQL nodes, and API nodes, each serving distinct roles in configuration, data storage, query processing, and application access. The architecture employs a shared-nothing design where data nodes independently manage their resources to ensure scalability and fault isolation.

Data nodes form the backbone of data storage and management in MySQL Cluster. They run the NDB storage engine processes, either as the single-threaded ndbd daemon or the multi-threaded ndbmtd daemon, which handle distributed transactions, data replication, checkpointing, node recovery, and online backups across the cluster. Multiple data nodes are required to achieve redundancy, with the minimum number determined by the replication factor and partitioning scheme—for instance, at least four data nodes for two replicas per partition across two node groups. These nodes store data in memory for high performance while supporting disk-based persistence for durability.

SQL nodes act as front-end interfaces for applications, utilizing standard MySQL Server instances (mysqld) configured with the NDBCLUSTER storage engine to process SQL queries and transactions against the cluster's data. They translate SQL statements into operations on the underlying NDB engine, enabling traditional relational access without requiring applications to handle data distribution directly. SQL nodes can be scaled independently by adding more servers, supporting load balancing for read and write operations.

Management nodes, implemented by the ndb_mgmd daemon, serve as the central coordinators for the cluster. They read and distribute configuration information from files like config.ini, manage node startups and shutdowns, perform arbitration during failures, maintain activity logs, and facilitate backups and status monitoring. Typically, one or more management nodes are deployed for redundancy, and they must be started before other nodes can join the cluster.

API nodes provide flexible access to cluster data beyond SQL, encompassing any application or process that connects to the NDB engine using the native NDB API, MGM API, or other interfaces for NoSQL-style operations or custom integrations. SQL nodes are a specialized subset of API nodes, but API nodes also include tools like ndb_restore or user-developed applications that directly perform transactions, scans, or updates on distributed data.

Inter-component communication in MySQL Cluster relies on transporters, which are configurable transport mechanisms facilitating signal exchange between nodes. The primary transporter uses TCP/IP over Ethernet for reliable, low-latency data transfer between data nodes, management nodes, and API nodes, with support for multiple parallel transporters per node pair to enhance throughput. Additional options like shared memory (SHM) transporters enable faster local communication when nodes reside on the same host. These transporters ensure coordinated operations, such as transaction commits and failure detection, across the distributed architecture.
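The relationships between these components are easiest to see in the cluster's global configuration file. The following config.ini is a minimal sketch only; the hostnames, node IDs, and memory sizes are illustrative assumptions, not values taken from this article:

    # Read by ndb_mgmd and distributed to every node at startup.
    [ndbd default]
    NoOfReplicas=2          # two copies of each partition
    DataMemory=2G           # in-memory data per data node

    [ndb_mgmd]              # management node
    NodeId=1
    HostName=mgmt.example.com
    DataDir=/var/lib/mysql-cluster

    [ndbd]                  # first data node
    NodeId=2
    HostName=data1.example.com

    [ndbd]                  # second data node (completes one node group)
    NodeId=3
    HostName=data2.example.com

    [mysqld]                # SQL node slot
    NodeId=4

    [api]                   # free slot for any NDB API client
    NodeId=5

Leaving HostName unset in a [mysqld] or [api] section, as above, allows an SQL or API node to connect from any host.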

Data Distribution and Sharding

MySQL Cluster employs horizontal partitioning to distribute data across multiple data nodes, enabling scalability in a shared-nothing architecture. Tables using the NDB storage engine are automatically partitioned using KEY partitioning, which applies a hashing function—MD5() in the case of NDB Cluster—to the table's primary key to determine row placement. This hash-based sharding ensures even distribution of rows into discrete partitions without requiring manual configuration by the user. If no explicit primary key is defined, the NDB engine generates a hidden one to facilitate this process.

Each NDB table is divided into a configurable number of partitions, with the default determined by the cluster's topology: the product of the number of data nodes and the number of local data manager (LDM) threads per node. For single-threaded ndbd processes, this typically equals the number of data nodes; for multithreaded ndbmtd processes, it scales with the configured execution threads to optimize parallelism. These partitions are balanced across the data nodes within node groups, where a node group comprises one or more nodes that collectively store replicas of each partition. The maximum number of partitions per table is limited to 8 times the product of LDM threads and node groups, supporting large-scale deployments.

The auto-sharding process in MySQL Cluster operates transparently, partitioning tables upon creation and routing inserts, updates, and deletes based on the hash value to the appropriate partition owner. No user intervention is needed for initial distribution, as the NDB kernel manages partition allocation and load balancing automatically. When nodes are added or removed, the cluster supports online reconfiguration, redistributing partitions across the updated set of data nodes to maintain balance and availability. This rebalancing occurs dynamically without downtime, leveraging the cluster's management layer to migrate fragment replicas as needed.

For redundancy, each partition—also known as a fragment—maintains multiple fragment replicas equal to the configured NoOfReplicas parameter (typically 2), distributed across nodes in the same node group. The node holding the primary replica for a given partition acts as the partition owner, handling writes and coordinating synchronization to backup replicas on other nodes within the group. This ownership model ensures data consistency while allowing reads from any replica, with the primary facilitating fault-tolerant operations.
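An explicit distribution key can also be chosen, provided its columns are part of the primary key. A hedged SQL sketch (the schema and table names are invented for illustration):

    -- Implicitly, NDB tables are PARTITION BY KEY on the primary key;
    -- here the hash is restricted to customer_id so that all of one
    -- customer's rows land in the same partition.
    CREATE TABLE orders (
      order_id    BIGINT NOT NULL,
      customer_id INT    NOT NULL,
      total       DECIMAL(10,2),
      PRIMARY KEY (order_id, customer_id)
    ) ENGINE=NDBCLUSTER
      PARTITION BY KEY (customer_id);

    -- The partitions NDB actually created can be inspected afterward:
    SELECT partition_name, table_rows
      FROM information_schema.partitions
     WHERE table_schema = 'shop' AND table_name = 'orders';

Restricting the partitioning key this way trades perfectly even distribution for locality of related rows, which can reduce cross-node coordination for multi-row transactions.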

Replication and Fault Tolerance

MySQL NDB Cluster employs synchronous replication to maintain data consistency and availability across its distributed architecture. Data is partitioned into fragments, each of which is replicated synchronously across multiple data nodes within a node group, typically using two replicas per fragment as configured by the NoOfReplicas parameter set to 2. This setup, often referred to as 2-safe replication, ensures that transactions are committed only after acknowledgment from both replicas, providing protection against the failure of a single node without data loss.

To manage transaction consistency, especially during failures, MySQL NDB Cluster uses an epoch-based commit mechanism. Transactions are grouped into discrete epochs, which represent synchronized points in time across the cluster; a global epoch is advanced only after all participating nodes confirm the completion of operations within the current epoch. This mechanism enables the creation of consistent snapshots for read operations and facilitates recovery by allowing nodes to replay or discard incomplete epochs from failed transactions.

Node failures are handled through automatic processes that minimize downtime. When a data node fails, the surviving node in its node group immediately assumes responsibility for the affected partitions, ensuring continued availability without interrupting ongoing operations. Upon restart, the failed node automatically resynchronizes its partitions by copying data from surviving replicas, restoring full redundancy. MySQL NDB Cluster also supports rolling restarts, where nodes are restarted sequentially without cluster-wide shutdown, allowing for configuration changes, updates, or software upgrades while preserving service continuity.

Failure detection and coordination rely on a heartbeat mechanism among data nodes, which monitors node liveness and triggers failover actions if responses cease. Management nodes play a central role in arbitration, detecting potential split-brain scenarios through heartbeat analysis and deciding which subset of nodes forms the authoritative cluster partition; this prevents inconsistent operations by shutting down non-viable partitions. Arbitration can be configured to use external resources like disk files for added reliability in complex deployments.

For tolerance against network partitions, MySQL NDB Cluster's arbitration process evaluates the viability of surviving node communities based on majority quorum and connectivity. In multi-site configurations, synchronous replication ensures intra-site fault tolerance, while asynchronous replication between geographically distributed clusters provides options for disaster recovery and load balancing across sites without compromising local performance. This shared-nothing isolation further enhances resilience by limiting failure propagation.
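Replica count, heartbeat cadence, and arbitration preference are all driven by a handful of config.ini parameters. A sketch under assumed values (the intervals shown are illustrative choices, not the documented defaults):

    [ndbd default]
    NoOfReplicas=2              # 2-safe: every fragment has two replicas
    HeartbeatIntervalDbDb=1500  # ms between heartbeats among data nodes
    HeartbeatIntervalDbApi=1500 # ms between data nodes and API/SQL nodes

    [ndb_mgmd]
    HostName=mgmt.example.com
    ArbitrationRank=1           # prefer the management node as arbitrator

A data node is declared dead after several consecutive missed heartbeats, at which point the arbitrator decides which surviving partition of the cluster may continue.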

Storage Models

MySQL NDB Cluster primarily employs an in-memory storage model where table data and indexes are stored in RAM across data nodes to enable low-latency access and high-throughput operations. This approach leverages the NDB storage engine's design for real-time applications, ensuring that all active data remains readily available without disk I/O overhead during normal query processing. For durability, optional disk persistence is implemented through redo logging, which captures transaction changes to local disk files on each node.

To accommodate datasets larger than available memory, NDB Cluster supports Disk Data tables, which store non-indexed columns on local disk per node rather than in RAM, eliminating the need for shared storage systems. These tables are created using tablespaces and log file groups, where column data resides in data files and undo information in dedicated undo log files, allowing growth beyond RAM constraints while maintaining cluster-wide distribution. Only non-indexed columns can be placed on disk; indexed columns and primary keys must remain in memory to support efficient query performance.

Checkpointing in NDB Cluster provides crash recovery by periodically writing committed states from memory to disk in a process known as local checkpoints (LCPs), complemented by global checkpoints (GCPs) that coordinate across nodes. This mechanism balances efficiency with persistence: LCPs flush data fragments asynchronously to disk, and together with the redo log they enable restarting nodes to replay committed changes and recover to the last consistent state without data loss. Parameters such as TimeBetweenLocalCheckpoints and TimeBetweenGlobalCheckpoints govern the frequency, typically set to trigger after a configurable volume of writes, keeping recovery windows short in most configurations.

The shared-nothing principle underpins NDB Cluster's architecture, with each node independently managing its own memory and local disk resources, free from dependencies on centralized storage like SANs or NFS. This design enhances fault isolation, as failures in one node's storage do not propagate to others, supporting linear scalability by adding nodes without shared bottlenecks. Data nodes, as the primary data hosts, exemplify this by partitioning and replicating data locally across replicas within node groups.

In hybrid mode, NDB Cluster combines in-memory storage for indexes and frequently accessed data with disk-based storage for larger row data, optimizing cost and performance for diverse workloads. Tables can be configured with STORAGE DISK for non-indexed columns or STORAGE MEMORY for full in-RAM retention, allowing applications to scale economically by offloading bulk data to affordable disk while keeping hot paths in memory. This flexibility is managed through SQL DDL statements during table creation, integrating seamlessly with the cluster's distributed nature.
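Creating a Disk Data table is a three-step DDL sequence: a log file group for undo data, a tablespace for column data, then the table itself. A hedged sketch with illustrative object names, file names, and sizes:

    -- 1. Undo log storage for disk-based columns.
    CREATE LOGFILE GROUP lg_1
      ADD UNDOFILE 'undo_1.log'
      INITIAL_SIZE 512M
      ENGINE NDBCLUSTER;

    -- 2. Tablespace holding the on-disk column data.
    CREATE TABLESPACE ts_1
      ADD DATAFILE 'data_1.dat'
      USE LOGFILE GROUP lg_1
      INITIAL_SIZE 1G
      ENGINE NDBCLUSTER;

    -- 3. The primary key and any indexed columns stay in memory;
    --    the remaining columns are stored on disk.
    CREATE TABLE archive_events (
      event_id BIGINT NOT NULL PRIMARY KEY,
      payload  VARCHAR(2000)
    ) TABLESPACE ts_1 STORAGE DISK
      ENGINE NDBCLUSTER;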

APIs and Access Methods

MySQL Cluster provides access to its data through a variety of APIs, enabling both relational and non-relational interactions with the underlying NDB storage engine. The primary SQL interface utilizes standard MySQL servers configured as SQL nodes, which connect to the cluster via the NDB storage engine to execute relational queries using the conventional MySQL protocol. This allows applications to perform standard SQL operations such as SELECT, INSERT, UPDATE, and DELETE, with the cluster handling distribution and replication transparently.

For NoSQL access, MySQL Cluster offers native interfaces that bypass the SQL layer for higher performance in key-value and object-oriented scenarios. The NDB API, implemented in C++, serves as the low-level, object-oriented interface directly to the data nodes, supporting indexed operations, full-table scans, transactions, and event handling for real-time applications. ClusterJ, part of the MySQL NDB Cluster Connector for Java, provides a higher-level API resembling object-relational mapping frameworks like JPA, enabling session-based object management and query execution. Additionally, the Memcached API integrates a key-value store interface, allowing memcached-compatible clients to perform atomic get/set operations on cluster data with automatic persistence and sharding support.

RESTful access is facilitated through integrations such as the former MySQL NoSQL Connector for JavaScript (Node.js), which provided adapters for building web applications with direct NDB access, though it has been deprecated in NDB Cluster 9.0 and removed in subsequent versions; current implementations often rely on general MySQL connectors or third-party tools for HTTP-based interactions.

A key strength of MySQL Cluster is its multi-API support, where the same dataset can be accessed concurrently via SQL or NoSQL interfaces without requiring extract-transform-load (ETL) processes, supporting polyglot application development across languages like C++, Java, and scripting environments. This unified data access promotes flexibility in hybrid workloads.

Transaction models in MySQL Cluster emphasize reliability and performance tailored to the access method. The SQL API delivers full ACID (atomicity, consistency, isolation, durability) compliance for relational transactions, leveraging two-phase commit protocols across distributed nodes. NoSQL APIs, such as the NDB API and ClusterJ, offer optimized transactions with features like unique key operations and partitioned scans, ensuring low-latency consistency while supporting schemaless key-value stores; for instance, each key-value pair is typically stored in a single row for efficient retrieval. These models maintain sharding transparency, where applications interact with the cluster as a single logical database without managing partitions explicitly.
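The multi-API model means an ordinary NDB table doubles as a key-value store. In the hedged sketch below (table and key names are invented), rows written through SQL are equally reachable through the NDB API, ClusterJ, or the memcached interface with no export step:

    CREATE TABLE kv_sessions (
      k VARBINARY(128) NOT NULL PRIMARY KEY,  -- the "key"
      v BLOB                                  -- the "value"
    ) ENGINE=NDBCLUSTER;

    -- Key-value style access via plain SQL on any SQL node:
    REPLACE INTO kv_sessions (k, v) VALUES ('session:42', '{"user_id": 7}');
    SELECT v FROM kv_sessions WHERE k = 'session:42';  -- primary-key lookup

Primary-key lookups like the SELECT above map directly onto single-row NDB read operations, which is why they remain fast regardless of which API issues them.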

Management and Deployment

Cluster Management Tools

MySQL Cluster provides several native tools for managing the cluster's lifecycle, including deployment, configuration, monitoring, and maintenance. These tools enable administrators to handle operations such as starting and stopping nodes, tuning parameters, and ensuring high availability without requiring external orchestration systems. The primary command-line interface is the NDB Cluster Management Client (ndb_mgm), which interacts with the management node to control cluster-wide activities. Complementing this is the MySQL NDB Cluster Manager (MCM), a higher-level automation tool that simplifies initial setup and ongoing administration through scripted processes. Configuration is managed via dedicated files that define cluster topology and node behaviors.

The NDB Cluster Management Client, known as ndb_mgm, serves as the core command-line tool for real-time cluster administration. It connects to the management server (ndb_mgmd) on the default port 1186 and supports commands for starting and stopping data nodes (ndbd or ndbmtd), SQL nodes (mysqld), and the entire cluster. For instance, the node_id START command brings a specified data node online, while node_id STOP halts it, with options like -f for forced stops that abort ongoing transactions to prevent hangs. The SHUTDOWN command safely terminates the cluster's data and management nodes. Topology changes, such as adding or removing node groups, are handled via CREATE NODEGROUP and DROP NODEGROUP commands, which require the cluster to be in a compatible state with no active data in affected groups. ndb_mgm is not required for cluster operation but is essential for administrative tasks like these.

MySQL NDB Cluster Manager (MCM) automates many manual steps in cluster deployment and management, acting as an agent-and-client system driven from a CLI. For initial setup, MCM's client (mcm) allows creating a cluster deployment by specifying hosts, node types, and parameters via commands like create cluster --package=package_name --processhosts=ndbd@host1,mysqld@host2 cluster_name, which automatically distributes binaries, configures nodes, and starts the cluster. It includes parameter tuning based on workload types (e.g., low/medium/high write loads for web or real-time applications), optimizing settings like memory allocation and replication factors to match hardware capabilities. MCM supports rolling out configuration updates across nodes without full downtime, using its agents to propagate changes sequentially. This tool is particularly useful for on-premises environments, reducing setup time from hours to minutes compared to manual configuration.

Monitoring in MySQL Cluster relies heavily on ndb_mgm's real-time commands for assessing health and performance. The SHOW command displays a cluster overview, including node IDs, types, statuses (e.g., STARTED, NO_CONTACT), and versions, while node_id STATUS provides detailed states for specific nodes. For performance insights, node_id REPORT MEMORYUSAGE outputs usage statistics, such as data memory (e.g., 5% utilized) and index memory allocation, helping identify bottlenecks in transactions or storage. Maintenance commands include ENTER SINGLE USER MODE, which restricts cluster access to a single designated API node. These commands enable proactive detection of problems, such as hung processes or memory exhaustion, by polling states continuously. MCM enhances monitoring by automatically checking operating system and software levels across hosts, alerting on dead or unresponsive processes.
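A short interactive session sketch tying these commands together (the node ID and connection string are illustrative):

    shell> ndb_mgm -c mgmt.example.com:1186
    ndb_mgm> SHOW                    # topology: node IDs, types, status
    ndb_mgm> ALL STATUS              # detailed state of every data node
    ndb_mgm> 2 REPORT MEMORYUSAGE    # data/index memory on data node 2
    ndb_mgm> 2 RESTART               # restart one node; it resyncs from its peer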
Cluster-wide settings are defined in two key configuration files: config.ini for the management server and my.cnf for SQL nodes. The config.ini file, read by ndb_mgmd, specifies global parameters in sections like [NDBD DEFAULT] for data node defaults (e.g., MaxNoOfTables, MaxNoOfOrderedIndexes) and [NDB_MGMD] for management node details, including host IPs and arbitration ranks. It supports up to 255 nodes in total and must be identical across all management servers for redundancy. Individual node overrides appear in [ndbd], [ndb_mgmd], or [mysqld] sections. The my.cnf file configures SQL nodes with the [mysqld] section, enabling the NDB storage engine via the ndbcluster option (or the --ndbcluster startup option) and setting connection strings like ndb-connectstring=mgmt_host:1186 to link to the management node. Changes to these files require rolling restarts to apply without downtime. The management node facilitates this by storing and distributing the configuration to all participants.

Rolling upgrades, supported natively through ndb_mgm and MCM, allow version updates without full downtime by performing sequential restarts. The process begins with management nodes: stop ndb_mgmd, upgrade binaries, and restart. Data nodes follow in groups, using node_id RESTART -i for initial restarts that clear local recovery logs, ensuring compatibility across versions (e.g., from 8.0 to 8.4). SQL nodes are upgraded last, as they can tolerate mixed versions better. MCM automates this via upgrade cluster commands, verifying compatibility and handling binary distribution. No schema changes or backups are permitted during the process to avoid inconsistencies, and all nodes of one type must complete before proceeding to the next. This method maintains availability, with the cluster remaining operational as long as at least one replica of each partition (as governed by NoOfReplicas) stays active. Compatibility is ensured between minor releases, but major upgrades may require specific sequences.
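The SQL-node side is the simpler of the two files. A hedged my.cnf sketch (the management host name is an illustrative assumption) complementing a config.ini like the one shown earlier:

    # my.cnf on each SQL node
    [mysqld]
    ndbcluster                                  # enable the NDB storage engine
    ndb-connectstring=mgmt.example.com:1186     # where ndb_mgmd listens

    # Equivalent one-off startup without editing my.cnf:
    #   mysqld --ndbcluster --ndb-connectstring=mgmt.example.com:1186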

Kubernetes Integration

The MySQL NDB Operator is a Kubernetes operator designed to automate the deployment, management, scaling, and recovery of MySQL NDB Clusters within Kubernetes environments. It leverages Custom Resource Definitions (CRDs), specifically the NdbCluster resource, to declaratively define the desired cluster state, including the number of management nodes, data nodes, and SQL nodes. The operator continuously monitors the cluster through a reconciliation loop, ensuring that the actual state aligns with the specified configuration while handling tasks such as pod restarts and configuration updates with minimal manual intervention.

For data persistence, the operator employs StatefulSets to manage data nodes, providing stable network identities and ordered deployment/scaling. Each data node is associated with a PersistentVolumeClaim (PVC) for local storage volumes, ensuring that NDB data survives rescheduling or node failures. This setup supports the in-memory and disk-based storage models of NDB Cluster, with the operator automatically provisioning storage based on the cluster's data distribution requirements.

Installation of the MySQL NDB Operator is facilitated through official Helm charts, which streamline the process by deploying the CRDs, operator pod, and validating webhook server in a single command. Users add the Helm repository at https://mysql.github.io/mysql-ndb-operator/ and install with configurable parameters such as image pull policies and namespace scoping. The charts support both cluster-wide and namespace-scoped deployments, making it adaptable to various Kubernetes setups.

The operator enables multi-node configurations, supporting up to 255 nodes in total as per NDB limits, with examples including multiple data nodes for sharding and replication. It enforces default pod anti-affinity rules—using preferredDuringSchedulingIgnoredDuringExecution—to distribute management, data, and SQL node pods across different worker nodes, reducing the risk of correlated failures. Users can customize these via the NdbClusterPodSpec.affinity field, along with nodeSelector for targeted node placement. Additionally, resource allocation is handled through the NdbClusterPodSpec.resources field, allowing specification of CPU and memory requests and limits for each node type, with defaults calculated by the operator for data nodes based on the configured data memory size.

As of 2025, the MySQL NDB Operator integrates with managed Kubernetes services from major cloud providers, including AWS EKS, Google GKE, and Azure AKS, leveraging their standard Kubernetes APIs for automated provisioning in cloud environments. This compatibility allows for scalable, fault-tolerant deployments without custom modifications, as demonstrated in operator releases supporting Kubernetes versions 1.23 and later.
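A hedged deployment sketch: the Helm repository URL is the one given above, while the release name, namespace, and the NdbCluster field values are illustrative assumptions patterned on the operator's published examples:

    shell> helm repo add ndb-operator https://mysql.github.io/mysql-ndb-operator/
    shell> helm install ndb-operator ndb-operator/ndb-operator \
               --namespace=ndb-operator --create-namespace

    # example-ndb.yaml: declarative cluster definition
    apiVersion: mysql.oracle.com/v1
    kind: NdbCluster
    metadata:
      name: example-ndb
    spec:
      redundancyLevel: 2     # replicas per fragment (maps to NoOfReplicas)
      dataNode:
        nodeCount: 2         # one node group of two data nodes
      mysqlNode:
        nodeCount: 2         # SQL nodes fronting the cluster

    shell> kubectl apply -f example-ndb.yaml

The operator then creates the management and data node StatefulSets, PVCs, and the Services needed to reach the SQL nodes.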

Configuration and Operations

MySQL NDB Cluster configuration involves setting parameters in the config.ini file to optimize performance, availability, and resource usage across data nodes, management nodes, and API nodes. Key parameters such as NoOfReplicas determine redundancy, with a default and recommended value of 2 for production environments to balance fault tolerance against overhead from synchronous replication; increasing it to 3 or 4 enhances resilience in high-risk scenarios but raises memory and CPU demands due to additional replica updates. For performance tuning, BatchSize on API nodes (including SQL nodes) sets the number of records per batch during data transfers, typically tuned to 256 or higher based on workload to reduce round-trip overhead while avoiding excessive memory consumption on receivers. Other related parameters like MaxUIBuildBatchSize control scan batch sizes during unique index builds, adjustable upward (e.g., from 64) to accelerate initial loads but monitored to prevent impacting concurrent queries.

Online operations in NDB Cluster enable dynamic scaling and maintenance without full downtime. Adding data nodes requires forming a new node group: update the config.ini with new node IDs and parameters, restart the management server, perform a rolling restart of existing data nodes, then start the new nodes to redistribute fragments automatically. Removing nodes involves evacuating data via the management client (STOP command on the node, followed by reconfiguration and rolling restart), ensuring no data loss if NoOfReplicas exceeds 1. Schema changes support zero-downtime evolution via ALTER TABLE with ALGORITHM=INPLACE, allowing concurrent DML for operations like adding columns (if nullable and dynamic) or indexes on NDB tables; a global schema lock briefly prevents other DDL but permits ongoing reads and writes.

Performance monitoring leverages the ndbinfo database, a read-only, NDB-backed schema accessible via SQL queries from any connected MySQL Server. Views such as cluster_operations expose the operations currently executing on the data nodes, while counters provides cumulative per-node counts of reads, writes, transactions, and scans that can be aggregated to identify bottlenecks. The memoryusage view monitors DataMemory and IndexMemory utilization (e.g., querying used versus total) to detect overloads early, with thresholds like 80% usage signaling the need for scaling.

Security configurations emphasize protecting inter-node communications and data access. Node authentication uses TLS mutual authentication in NDB 8.3 and later, where each node verifies peers via certificates signed by a cluster-specific CA, configured by generating keys with ndb_sign_keys and enabling via --ndb-tls-search-path during rolling restarts. TLS for transporters secures traffic over TCP/IP, enforced cluster-wide with RequireTls=true in config.ini sections, supporting cipher suites like TLS_AES_256_GCM_SHA384 for compliance; private keys remain unencrypted for automated restarts but are restricted to read-only permissions (0400).

Best practices for sizing focus on workload analysis to provision memory and nodes appropriately. For DataMemory, estimate total row size multiplied by rows and replicas (e.g., 1 million 100-byte rows with 2 replicas requires ~200MB, plus roughly 30% overhead), ensuring physical RAM exceeds allocated memory by at least 20% to prevent swapping; set MinFreePct=5 to reserve space for restarts. Tune MaxNoOfConcurrentOperations to 250,000 or more for high-throughput updates across multiple nodes, scaling linearly with data nodes.
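Two illustrative SQL fragments for the operations described above; the table and schema names are invented, and the 80% threshold mirrors the rule of thumb in the text rather than any built-in limit:

    -- Online schema change: add a nullable, dynamic column with DML running.
    ALTER TABLE app.events
      ALGORITHM=INPLACE,
      ADD COLUMN note VARCHAR(64) NULL COLUMN_FORMAT DYNAMIC;

    -- Memory headroom check from any SQL node via ndbinfo.
    SELECT node_id, memory_type, used, total,
           ROUND(100 * used / total, 1) AS pct_used
      FROM ndbinfo.memoryusage
     WHERE ROUND(100 * used / total, 1) > 80;   -- flag nodes near capacity

    -- Cumulative read/write counters per data node.
    SELECT node_id, counter_name, SUM(val) AS total
      FROM ndbinfo.counters
     WHERE counter_name IN ('READS', 'WRITES')
     GROUP BY node_id, counter_name;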
For multi-site setups, deploy active-active configurations using circular replication: configure bidirectional replication between clusters with unique server IDs (e.g., a 1000-series for site A and a 2000-series for site B) and IGNORE_SERVER_IDS to break loops, or --log-replica-updates=0 for efficiency; enable conflict resolution via the ndb_replication table using functions like NDB$MAX(timestamp) keyed on update timestamps.
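A hedged fragment of the replica-side setup for one direction of the circle; the host name and the ignored server IDs are illustrative:

    -- On a site-B SQL node acting as replica of site A: discard events
    -- that originated at site B itself so they do not loop back.
    CHANGE REPLICATION SOURCE TO
      SOURCE_HOST = 'sqlA.example.com',
      SOURCE_PORT = 3306,
      IGNORE_SERVER_IDS = (2001, 2002);
    START REPLICA;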

Implementation

Node Types and Processes

MySQL Cluster employs several distinct node types, each implemented as specific processes that handle different aspects of cluster operation. Data nodes are responsible for storing and managing the actual data partitions, while SQL nodes provide the familiar interface for query execution. Management nodes oversee cluster coordination, and API nodes enable direct data access. These processes interact through a defined startup sequence to ensure reliable initialization and ongoing operation.

Data nodes run either the single-threaded ndbd daemon or the multi-threaded ndbmtd daemon, both of which handle all data operations for tables using the NDB storage engine, including distributed transactions, node recovery, checkpointing, and backups. The ndbd process operates with a single main thread for asynchronous data reading, writing, and scanning, supported by auxiliary threads for file I/O and network transporters, but it may underutilize multi-core CPUs under heavy loads, using at most about two CPUs on multi-processor systems. In contrast, ndbmtd is designed for better CPU utilization on multi-core systems, defaulting to single-threaded mode but configurable via parameters like MaxNoOfExecutionThreads to spawn multiple execution threads (e.g., four threads for improved parallelism), allowing it to leverage available cores more effectively without requiring data loss during switches from ndbd. This multi-threaded approach enhances performance for demanding workloads, though both daemons can coexist in the same cluster.

SQL nodes integrate as standard MySQL Server instances (mysqld) configured with the NDBCLUSTER storage engine via the --ndbcluster and --ndb-connectstring options, enabling them to connect to the cluster and treat NDB as a backend storage layer. The NDB engine plugin within mysqld processes SQL statements from clients, parsing them and distributing operations—such as data reads, writes, and scans—across relevant data nodes based on the cluster's partitioning scheme, while supporting load balancing and failover through multiple SQL nodes. This setup allows seamless use of standard MySQL clients and drivers (e.g., PHP, Java) without altering application code.

Management nodes, implemented via the ndb_mgmd process, provide configuration data to other nodes, manage startups and shutdowns, and perform arbitration to maintain cluster integrity during failures. Arbitration involves election protocols where an arbitrator (typically a management node with ArbitrationRank set to 1, its default, or an API/SQL node assigned rank 2) is selected to resolve network partitions and prevent split-brain scenarios; for instance, if a partition lacks a majority, the arbitrator issues a shutdown signal to isolated nodes, with parameters like ArbitrationDelay governing voting timeouts. This ensures consistency by prioritizing the partition with the majority of data nodes or an external arbitrator if configured.

API nodes represent lightweight client processes or applications that connect directly to the cluster for data access, bypassing the SQL layer for lower latency in specialized use cases. These include tools like ndb_restore for backups and custom applications using the NDB API (C++), ClusterJ (Java), or Node.js connectors, which establish connections to data nodes via the NDB API to perform transactions, scans, and event handling without invoking a full mysqld instance. Such direct access supports high-throughput scenarios where SQL processing overhead is undesirable.

The startup sequence for cluster processes follows a phased approach to ensure orderly initialization and integration.
Management nodes start first, loading the cluster configuration and awaiting connections on designated ports. Data and API nodes then initialize by obtaining a node ID, allocating memory and ports, and entering phased startup: Phase 0 activates kernel blocks like NDBFS; Phase 1 establishes transporter connections, inter-block communications, and heartbeats for failure detection (e.g., via periodic signals between data nodes); subsequent phases (2–9) handle data setup, log file creation, checkpoint synchronization, node group formation, arbitrator election, and index rebuilding, with API nodes joining in the later phases. For joining data nodes, the node synchronizes its data from existing replicas during restarts, using heartbeats to monitor liveness and trigger recovery if failures occur during the process. This sequence supports both initial cluster formation and rolling restarts without downtime.
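In practice the sequence translates into starting the daemons in a fixed order; a hedged shell sketch with illustrative hosts and paths:

    # 1. Management node first: it serves the configuration.
    shell-mgmt>  ndb_mgmd -f /var/lib/mysql-cluster/config.ini

    # 2. Each data node connects to ndb_mgmd and walks the start phases.
    shell-data1> ndbmtd --ndb-connectstring=mgmt.example.com:1186
    shell-data2> ndbmtd --ndb-connectstring=mgmt.example.com:1186

    # 3. SQL nodes join last, once the data nodes have started.
    shell-sql1>  mysqld --ndbcluster \
                     --ndb-connectstring=mgmt.example.com:1186 &

    # Watch progress from the management client:
    shell-mgmt>  ndb_mgm -e 'SHOW'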

Networking Requirements

MySQL NDB Cluster relies on robust network infrastructure to ensure reliable inter-node communication, including data replication, heartbeats, and management operations. The default transport protocol is TCP/IP, which supports standard Ethernet topologies and is suitable for most deployments. For enhanced performance in low-latency environments, specialized interconnects such as shared memory (SHM) transporters, for nodes on the same host, can be utilized to achieve sub-millisecond communication times, though TCP/IP remains the primary and recommended option for production setups.

Bandwidth requirements specify a minimum of 100 Mbps Ethernet per host to support basic operations, but production environments demand at least 1 Gbps to handle high-throughput workloads without bottlenecks. A dedicated network or subnet is recommended to isolate cluster traffic and prevent interference from other applications. Latency is critical for optimal performance; round-trip times under 100 microseconds are ideal for local area networks (LANs), while high-latency networks, such as those in wide area network (WAN) environments, may require adjustments to heartbeat intervals and timeouts to avoid false node failure detections.

Cluster communication employs TCP/IP connections for both heartbeats—used to detect node failures—and data transfers between nodes; UDP is not used for these connections. Heartbeat intervals are configurable (on the order of seconds for data nodes by default) to accommodate varying network conditions, ensuring timely failure detection without unnecessary restarts.

Firewall configurations must allow inbound and outbound traffic on specific default ports to enable functionality. The management node (ndb_mgmd) uses port 1186 for client connections and inter-node coordination. Data nodes (ndbd or ndbmtd) communicate via port 2202 for transporter links in typical configurations. SQL nodes (mysqld) expose port 3306 for standard client access, while their connections to the management node utilize port 1186. Additional dynamic ports may be allocated for transporters between nodes, so monitoring and allowing a range (typically starting from 2200) is advisable in firewalled setups. Ports can be customized in the config.ini file if needed.
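When fixed firewall rules are required, the transporter listening port can be pinned in config.ini; the values below are illustrative:

    [ndbd default]
    ServerPort=2202          # fixed transporter port on every data node

    [ndb_mgmd]
    HostName=mgmt.example.com
    PortNumber=1186          # management client/coordination port

With ServerPort pinned, a firewall needs to admit 1186 (management), 2202 (data node transporters), and 3306 (MySQL clients on SQL nodes).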

Backup and Recovery

MySQL NDB Cluster provides native support for online backups through the ndb_mgm client, enabling consistent snapshots of the entire cluster without interrupting ongoing operations or locking tables. The START BACKUP command initiates this process, capturing both metadata and table data across all data nodes in parallel, with the backup files stored locally on each data node by default. These backups are designed for production environments, minimizing downtime and supporting large distributed data sets.

The consistency of these online backups is maintained via Local Checkpoints (LCPs) and Global Checkpoints (GCPs), which are integral to NDB's durability mechanisms. An LCP flushes modified pages from memory to disk on a single data node approximately every few minutes, ensuring local persistence without affecting other nodes. Meanwhile, a GCP synchronizes commits across all nodes every 2-3 seconds by advancing the global epoch and flushing the redo log to disk, providing a cluster-wide consistent point for recovery. During a backup, the process coordinates with the current GCP to include all committed changes up to that point, while ongoing LCPs help manage disk writes efficiently.

Restoration from these native backups is handled by the ndb_restore utility, a command-line tool that is run once per data node against the backup files generated by the START BACKUP command. It supports selective restoration of specific tables or databases, as well as full cluster recovery, by first restoring metadata (such as table definitions) and then data rows. For point-in-time recovery, especially in replicated setups, the --restore-epoch option applies the backup up to a specific global epoch, allowing integration with binary logs or replication streams for precise rollback. The tool also offers options like --num-slices for parallel processing to accelerate large restores and --rebuild-indexes for optimizing secondary indexes post-restoration. Disk-persisted LCP files facilitate rapid recovery by providing a baseline state, with in-memory reconstruction handled during node restarts.

In addition to native backups, logical backups are available using mysqldump, which generates portable SQL statements to recreate schemas, data, and objects accessible via the MySQL SQL nodes. This method is particularly useful for exporting NDB tables to non-NDB environments or for schema-only backups, though it operates at the SQL layer and may not capture all NDB-specific features like distribution keys. Options such as --single-transaction ensure consistency for InnoDB-compatible transactions, but for pure NDB tables, it relies on the cluster's inherent consistency properties without additional locking. Logical backups complement native ones for scenarios requiring portability or archiving.

For disaster recovery across sites, MySQL NDB Cluster employs asynchronous multi-cluster replication, where one cluster acts as the primary and replicates to a secondary cluster using built-in NDB epoch-based logging. This setup allows failover to the secondary site in case of primary failure, with native backups from the primary serving as a seed for initializing the secondary. Tools like the management client facilitate monitoring replication lag and resuming after recovery, ensuring minimal data loss bounded by the GCP interval. MySQL Enterprise Backup can supplement this for SQL node-specific components, though native NDB mechanisms handle the core distributed data.
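A hedged end-to-end sketch; the backup ID, node IDs, and paths are illustrative:

    # Take a consistent online backup of the whole cluster:
    ndb_mgm> START BACKUP WAIT COMPLETED
    # Files appear under each data node's DataDir, e.g. BACKUP/BACKUP-1/

    # Restore: metadata once (-m), then data (-r) for each data node.
    shell> ndb_restore -c mgmt.example.com:1186 -n 2 -b 1 -m -r \
               --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1
    shell> ndb_restore -c mgmt.example.com:1186 -n 3 -b 1 -r \
               --rebuild-indexes \
               --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1

Here -n is the node ID whose backup files are being restored, -b is the backup ID reported by START BACKUP, and -m is supplied only once so that table definitions are not recreated repeatedly.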

Versions and Evolution

Release History

MySQL NDB Cluster originated in 2004 with the release of MySQL 4.1, which introduced the NDB engine designed primarily for high-availability workloads requiring real-time processing and fault tolerance. The 7.x series, spanning 2008 to 2018, marked significant advancements in scalability and performance. Key innovations included the introduction of multi-threaded data nodes in 7.0, enabling better utilization of multi-core processors for improved throughput. Disk Data support, allowing storage of non-indexed columns on disk for larger datasets while maintaining ACID compliance, was introduced earlier in the 6.x Carrier Grade Edition series (starting with 6.1 in 2006).

A pivotal shift occurred with the 8.0 series in 2019, integrating NDB Cluster fully with MySQL Server 8.0 to gain features such as native JSON document storage and enhanced geographic information system (GIS) capabilities for spatial data handling. Notable releases in this period include 7.6 in 2018, a version given extended maintenance for production environments, and 8.0.19 in 2020, the first General Availability release aligning NDB with MySQL Server 8.0's security and performance enhancements. Milestone events shaped the project's trajectory, including Oracle's acquisition of Sun Microsystems (and thus MySQL) in 2010, which ensured continued open-source development under the GNU General Public License while expanding enterprise support.

Recent Developments (Post-2020)

MySQL NDB Cluster 8.4, released as a long-term support (LTS) version starting in April 2024, provides better integration with cloud platforms, facilitating easier deployment in hybrid and multi-cloud setups while maintaining high availability. Fully compatible with MySQL Server 8.4, NDB 8.4 incorporates server-side advancements such as improved security features and JSON handling, ensuring seamless operation within the broader MySQL ecosystem. As of November 2025, the latest maintenance release is NDB 8.4.6 (July 2025).

In 2024, MySQL NDB Cluster 9.0 emerged as an Innovation release, built on MySQL Server 9.0 and emphasizing experimental features for testing and preview in non-production settings. This release added support for advanced replication monitoring through the Performance Schema, including a new NDB replication applier status table that tracks per-channel conflict detection and resolution. Following in October 2025, NDB Cluster 9.5, another Innovation release based on MySQL Server 9.5, further advanced analytics capabilities by leveraging server enhancements for more efficient data processing, though it remains recommended for development rather than production use.

Key feature additions post-2020 include adaptive query optimization mechanisms inherited from MySQL Server improvements, which dynamically adjust execution plans based on runtime statistics to enhance query efficiency in clustered setups. Foreign key support in NDB has seen refinements, such as better handling of constraints during replication and backup operations, reducing potential inconsistencies in distributed transactions. Additionally, full IPv6 networking support was introduced in NDB 8.0.22 (2020), enabling all cluster nodes—management, data, and API/SQL—to communicate over IPv6 addresses for improved compatibility in modern networks.

The Kubernetes Operator for NDB Cluster reached general availability with version 1.0.0 in October 2022, providing automated deployment, management, and scaling of NDB clusters within Kubernetes environments. Subsequent releases through 1.0.12 in July 2025 matured the operator with features like elastic scaling of connected MySQL Servers and simplified configuration changes, supporting auto-scaling for read/write workloads without downtime. These updates align with management tool enhancements, such as those in MySQL Cluster Manager, for streamlined operations in containerized deployments. Overall, these developments focus on scalability and efficiency, with the NDB 9.x series demonstrating reduced replication latency and higher query rates in benchmarked scenarios.

System Requirements

Hardware Specifications

MySQL NDB Cluster data nodes require substantial RAM to store all active data in memory, with recommendations typically ranging from 4 GB minimum per node for small deployments to 64 GB or more for production environments handling larger datasets. This in-memory architecture ensures low-latency access, though the exact amount depends on the total data size divided across nodes, plus overhead for indexes and operations; for instance, the DataMemory parameter can be configured up to 16 TB per node in recent versions. Multi-core CPUs are essential for parallelism, with at least 8 cores recommended per node to leverage the multi-threaded ndbmtd process, and higher core counts (e.g., 16+ cores on Intel Xeon processors) yielding better throughput in demanding workloads. Local SSD storage is advised for Disk Data tables, which allow spilling non-indexed data to disk, as well as for redo logs to support recovery without compromising performance.

SQL nodes, which handle query processing using the NDB storage engine, can utilize standard MySQL server hardware with modest requirements, including at least 16 GB RAM to handle large result sets and join operations effectively. CPU needs are similar to conventional MySQL setups, benefiting from multi-core processors, but without the intensive memory demands of data nodes since they do not store data persistently. Disk usage is primarily for binary logs and temporary files, where SSDs enhance I/O performance for high-query-volume scenarios.

Networking is critical for inter-node communication in MySQL NDB Cluster's shared-nothing design, with a minimum of 100 Mbps Ethernet supported, but 10 Gbps NICs strongly recommended for high-throughput clusters to minimize latency and handle replication traffic efficiently. Low-latency switches are necessary to maintain sub-millisecond response times in local area networks, and dedicated subnets or specialized interconnects can further optimize large-scale setups. The system assumes a standard LAN topology, with adjustments for wider area networks to account for higher latencies.

For fault tolerance, NDB Cluster requires a minimum of four data nodes—organized into at least two node groups with two replicas each—to tolerate single-node failures without data loss. Deployments can scale to hundreds of nodes for massive datasets, adding data nodes horizontally to distribute load and achieve linear scalability, while management nodes (typically two for redundancy) and additional SQL nodes can be co-located or separated as needed. Support for ARM64 processors is available in production environments on platforms like Oracle Linux 10, enabling deployment on diverse hardware such as ARM-based servers or cloud instances without performance trade-offs (as of 2024).

Software and Compatibility

MySQL NDB Cluster is supported on 64-bit operating systems, with primary production support for distributions including Oracle Linux 10, Red Hat Enterprise Linux 10, Ubuntu 24.04 LTS, SUSE Linux Enterprise Server 15.6, and Debian GNU/Linux 12, as well as Windows Server and Windows 11. Windows is suitable for both production and development, while all platforms require x86_64 or arm64 architectures.

MySQL NDB Cluster releases are closely tied to MySQL Server versions, ensuring compatibility within the same major release branch; for instance, NDB Cluster 9.5 integrates with Server 9.5, NDB Cluster 8.4 with Server 8.4, and NDB Cluster 8.0 with Server 8.0. As of November 2025, the latest release is NDB Cluster 9.5.0 (Innovation release, October 2025), based on Server 9.5. The software has minimal dependencies, as the NDB storage engine provides built-in clustering without requiring external middleware; however, the ClusterJ Java connector necessitates Java 11 or later (as of NDB Cluster 9.3.0) for application development.

MySQL NDB Cluster offers full ACID transactional compliance, augmented by clustering-specific extensions such as distributed transactions and high-availability features, and it integrates with object-relational mapping frameworks like Hibernate through standard JDBC drivers or the native ClusterJ API. Licensing for the community edition follows the GNU General Public License version 2 (GPLv2), while commercial editions from Oracle provide enhanced features, support, and licensing flexibility for proprietary deployments.

Historical Context

Origins and Development

MySQL Cluster's development was driven by the need for a highly available, distributed database solution tailored to telecommunications applications, originating from work at Ericsson in the late 1990s. The foundational NDB (Network Database) storage engine was designed for high-availability data management in telecom switches, drawing inspiration from Ericsson's AXE architecture, which emphasized modular, fault-tolerant systems using a block-and-signal communication model to handle real-time processing without single points of failure. This effort began as early as 1994 with prototypes leveraging Scalable Coherent Interface (SCI) interconnects, motivated by the demands of scalable telecom databases. The core design principles were outlined in Mikael Ronström's 1997 PhD thesis on data servers for telecom applications, which influenced the engine's focus on distributed, in-memory operations.

In late 2003, MySQL AB acquired the NDB technology from Alzato, a startup spun off from Ericsson, to integrate it as a clustered storage engine for the MySQL database. This acquisition enabled MySQL AB, co-founded by Michael "Monty" Widenius and others, to extend MySQL's capabilities into distributed environments, with Widenius and the development team emphasizing a shared-nothing architecture to eliminate bottlenecks and support horizontal scalability across multiple nodes. The integration prioritized in-memory online transaction processing (OLTP) for low-latency, high-throughput workloads typical in telecom scenarios like service control points and real-time charging systems.

The NDBCLUSTER storage engine was introduced in MySQL 4.1.3 on June 28, 2004, marking its initial availability, with wider general availability following as the 4.1 and 5.0 release series matured. During the pre-Oracle era under MySQL AB, the community contributed to enhancements focused on scalability and performance, such as early improvements in multi-threading and replication, building on the NDB foundation to broaden its applicability beyond telecommunications.

Major Milestones

In 2010, Oracle completed its acquisition of Sun Microsystems on January 27; Sun had acquired MySQL AB in 2008, so Oracle thereby assumed stewardship of MySQL, including the NDB Cluster storage engine. This shift marked a pivotal transition toward an enterprise-oriented strategy, with Oracle emphasizing enhanced commercial support through the MySQL Enterprise Edition, which provided certified binaries, proactive monitoring, and dedicated technical assistance for production deployments.

Oracle later introduced an extended support model for MySQL NDB Cluster with version 7.6 (generally available in 2018), providing stability and maintenance through May 2026 and aligning with enterprise demands for reliable, long-lived database infrastructure. This designation ensured ongoing security patches, bug fixes, and compatibility support for several years, distinguishing it from shorter innovation cycles and solidifying NDB Cluster's role in mission-critical applications.

The integration of MySQL NDB Cluster with MySQL Server 8.0, beginning in late 2018 and maturing through 2019 releases, introduced key advancements such as atomic data definition language (DDL) operations. Atomic DDL ensured that schema changes, including those distributed across NDB nodes, were executed as indivisible transactions, preventing partial failures and enhancing consistency in clustered environments. MySQL Server 8.0's features apply to the SQL nodes in NDB Cluster setups.

Entering the cloud era from 2022 onward, MySQL NDB Cluster expanded into containerized environments with the general availability of the MySQL NDB Operator for Kubernetes in October 2022, enabling automated deployment, scaling, and management of NDB clusters on Kubernetes platforms. This operator addressed the need for cloud-native operations, supporting dynamic provisioning and self-healing in distributed systems while bridging gaps in prior documentation focused on on-premises setups.

In 2025, MySQL NDB Cluster 9.5 was released on October 23 as an innovation release, compatible with the MySQL AI add-on for MySQL 9.5, which supports large language models for SQL generation, vector stores, and AutoML for machine-learning workflows.

  50. [50]
    MySQL NDB Cluster FAQ
    A: MySQL NDB Cluster Manager is software which simplifies the creation and management of the MySQL NDB Cluster database by automating common management tasks.
  51. [51]
  52. [52]
  53. [53]
    Which Ports are Used by mysqld, ndb_mgmd, and ndbd/ndbmtd in a MySQL Cluster Installation?
    ### Summary of Default Ports for MySQL NDB Cluster Components
  54. [54]
  55. [55]
    MySQL :: MySQL 9.4 Reference Manual :: 25.2.1 NDB Cluster Core Concepts
    ### Summary of Local Checkpoint (LCP) and Global Checkpoint (GCP) in MySQL NDB Cluster
  56. [56]
  57. [57]
  58. [58]
    [PDF] MySQL Cluster 4.1 Release Notes
    This document contains release notes for older releases of MySQL Cluster (before 4.1. 14) that use version 4.1 of the NDBCLUSTER storage engine.
  59. [59]
  60. [60]
    MySQL NDB Cluster 7.6 Release Notes
    Changes in MySQL NDB Cluster 7.6.7 (5.7.23-ndb-7.6.7) (2018-07-27, General Availability) · Changes in MySQL NDB Cluster 7.6.6 (5.7.22-ndb-7.6.6) (2018-05-31, ...
  61. [61]
    Changes in MySQL NDB Cluster 8.0.19 (2020-01-13, General ...
    MySQL NDB Cluster 8.0.19 is a new release of NDB 8.0, based on MySQL Server 8.0 and including features in version 8.0 of the NDB storage engine.
  62. [62]
    MySQL Retrospective - The Middle Years - Oracle Blogs
    Dec 8, 2024 · Sun Microsystems acquired MySQL AB in February. 2010. Oracle acquired Sun Microsystems and becomes the steward for MySQL. Oralce makes a ...Missing: date | Show results with:date
  63. [63]
    25.2.4 What is New in MySQL NDB Cluster 8.4
    The transporter_details table provides information about individual transporters used in an NDB cluster. It is otherwise similar to the ndbinfo transporters ...
  64. [64]
    MySQL NDB Cluster 8.4 Release Notes
    This document contains release notes for the changes in each release of MySQL NDB Cluster that uses version 8.4 of the NDB ( NDBCLUSTER ) storage engine. Each ...Missing: history | Show results with:history
  65. [65]
    Changes in MySQL NDB Cluster 9.0.0 (2024-07-02, Innovation ...
    MySQL NDB Cluster 9.0.0 is a new Innovation release of NDB Cluster, based on MySQL Server 9.0 and including features in version 9.0 of the NDB storage engine.
  66. [66]
  67. [67]
    Changes in MySQL NDB Cluster 9.5.0 (2025-10-23, Innovation ...
    Oct 23, 2025 · This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes ...Missing: history | Show results with:history
  68. [68]
    Changes in MySQL NDB Cluster 8.0.33 (2023-04-18, General ...
    Apr 18, 2023 · Drop of a foreign key constraint on an NDB table. Rejection of an attempt to create a foreign key constraint on an NDB table. Such records use ...Missing: post- | Show results with:post-
  69. [69]
    Changes in NDB Operator 8.0.31-1.0.0 (2022-10-11, General ...
    Changes in NDB Operator 8.0.31-1.0.0 (2022-10-11, General Availability) · Support for MySQL NDB Cluster 8.0. · Support for Kubernetes clusters using Kubernetes ...Missing: v1. | Show results with:v1.
  70. [70]
    Changes in NDB Operator 8.0.42-1.0.11 (2025-04-15, General ...
    Apr 15, 2025 · This is MySQL NDB Operator 8.0.42-1.0.11, a GA release of NDB Operator, a Kubernetes Operator for MySQL NDB Cluster.Missing: v1. | Show results with:v1.
  71. [71]
    Changes in MySQL 9.0.0 (2024-07-01, Innovation Release)
    Jul 1, 2024 · NDB Cluster does not support VECTOR columns or values. Use the VECTOR_DIM() function (also added in MySQL 9.0) to obtain the length of a vector.
  72. [72]
    25.2.3 NDB Cluster Hardware, Software, and Networking ...
    You can obtain information about memory usage by data nodes by viewing the ndbinfo.memoryusage table, or the output of the REPORT MemoryUsage command in the ...Missing: monitoring | Show results with:monitoring
  73. [73]
    MySQL NDB Cluster 8.0 :: 4.3.13 Data Node Memory Management
    BatchSizePerLocalScan · TransactionBufferMemory. When TransactionMemory is set explicitly, none of the configuration parameters just listed are used to ...
  74. [74]
    A.10 MySQL 8.4 FAQ: NDB Cluster
    Creating NDB Cluster tables with USING HASH for all primary keys and unique indexes generally causes table updates to run more quickly—in some cases by a much ...
  75. [75]
    Supported Platforms: MySQL NDB Cluster
    MySQL supports deployment in virtualized environments, subject to Oracle KM Note 249212.1. For further details, please contact the MySQL Sales Team.Missing: SCI InfiniBand current
  76. [76]
    Supported Platforms: MySQL Database
    Important Platform Support Updates » ; Oracle Linux 10 / Red Hat Enterprise Linux 10 / Rocky Linux 10, x86_64, arm64 ...
  77. [77]
    Changes in MySQL NDB Cluster 8.4.6 (2025-07-23, LTS Release)
    MySQL NDB Cluster 8.4.6 is a new LTS release of NDB 8.4, based on MySQL Server 8.4 and including features in version 8.4 of the NDB storage engine.
  78. [78]
    MySQL NDB Cluster 8.0 Release Notes
    This document contains release notes for the changes in each release of MySQL NDB Cluster that uses version 8.0 of the NDB ( NDBCLUSTER ) storage engine.Missing: history timeline
  79. [79]
    MySQL NDB Cluster API Developer Guide :: 4.2.2 Using ClusterJ
    ClusterJ requires Java 1.7 or 1.8. NDB Cluster must be compiled with ClusterJ support; NDB Cluster binaries supplied by Oracle include ClusterJ support. If you ...
  80. [80]
    MySQL 8.4 Reference Manual :: 1.7 MySQL Standards Compliance
    MySQL supports ODBC levels 0 to 3.51. MySQL supports high-availability database clustering using the NDBCLUSTER storage engine. See Chapter 25, MySQL NDB ...
  81. [81]
    [PDF] MySQL Cluster - Use cases for TELCO/NEP in Europe
    MySQL AB acquired Alzato (owned by Ericsson) late 2003. Databases services back then: – SCP/SDP (Service Control/Data Point) in Intelligent Networks.
  82. [82]
    History of MySQL Cluster architecture development
    ### Summary of MySQL Cluster Architecture Development
  83. [83]
  84. [84]
    Changes in MySQL NDB Cluster 7.6.1 (5.7.17-ndb-7.6.1) (Not ...
    MySQL NDB Cluster 7.6.1 is a new release of NDB 7.6, based on MySQL Server 5.7 and including features in version 7.6 of the NDB storage engine.Missing: LTS | Show results with:LTS
  85. [85]
  86. [86]
    Automatic Schema Synchronization in NDB Cluster 8.0: Part 1
    Jan 21, 2020 · The MySQL server DD, introduced in version 8.0, has enabled improvements such as atomic and crash-safe DDL and the INFORMATION_SCHEMA ...
  87. [87]
    Changes in MySQL AI, add-on to MySQL 9.5.0 (2025-10-21 ...
    Oct 21, 2025 · MySQL AI now lets you generate SQL queries from natural-language statements using the new NL_SQL routine, making it easier for you to interact ...