
FoundationDB

FoundationDB is an open-source, distributed database management system designed as an ordered key-value store that supports ACID-compliant transactions across clusters of commodity servers, enabling scalable storage and retrieval of large volumes of structured data. It serves as a foundational layer for multi-model data storage, allowing developers to build various database interfaces—such as document-oriented, relational, or graph models—on top of its core key-value API without sacrificing consistency or performance. Originally developed in 2009 by the company FoundationDB, the technology emphasizes reliability through deterministic simulation testing that models an entire cluster's behavior in a single-threaded process to uncover bugs and ensure correctness under diverse failure scenarios. Apple acquired the company in March 2015 to enhance its cloud services infrastructure, after which development continued internally. In April 2018, Apple open-sourced FoundationDB under the Apache 2.0 license, fostering community contributions while maintaining its production-grade stability for handling read/write-intensive workloads at low cost. Key strengths of FoundationDB include its support for horizontal scalability, automatic data replication and recovery from hardware failures, and industry-leading throughput on standard hardware, making it suitable for applications requiring strong consistency and high availability. The system supports stateless layers that extend its functionality, such as the Document Layer for MongoDB-compatible APIs and the Record Layer for structured record storage with indexing and querying, enabling flexible data modeling within a unified, transactionally consistent environment.

Overview

Description

FoundationDB is a free and open-source, multi-model distributed NoSQL database with a shared-nothing architecture, owned by Apple Inc. since its acquisition in 2015. It serves primarily as an ordered key-value store designed to handle large volumes of structured data across clusters of commodity servers, supporting ACID transactions for all operations. The database employs an unbundled design that decouples transaction management from storage, enabling independent scaling of components and the flexible layering of higher-level data models—such as relational or document stores—on its foundational key-value interface. Development of FoundationDB began in 2009 under founders Nick Lavezzo, Dave Scherer, and Dave Rosenthal, who aimed to address limitations in existing distributed databases by combining scalability with strong transactional guarantees.

Key Characteristics

FoundationDB distinguishes itself through its provision of strict serializability for all transactions, ensuring a global order across the entire database without relying on relaxed consistency models. This ACID-compliant approach uses optimistic concurrency control combined with multi-version concurrency control to guarantee that committed transactions appear to execute in a single, sequential order, even in a distributed environment. The system achieves fault tolerance via automatic leader election among coordinator processes and replication of transaction logs across multiple storage nodes, allowing it to maintain availability during node failures. With a replication factor of typically three, FoundationDB can tolerate up to two simultaneous machine failures while continuing operations, and recovery from faults occurs in under five seconds in most cases. High throughput and low latency are enabled by batching of transactions on proxy servers and deterministic version assignment during commits, which minimizes coordination overhead. Under moderate loads on commodity hardware, individual reads typically complete in about 1 ms, while the system scales to handle heavy workloads, such as up to 8.2 million operations per second (90% reads, 10% writes) on clusters of 24 machines. As of November 2025, the latest stable release is 7.4, introducing enhancements like Backup V2 that reduce transaction log writes by 50% and improve overall commit latency. FoundationDB supports multi-model data storage through its layered architecture, where higher-level abstractions for document, relational, graph, and other models are built atop the ordered key-value store without altering the underlying engine. Examples include the Record Layer for relational-like data and integrations with graph databases like JanusGraph. The database employs an ordered key space based on lexicographic ordering of byte strings, facilitating efficient range queries and scans.
The tuple layer provides an order-preserving encoding for composite keys, such as nesting strings and integers while maintaining sort order from left to right, with keys recommended to be under 1 kB for optimal performance (maximum 10 kB).
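The left-to-right ordering property can be illustrated with a simplified sketch of an order-preserving encoding. This is an illustration only, not FoundationDB's actual tuple format, which uses a richer set of type codes and escaping rules:

```python
import struct

def encode_element(e):
    # Simplified order-preserving encoding (NOT the real FDB tuple format).
    # A type tag keeps different types in a fixed relative order; within a
    # type, the byte layout preserves the natural element order.
    if isinstance(e, str):
        # tag 0x02 + UTF-8 bytes + 0x00 terminator (assumes no NUL in string)
        return b"\x02" + e.encode("utf-8") + b"\x00"
    if isinstance(e, int):
        # tag 0x14 + biased 64-bit big-endian: order-preserving for int64
        return b"\x14" + struct.pack(">Q", e + 2**63)
    raise TypeError(f"unsupported element type: {type(e)!r}")

def pack(t):
    """Concatenate element encodings; byte order then matches tuple order."""
    return b"".join(encode_element(e) for e in t)

keys = [("NY", 5), ("CA", 10), ("CA", 2), ("NY", 1)]
# Sorting the packed byte strings gives the same order as sorting the
# tuples themselves, which is what makes range scans over a prefix work.
assert sorted(pack(k) for k in keys) == [pack(k) for k in sorted(keys)]
```

Because the encoded bytes sort exactly like the tuples, a range read over the bytes produced for a prefix such as ("CA",) returns precisely the entries whose first element is "CA".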

Architecture

Core Components

FoundationDB's architecture is built around a set of core processes that enable distributed operation while maintaining transactional consistency. These components include coordinators for cluster oversight, storage processes for persistence, and proxy processes for request handling, all operating within a versioned data model that timestamps mutations with global version numbers to ensure consistent ordering. This design follows a shared-nothing architecture, where individual nodes lack shared state and instead rely on transactional coordination for consistency. Coordinator processes form a highly available group that persists essential system metadata on disk, including the cluster file specifying access points as IP:port pairs. They facilitate master election to select a cluster controller, which monitors server health, recruits other processes, and stores configuration to enable fault-tolerant management. This setup ensures that even in the presence of failures, the cluster can rapidly re-elect leadership and maintain operational continuity. Storage processes, known as storage servers, manage data persistence on disk using a B-tree structure implemented with a modified SQLite engine, supported by log-structured transaction logs for mutations. They maintain multi-version concurrency control (MVCC) data within a 5-second mutation window, buffering recent changes in memory before durable writes, which allows efficient handling of versioned updates without immediate full persistence. This versioning aligns with FoundationDB's ordered key-value model, where keys maintain a total lexicographic order to support range queries and efficient storage. Proxy processes consist of stateless GRV (Get Read Version) proxies and commit proxies that collectively handle client interactions. GRV proxies issue snapshot read versions to clients, while commit proxies route transaction requests, perform load balancing across the cluster, and orchestrate commit sequencing to assign global commit versions.
In the versioned data model, all data changes receive a unique global version number—advancing at up to 1 million versions per second—sourced from GRV proxies for reads and the master-coordinated sequencer for writes, guaranteeing consistent views across the system. The shared-nothing design decouples these components into independent nodes that scale horizontally without shared memory or disks, coordinating solely through the transaction protocol for operations like version assignment and data replication. Coordinators bootstrap the cluster by electing the cluster controller, which in turn directs proxies to distribute load and route requests to storage processes; storage servers then pull mutations from the transaction logs asynchronously and apply them in version order, forming a cohesive pipeline that isolates failures and optimizes throughput. This enables FoundationDB to execute transactions speculatively on the client while ensuring server-side enforcement of serializability at commit.
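For reference, the cluster file that coordinators and clients share is a single line naming the cluster and listing coordinator endpoints. A typical example, with placeholder description, ID, and addresses, looks like:

```
mydb:Ab1cD2eF@10.0.0.1:4500,10.0.0.2:4500,10.0.0.3:4500
```

The format is description:ID@address:port[,address:port...], where the description and ID identify the cluster and the addresses point at the coordination servers; every process uses this file to locate the coordinators and bootstrap from there.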

Transaction Management

FoundationDB employs optimistic concurrency control (OCC) for transaction management, allowing transactions to proceed without locks until commit time, where conflicts are detected and resolved. This approach minimizes contention in distributed deployments by enabling parallel execution of transactions across nodes. A key reliability mechanism is the deterministic simulation framework, which replays workloads and resolves potential conflicts in a controlled, single-threaded process before production deployment, ensuring robust handling of edge cases like network partitions or failures during commits. This framework tests the entire system's behavior, including commit logic, under millions of fault scenarios to verify correctness without nondeterminism. Conflict detection occurs at commit via read-set and write-set comparisons, where each transaction records the keys read and written along with their versions. Resolvers, distributed across key shards, use these sets to check for read-write conflicts by comparing against concurrent transactions' write sets between the read version and proposed commit version; if a read key was written after the transaction's read version, the transaction aborts to maintain serializability. While global versioning provides the temporal ordering for these checks, the per-transaction version tracking ensures efficient parallel resolution. The commit protocol coordinates atomicity through a handshake among proxies, coordinators (via the master server), and storage components. Clients submit batched mutations to proxies, which request a monotonically increasing commit version from the master server before dispatching to resolvers for conflict checks. Upon approval, proxies append the mutations to replicated transaction logs (as redo records) on log servers, ensuring durability across a configurable replication factor; storage servers then asynchronously apply these redo records to persistent data. Proxies only acknowledge success to clients after confirming writes to the required number of log replicas.
Transactions operate under snapshot isolation for reads, capturing a consistent view at the assigned read version to avoid anomalies during execution. Serializability is enforced at commit by the conflict detection, rejecting transactions that would violate a serial order relative to concurrent commits. Mutations are batched into redo records at the proxy level, enabling efficient writes to transaction logs and reducing overhead for high-throughput workloads.
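The read-set/write-set check described above can be modeled in a few lines. This is a toy single-resolver sketch for illustration, not FoundationDB's actual resolver code:

```python
class ToyResolver:
    """Tracks, per key, the version of the last committed write, and applies
    FDB-style optimistic conflict detection at commit time."""

    def __init__(self):
        self.last_write = {}  # key -> commit version of latest write

    def try_commit(self, read_version, commit_version, read_set, write_set):
        # Read-write conflict: a key we read was overwritten after our
        # snapshot was taken, so committing would break serializability.
        if any(self.last_write.get(k, 0) > read_version for k in read_set):
            return False  # abort; the client retries at a newer read version
        for k in write_set:
            self.last_write[k] = commit_version
        return True

r = ToyResolver()
# Two transactions both started at read version 10 and both touch key "a".
assert r.try_commit(10, 11, read_set={"a"}, write_set={"a"}) is True
# The second now conflicts: "a" was written at version 11 > 10, so it aborts.
assert r.try_commit(10, 12, read_set={"a"}, write_set={"b"}) is False
```

In the real system this check is partitioned across multiple resolver processes by key range, which is what allows conflict detection itself to scale horizontally.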

Storage and Distribution

FoundationDB employs a log-structured storage model where mutations are recorded in append-only transaction logs on dedicated log servers, ensuring fast commit latencies through synchronous replication and fsync operations for durability. These logs capture changes in version order, with storage servers asynchronously pulling and applying mutations to maintain a durable, versioned key-value store. The storage engines, such as the default SSD engine or the higher-throughput Redwood engine introduced in 7.0, periodically compact applied mutations into efficient on-disk structures, reducing space usage and optimizing read performance by merging versions and reclaiming space from deletions. Data distribution in FoundationDB is achieved through automatic sharding of the key space into contiguous ranges, typically sized between 125 MB and 500 MB, assigned to storage servers for horizontal scaling. Shards are dynamically split or merged based on size or write hotspots to prevent imbalances, with the data distributor managing assignments to ensure even load across the cluster. Replication occurs via redundancy groups, or "teams," where each shard maintains multiple copies—defaulting to three replicas—distributed across fault domains like machines or racks to tolerate failures without data loss. Background rebalancing handles data movement to maintain balance and recover from failures, such as restoring replication in unhealthy teams or relocating shards after machine removals. The data distributor monitors storage metrics, like bytes stored, and initiates shard migrations without considering read traffic, prioritizing byte-level balance to minimize impacts during ongoing operations. This process provides self-healing by continuously adapting to cluster changes, such as adding or removing nodes. FoundationDB supports backup and restore operations through versioned snapshots that capture consistent point-in-time states without pausing the database, using tools like fdbbackup to stream data to external storage.
Introduced in version 7.4, Backup V2 optimizes this by reducing system writes by up to 50%, lowering commit latency, and decreasing the required number of transaction log servers through partitioned log handling and incremental backup options. Encryption is configurable for data both at rest and in transit. At rest, FoundationDB supports native encryption using AES-256 CTR mode since version 7.2, integrated with external key management services (KMS) via a generic connector framework; data and metadata are encrypted on flush to disk, with headers preserved for decryption during reads. In transit, Transport Layer Security (TLS) is enabled cluster-wide using LibreSSL, requiring certificate and key files for all inter-process communications to ensure authenticated and encrypted connections.
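As a configuration sketch of the transit-encryption setup described above (file paths and the port are placeholders, and option spellings should be checked against the documentation for the deployed version), a TLS-enabled server process is started along these lines:

```shell
fdbserver --public_address auto:4500:tls \
          --listen_address public \
          --tls_certificate_file /etc/foundationdb/cert.pem \
          --tls_key_file /etc/foundationdb/key.pem \
          --tls_ca_file /etc/foundationdb/ca.pem \
          --tls_verify_peers Check.Valid=1
```

The :tls suffix on the public address marks the endpoint as TLS-only; every server process and client in the cluster needs matching TLS options for traffic to be fully authenticated and encrypted.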

Features

ACID Compliance and Serializability

FoundationDB ensures full ACID (Atomicity, Consistency, Isolation, Durability) compliance for all transactions, providing robust guarantees in a distributed environment through optimistic concurrency control and multi-version concurrency control (MVCC). This design allows developers to rely on serializable transactions without manual conflict resolution, making it suitable for applications requiring reliable data integrity across clusters. Atomicity is achieved via an all-or-nothing commit protocol, where a transaction's writes are either fully applied or entirely rolled back in case of conflicts or failures. During commit, the system assigns a version and checks for read-write or write-write conflicts; if any are detected, the transaction aborts and rolls back automatically, ensuring no partial updates occur. This protocol, involving sequencers for version stamping and resolvers for conflict detection, guarantees that concurrent transactions do not interfere partially. Consistency is maintained through global versioning and the absence of partial writes, where every transaction operates on a consistent database state and produces one. The system uses MVCC to assign read versions at transaction start and commit versions only upon successful conflict resolution, preventing any intermediate states from being visible to other transactions. This ensures that application-defined invariants, such as data relationships, remain intact even under high concurrency. Isolation is provided via snapshot reads and conflict-free serialization, allowing transactions to read from a point-in-time view without blocking writers. Reads are performed against a snapshot version determined by a get-read-version (GRV) request, while writes are buffered locally until commit, where conflicts are resolved optimistically. This mechanism supports concurrent execution without dirty reads, non-repeatable reads, or phantom reads, as conflicting transactions are serialized at commit time. Durability is ensured through synchronous replication and explicit disk synchronization, where committed writes are persisted to stable storage on multiple nodes before acknowledgment.
Upon commit, data is replicated to a quorum of log servers (typically f+1 for fault tolerance against f failures), with fsync operations confirming writes to disk, guaranteeing durability even after crashes. This adds a small latency overhead but provides strong guarantees in distributed setups. FoundationDB achieves strict serializability, the strongest form of isolation, ensuring that the execution of transactions is equivalent to some serial order that respects both the real-time order of non-overlapping transactions and the commit order. This is achieved through the system's versioning mechanism: a central sequencer assigns monotonically increasing read and commit versions, while resolvers detect and prevent cycles in the serialization graph via conflict ranges. As a result, committed transactions appear to execute in a serial order consistent with real time, with no transaction observing changes from later-starting but earlier-committing ones, thus eliminating anomalies like write skew. This guarantee holds across the entire key space, simplifying reasoning about concurrent operations.
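The snapshot-read behavior behind these guarantees can be sketched with a toy versioned map; this models the semantics only, not FoundationDB's storage-server implementation:

```python
class VersionedStore:
    """Toy MVCC store: every write keeps its commit version, and a read at
    version v returns the latest value committed at or before v, so a
    transaction pinned to one read version sees a stable snapshot."""

    def __init__(self):
        self.history = {}  # key -> list of (commit_version, value)

    def write(self, key, value, version):
        self.history.setdefault(key, []).append((version, value))

    def read(self, key, version):
        visible = [(v, val) for v, val in self.history.get(key, [])
                   if v <= version]
        return max(visible)[1] if visible else None

s = VersionedStore()
s.write("balance", 100, version=5)
s.write("balance", 80, version=9)
assert s.read("balance", version=7) == 100  # snapshot at v7 ignores the v9 write
assert s.read("balance", version=9) == 80
assert s.read("balance", version=3) is None  # nothing committed yet at v3
```

A transaction that performs all its reads at one read version therefore sees a single consistent snapshot, which is exactly what makes dirty and non-repeatable reads impossible.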

Scalability Mechanisms

FoundationDB achieves horizontal scalability by allowing the addition of nodes to the cluster, which enables linear scaling of read operations as more Storage Servers are introduced. The system automatically partitions the key space into ranges distributed across these nodes, with the Data Distributor continuously monitoring and relocating data shards to maintain balance based on load and utilization. This dynamic relocation ensures even data distribution without manual intervention, supporting clusters that span from a single machine to dozens of multicore servers. Throughput in FoundationDB scales to millions of operations per second through parallelism across multiple nodes and minimized contention via optimistic concurrency control, where transactions proceed in parallel and conflicts are resolved at commit time with a low conflict rate of approximately 0.73%. Writes scale by adding Proxies, Resolvers, and Log Servers, while the system's unbundled architecture separates transaction processing from storage to avoid bottlenecks. In benchmarks, configurations with 24 machines have demonstrated up to 2.779 million operations per second. Elasticity is provided through live reconfiguration capabilities that allow cluster resizing without downtime, as the system supports adding or removing processes dynamically while the Data Distributor rebalances data in the background. Recovery from failures or configuration changes occurs rapidly, with median recovery times under 5 seconds, enabling seamless adaptation to varying workloads. Data redistribution for hot spots completes in milliseconds, and larger adjustments take minutes, ensuring continuous availability during scaling events. Performance tuning in FoundationDB includes configurable redundancy levels, where replication factors (such as k = f + 1 replicas, with f being the number of failures tolerated) can be adjusted to balance durability and throughput. Batch sizes for commits are dynamically tuned by the system to optimize latency and throughput, adapting to current load conditions.
These parameters allow operators to fine-tune the cluster for specific performance requirements without altering the core architecture. Monitoring and metrics in FoundationDB track key indicators such as throughput (e.g., 390.4K reads/s and 138.5K writes/s in tested configurations), latency (averaging 1 ms for reads and 22 ms for commits), and cluster health through components like the Ratekeeper, which monitors system load and adjusts transaction rates to prevent overload. The cluster controller oversees process health and coordinates reconfiguration, providing operators with insights into storage utilization, replication status, and overall performance to maintain stability under load.
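In operation, redundancy and process-role counts are adjusted through the fdbcli tool. A sketch of such tuning follows; knob names vary somewhat across versions (for example, the proxy roles were split in 7.0), so treat these as illustrative:

```
fdb> configure triple ssd            # three replicas on the SSD storage engine
fdb> configure logs=8                # widen the write path with more log servers
fdb> configure commit_proxies=4 grv_proxies=2
```

Because these changes take effect through live reconfiguration, the Data Distributor migrates data in the background while the cluster keeps serving traffic.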

Layered Design and APIs

FoundationDB employs a layered architecture that allows developers to build higher-level data models on top of its core ordered key-value store, enabling extensibility without altering the underlying storage engine. This design separates the low-level transactional storage from application-specific abstractions, ensuring that layers remain stateless and can scale independently while leveraging FoundationDB's transactional guarantees. Layers are implemented as client-side libraries or stateless services that translate higher-level operations into base key-value transactions, facilitating the creation of relational, document-oriented, or custom data models. At its foundation, the key-value API provides basic operations for data manipulation within ACID transactions: get retrieves the value associated with a specific key; set stores or updates a value at a given key; clear removes a key-value pair; and range reads fetch all key-value pairs within a specified key range, preserving the ordered nature of keys for efficient prefix-based queries. These operations form the minimal interface, treating all data as byte strings, which supports arbitrary serialization but requires careful key design to avoid hotspots or inefficient scans. The tuple layer builds directly on this base by providing a structured encoding scheme for composite data types, allowing developers to pack and unpack typed elements—such as strings, integers, booleans, UUIDs, or nested tuples—into ordered keys that maintain lexicographic sorting. For instance, a composite key like (state, county) can be encoded as a single key prefix, enabling range queries over subsets of data, such as all counties in a given state, without custom logic. This layer is integrated into all language bindings, ensuring cross-language compatibility for key construction and decoding. For hierarchical organization and indexing, the directory layer manages namespaces as a hierarchy, where paths like ('users', 'profiles') map to dedicated key subspaces for isolation and efficient relocation.
It supports operations such as creating, opening, moving, and listing subdirectories, which allocate unique prefixes to prevent key collisions and facilitate scalable indexing for relational or nested models. This enables tree-based data partitioning, where related records are grouped under common prefixes for fast range reads, akin to filesystem directories but optimized for distributed key-value storage. The Record Layer extends these foundations to offer SQL-like semantics for structured data, including schema definition, primary and secondary indexes, and declarative queries over records with nested types. It stores records as serialized values under indexed keys, ensuring transactional consistency for index updates and supporting multi-record operations like joins or aggregations in a single transaction. Designed for multi-tenancy, this layer allows elastic scaling across stateless servers, making it suitable for high-volume applications requiring relational features without a full RDBMS. FoundationDB provides official language bindings for C, C++, Java, Python, Go, Node.js, Ruby, and PHP, each exposing the base API and higher layers with asynchronous support to handle concurrent operations efficiently—such as Python's integration with gevent for non-blocking I/O. These bindings ensure low-latency access to the core operations and layers, with async patterns allowing thousands of concurrent transactions per client. Developers can create custom layers by defining stateless translators that map domain-specific models to key-value transactions, often combining tuple encoding for keys and directory structures for organization. This supports the development of specialized abstractions, such as sharded counters or graph stores, by ensuring all reads and writes occur atomically. For example, custom layers have been built for document-oriented storage, enabling FoundationDB to serve as a backend for custom sharded systems or higher-level databases.
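The minimal API surface described above (get, set, clear, and range reads over an ordered key space) can be mimicked with an in-memory toy store. Real FoundationDB executes these operations inside distributed ACID transactions, which this sketch deliberately does not model:

```python
class ToyKV:
    """In-memory stand-in for FDB's core API surface, modeling only the
    ordered-key semantics (no transactions, distribution, or durability)."""

    def __init__(self):
        self.data = {}

    def set(self, key: bytes, value: bytes):
        self.data[key] = value

    def get(self, key: bytes):
        return self.data.get(key)

    def clear(self, key: bytes):
        self.data.pop(key, None)

    def get_range(self, begin: bytes, end: bytes):
        # Lexicographic half-open range [begin, end), as in FDB
        return [(k, self.data[k]) for k in sorted(self.data)
                if begin <= k < end]

db = ToyKV()
db.set(b"users/alice", b"1")
db.set(b"users/bob", b"2")
db.set(b"zzz", b"3")
# Prefix scan: every key starting with "users/" falls in this range,
# because "0" (0x30) is the byte immediately after "/" (0x2F).
rows = db.get_range(b"users/", b"users0")
assert [k for k, _ in rows] == [b"users/alice", b"users/bob"]
```

Layers are built by composing exactly this interface: the tuple layer decides how keys are encoded, the directory layer decides which prefix a scan uses, and the range read does the rest.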

History and Development

Founding and Initial Release

FoundationDB was founded in 2009 by Nick Lavezzo, Dave Scherer, and Dave Rosenthal in Vienna, Virginia, as a startup aimed at developing advanced distributed database technology. The three co-founders had previously collaborated at Visual Sciences, an early web analytics platform later acquired by Omniture, where they gained experience in scalable data systems. Drawing from this background, they established the company to address key shortcomings in existing database solutions, particularly the trade-offs between scalability and data consistency in handling massive workloads. The initial motivation stemmed from the growing demands of cloud-based applications requiring robust, fault-tolerant storage for billions of users and petabytes of data, where traditional relational databases struggled with horizontal scaling and NoSQL alternatives often sacrificed ACID properties for performance. The founders envisioned a system that provided foundational building blocks for distributed applications, emphasizing resilience against failures in machines, networks, disks, and other components. This led to early prototyping of core innovations, including a deterministic simulation framework that enabled exhaustive testing of system behaviors under simulated fault conditions—allowing the system to verify correctness without real-world hardware failures—and a layered architecture that separated transactional storage from higher-level data models for greater flexibility. In 2011, FoundationDB raised a $5.5 million round led by SV Angel, which supported initial development and team expansion. The company launched an alpha program in January 2012, followed by a public beta in March 2013, culminating in the general availability of version 1.0 on August 20, 2013, as a closed-source product initially targeted at enterprise partners and early adopters. This release marked the debut of its unbundled transactional key-value store, which quickly gained attention for its ability to deliver serializable transactions at scale.

Apple Acquisition and Open-Sourcing

In March 2015, Apple acquired FoundationDB, then a startup developing a distributed database system, for an undisclosed amount, primarily to strengthen the infrastructure supporting its cloud services and handle growing data volumes across its applications. Following the acquisition, Apple shuttered the independent operations of FoundationDB, rendering the software proprietary and restricting external access, while the company's website displayed a notice stating that it had "evolved" its mission and would no longer offer the product commercially. This closure halted public development and support, with the public repositories for FoundationDB components emptied, leaving users of related open-source layers uncertain about future compatibility. Apple maintained internal development of FoundationDB during this period, integrating it into its cloud infrastructure without public releases. In April 2018, Apple reversed course by open-sourcing the FoundationDB core under the Apache 2.0 license, hosted on GitHub, to encourage broader adoption and community involvement in building layered extensions atop the key-value store. This release included documentation on contribution processes and governance, marking a shift toward transparent development. Post-open-sourcing, Apple continued leading major enhancements for its internal needs while periodically issuing binary releases to align with community versions, ensuring compatibility. The move spurred community growth, including the launch of dedicated forums for discussions on usage and contributions, as well as expansions in language bindings, such as for Go, to facilitate integration in diverse applications.

Major Releases and Recent Advancements

FoundationDB's initial open-source release, version 6.0.15, arrived on November 19, 2018, marking the first major update following the project's open-sourcing in April of that year. This version introduced foundational clustering capabilities, including support for asynchronous replication to remote data centers within a single cluster, enabling basic multi-region configurations for improved availability and disaster recovery. Version 6.3, with its first stable release as 6.3.9 in March 2021, built on prior multi-region features by enhancing failover mechanisms, such as automatic promotion of remote data centers during primary outages (configurable and off by default). It also advanced backup functionality with optimized partial restores that filter log data before loading, reducing restore times, and introduced backup workers to double maximum write bandwidth for continuous backups. In April 2022, version 7.0 debuted the Redwood storage engine, a B-tree-based system that delivered higher throughput and approximately 50% lower space usage compared to the prior engine, significantly improving performance for write-heavy workloads. This release also separated get-read-version (GRV) proxies from commit proxies to minimize contention, achieving up to 30% reductions in p99 tail latencies for read operations. Version 7.4, released in 2025, introduced Backup V2, a redesigned backup system that halves writes to the transaction logs by decoupling backup logging from commit paths, thereby improving overall commit latency and reducing the required number of transaction log servers. The 7.4.5 patch followed on September 13, 2025, incorporating stability fixes alongside these enhancements. Since open-sourcing, the FoundationDB community has contributed new language bindings, including community-developed bindings for Rust via the foundationdb crate (version 0.10.0 as of November 2025), facilitating easier integration in asynchronous Rust contexts.
Additionally, extensions to the layered ecosystem have proliferated, with community-developed layers such as query languages built on top of the tuple and directory layers, enabling domain-specific data models without altering the core engine.

Use Cases and Limitations

Applications and Integrations

FoundationDB has been employed in high-availability systems within the financial sector, where it supports consistent data access across global locations and enables quick reproducibility of historical results for what-if analysis. In one financial-industry evaluation, it demonstrated its suitability as a resilient, scalable persistence layer for risk artifacts, handling expansion from gigabytes to terabytes of data while targeting 99.9% availability during failover. As a backend for other databases, FoundationDB serves as the underlying transactional key-value store for Tigris Data, a multi-model platform that provides globally available data services. Tigris leverages FoundationDB's durability, replication, and sharding to manage multi-tenant databases with hierarchical structures, supporting secondary and composite indexes through flexible key encoding for efficient CPU and storage optimization. Notable users include Apple, which utilizes FoundationDB via its Record Layer for metadata storage in iCloud's CloudKit service, enabling an extreme multi-tenant architecture that hosts billions of independent per-user databases. This setup supports personalized features and high-concurrency workloads, with each user's data isolated in unique subspaces for low-latency queries. In open-source projects, FoundationDB powers alternatives to traditional transactional systems by providing a robust key-value foundation for custom layers, such as the FoundationDB Record Layer, which offers relational-like semantics for structured data storage. FoundationDB integrates with Kubernetes through an official operator that automates cluster management, including deployment, monitoring, and reconciliation via custom resource definitions. This facilitates orchestration in containerized environments, allowing FoundationDB clusters to be provisioned across nodes with features like CLI access and backup support.
For cloud services, FoundationDB supports deployments on AWS and other public clouds, with configurations optimized for scalability and reliability across regions, enabling integration into hybrid or multi-cloud setups not through native managed offerings but through standard infrastructure tools. Case studies highlight FoundationDB's ability to scale to petabyte-level datasets with low-latency queries, as seen in Snowflake's metadata store, which handles high-frequency operations for over 1,000 customers using triple replication across cloud zones. Snowflake achieves sub-millisecond latency for tasks like zero-copy cloning and time travel, supporting diverse access patterns akin to OLTP workloads. Similarly, Apple's deployment demonstrates petabyte-scale handling of billions of operations per second for CloudKit, ensuring consistent performance in high-concurrency environments.
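With the Kubernetes operator installed, a cluster is declared as a custom resource that the operator reconciles. A minimal sketch follows; the version string and process counts are placeholders, and field names should be checked against the operator release in use:

```yaml
apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBCluster
metadata:
  name: sample-cluster
spec:
  version: 7.1.26        # FoundationDB version to run (placeholder)
  processCounts:
    storage: 5           # scale storage by raising this count
    log: 3
    stateless: 4
```

Editing the process counts and re-applying the resource triggers the same live-reconfiguration path described earlier, with the operator adding or excluding processes and waiting for data movement to complete.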

Design Trade-offs and Constraints

FoundationDB's design emphasizes strict serializability and fault tolerance through an unbundled architecture, but this introduces deliberate constraints to maintain reliability and performance. One key constraint is the absence of a built-in query language, which forces developers to implement SQL-like or other advanced querying features via separate layers, such as the Record Layer or Document Layer. This layered approach enhances flexibility for custom data models but increases development complexity, as applications must handle query planning, indexing, and optimization independently rather than relying on native database support. The commitment to serializable isolation via optimistic multi-version concurrency control (MVCC) also incurs higher memory usage on clients compared to systems with weaker, non-serializable models. Clients buffer read and written keys and values during execution to enable efficient conflict detection and retries, consuming memory proportional to the transaction's data volume. This overhead stems from maintaining transaction state in client memory to detect potential conflicts without server-side locking, a necessity for achieving strict serializability that systems using tunable quorum levels avoid. A fixed 5-second MVCC window further bounds this overhead on storage servers by limiting version history, but it amplifies the cost of complex transactions. These mechanisms impose limits on very large single transactions, as the client-side buffering and 10 MB cap on affected data (including reads, writes, and conflict ranges) can lead to excessive overhead or timeouts. Transactions exceeding 5 seconds are unsupported to prevent unbounded resource accumulation in the MVCC subsystem, requiring developers to decompose large operations into smaller, retryable units. This constraint prioritizes system stability over accommodating bulk workloads in one go, contrasting with databases that permit longer or larger operations at the expense of consistency guarantees.
In multi-region deployments, FoundationDB achieves low-latency commits through asynchronous replication and satellite transaction logs that make writes durable near the primary region before shipping them to remote regions, but this demands careful tuning of replication policies and region priorities to mitigate WAN latency impacts. Misconfiguration can result in elevated commit times, as seen in setups with 60 ms inter-region round trips, necessitating optimizations like region-aware client placement. Relative to alternatives, FoundationDB provides stronger consistency (strict serializability) than Cassandra's tunable consistency levels, enabling transactions across the entire key space without retrofitting. However, its automatic, key-range-based partitioning is less flexible than MongoDB's sharding model, which allows custom shard keys and strategies for workload-specific distribution.
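The region and satellite tuning described above is expressed as a JSON configuration applied through `fdbcli`. A minimal sketch, assuming hypothetical datacenter ids `dc1`, `dc1-sat`, and `dc2`: the satellite datacenter holds synchronous log copies close to the primary, while the second region receives data asynchronously and can take over based on priority.

```shell
# Hypothetical two-region layout; datacenter ids are illustrative.
cat > regions.json <<'EOF'
{"regions": [
  {"datacenters": [
     {"id": "dc1",     "priority": 1},
     {"id": "dc1-sat", "priority": 1, "satellite": 1}],
   "satellite_redundancy_mode": "one_satellite_double"},
  {"datacenters": [
     {"id": "dc2",     "priority": 0}]}
]}
EOF

# Apply the region layout, then enable a second usable region so data
# is replicated (asynchronously) to dc2.
fdbcli --exec "fileconfigure regions.json"
fdbcli --exec "configure usable_regions=2"
```

Clients and server processes must also be started with matching datacenter locality flags so that reads and commits are routed to the nearest region, which is the "region-aware client placement" the text refers to.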
