
RocksDB

RocksDB is an open-source, embeddable persistent key-value store library written in C++, designed for high-performance server workloads with support for point lookups, range scans, and transactions. It organizes data using a log-structured merge-tree (LSM-tree) architecture, where writes are appended to an in-memory memtable and a write-ahead log (WAL) for durability, and periodically flushed to immutable sorted string table (SST) files on disk. Optimized for fast storage media such as SSDs and NVMe drives, RocksDB handles datasets up to a few terabytes while providing configurable options for compression, multi-threading, and compaction strategies to balance read/write throughput and space efficiency.

Developed by the Facebook (now Meta) Database Engineering Team, RocksDB originated as a fork of Google's LevelDB in 2012, incorporating optimizations inspired by HBase and custom extensions to address production-scale demands at Facebook. Dual-licensed under the Apache 2.0 and GPLv2 licenses, it has evolved through community contributions, with key enhancements including multi-threaded compaction for up to 10x performance gains on SSDs compared to single-threaded approaches. As of 2025, RocksDB powers storage backends in numerous systems, including MyRocks (a MySQL storage engine), Apache Kafka Streams, and Apache Flink, demonstrating its adaptability across databases, stream processing, and embedded applications.

At its core, RocksDB's architecture separates mutable in-memory structures from immutable on-disk files to minimize random I/O and enable efficient reads via Bloom filters and indexes in SST files. The write path appends operations to the memtable (typically a skiplist or hash-linked list) and WAL, triggering flushes to Level-0 files when the memtable fills, followed by background compaction that merges files across leveled tiers to maintain sorted order and drop tombstones. Reads combine data from the memtable, immutable memtables, and SST files across levels, using iterators for range queries and supporting user-defined comparators for key ordering.

Key features distinguishing RocksDB from LevelDB include support for column families to partition data logically, transactions with pessimistic or optimistic concurrency control, prefix iterators for efficient partial scans, and pluggable components like merge operators and custom memtable implementations. It also offers advanced compaction styles, such as leveled (space-optimized) and universal (write-optimized), along with backup/restore tools, checkpoints, and integration with remote storage for cloud environments. These capabilities make RocksDB suitable for write-heavy workloads in distributed systems, where it achieves low-latency operations through tunable block caches and I/O scheduling.

Overview

Definition and Purpose

RocksDB is a persistent, embeddable key-value storage engine written in C++ that supports arbitrary byte streams as both keys and values. It is built on earlier work from Google's LevelDB, with enhancements tailored for modern hardware environments. As an open-source library developed and maintained by the Facebook Database Engineering Team, RocksDB provides a flexible interface for applications requiring reliable, on-disk data persistence without the overhead of a separate server process.

The primary purpose of RocksDB is to deliver fast, low-latency read and write operations optimized for server workloads, particularly those leveraging fast storage such as SSDs and high-speed disk drives. It employs a log-structured merge-tree architecture designed to handle high-throughput scenarios efficiently, making it suitable for environments where rapid data access and modification are critical. This focus on performance ensures that applications can achieve low-latency operations while maintaining data durability across system restarts.

RocksDB targets use cases involving large-scale data storage in distributed systems, stream-processing platforms, and databases that demand efficient local storage solutions. For instance, it serves as a backend engine for stateful services at Facebook, supporting over 30 production applications with diverse workloads as of 2021. Its architecture supports multi-core processors to maximize parallelism and is highly tunable for various storage media, including SSDs, HDDs, and in-memory configurations, allowing developers to balance read amplification, write amplification, and space amplification based on specific workload and hardware needs.
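Because RocksDB runs in-process, basic usage amounts to linking the library and calling the DB API directly, with no server to deploy. The following is a minimal sketch of embedding RocksDB in a C++ application; the database path /tmp/rocksdb_example is an arbitrary placeholder.

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;  // create the database if it does not exist

  // Open the database in-process; no server or network hop is involved.
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb_example", &db);
  assert(s.ok());

  // Keys and values are arbitrary byte strings (rocksdb::Slice).
  s = db->Put(rocksdb::WriteOptions(), "user:1001", "alice");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "user:1001", &value);
  assert(s.ok() && value == "alice");

  delete db;  // closes the database and releases resources
  return 0;
}
```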

Relation to LevelDB

RocksDB originated as a fork of Google's LevelDB key-value storage library, developed by Facebook engineers in 2012 to overcome limitations in handling multi-core processors and flash-based storage systems. LevelDB, released by Google in 2011, was designed primarily for single-threaded environments and traditional hard disk drives, which proved insufficient for Facebook's high-throughput, IO-bound workloads involving solid-state drives (SSDs). By forking LevelDB, the RocksDB team aimed to create an embeddable engine optimized for modern server hardware, enabling better scalability in production databases like MyRocks and ZippyDB.

Key enhancements in RocksDB diverge significantly from LevelDB's original design. It introduces multi-threaded compaction, allowing concurrent background processes to merge sorted string tables (SSTables) and reduce write stalls, which can boost sustained write rates by up to 10x on SSDs compared to LevelDB's single-threaded approach. Additionally, RocksDB adds support for column families, enabling users to partition a single database instance into multiple logical groups with independent configurations, such as compaction styles and compression algorithms, a feature absent in LevelDB. Prefix bloom filters are another addition, optimizing range scans and point lookups by filtering keys based on prefixes, thereby minimizing unnecessary disk I/O in workloads with sequential or prefixed key patterns. These improvements stem from Facebook's need to manage terabyte-scale datasets efficiently without the lock contention and bottlenecks inherent in LevelDB.

RocksDB maintains much of the core codebase from LevelDB but is released under a dual Apache 2.0/GPLv2 license, ensuring compatibility with its embeddable nature while incorporating Facebook-specific optimizations for large-scale, distributed systems. Over time, the project has evolved independently, with much of the code now original to RocksDB, though it preserves LevelDB's foundational log-structured merge-tree (LSM-tree) architecture. In performance evaluations on flash storage, RocksDB achieves up to 10x faster random writes and bulk loads, and 30% faster random reads, compared to LevelDB, primarily due to reduced write stalls and better utilization of SSD IOPS. These gains highlight RocksDB's focus on IO-bound scenarios, making it suitable for real-time applications at massive scale.

History and Development

Origins and Early Development

RocksDB's development began in 2012 at Facebook, led by Dhruba Borthakur, a former contributor to the Apache HDFS project with extensive experience in distributed storage systems. As a fork of Google's LevelDB, it was designed to overcome the limitations of its predecessor, particularly LevelDB's single-threaded architecture that hindered performance in high-throughput environments. This initiative was driven by the need for an efficient embedded key-value store to support Facebook's growing data infrastructure, including early work on MyRocks, a MySQL storage engine leveraging RocksDB for optimized relational data handling.

Key motivations included adapting to multi-core CPUs and solid-state drives (SSDs), which were becoming prevalent in server workloads but underutilized by LevelDB's design. Facebook's services required low-latency access to vast datasets for features serving over a billion users, where network overhead could double access times compared to local embedded storage. Initial efforts emphasized server-side optimizations for flash media, addressing write amplification and concurrency to handle intensive read-write patterns without the bottlenecks of traditional disk-based systems.

By mid-2013, during its pre-open-source phase, RocksDB was deployed internally across multiple services, such as ZippyDB and secondary indexing systems, managing nearly a petabyte of data and processing billions of operations daily to support core functionalities like news feeds and secondary-index queries. This early adoption validated its design for large-scale, production-grade workloads on SSD-optimized hardware.

Major Releases and Evolution

RocksDB was open-sourced by Facebook on November 21, 2013, under a BSD 3-clause license, with its initial public release designed to meet the demands of high-throughput production workloads on flash storage. The project later transitioned to a dual license of GPLv2 and Apache 2.0 in July 2017 to broaden compatibility with other open-source ecosystems. Early versions, starting from v2.0 in late 2013, emphasized optimizations for write-heavy scenarios, building on LevelDB's foundation while introducing features like multi-threaded writes and universal compaction to reduce write amplification.

Version 3.0, released in May 2014, marked a significant milestone by introducing column family support, enabling multiple related key-value stores within a single database instance for better organization and isolation of data. This release also added configurable checksum functions, enhancing data-integrity options for diverse hardware environments. Subsequent versions in the 3.x and 4.x series refined compaction strategies and added prefix bloom filters to accelerate point lookups by skipping irrelevant SST files, with a new bloom filter format in September 2014 improving space efficiency by up to 40%.

The 5.0 release in January 2017 brought advanced optimizations, including the DB::DeleteRange API for efficient bulk deletions and dynamic adjustment of options to adapt to varying workloads without restarts. Bloom filter capabilities were further enhanced with prefix-based variants, reducing false positives in range scans common in Facebook's applications. Version 7.0, released in early 2022, focused on cross-platform robustness, including improved ARM architecture support through optimized builds and testing on mobile-derived hardware, alongside the introduction of LRUCache v2 for more predictable block caching under memory pressure. This version also advanced user-defined timestamp support in keys for versioning and time-based queries, aligning with evolving needs for temporal data management in distributed systems.

In recent developments, version 9.4.0 arrived in June 2024, enhancing direct I/O capabilities for file reads and writes to bypass the operating system page cache, which reduces CPU overhead in high-IOPS environments like NVMe storage. Version 10.7, released in September 2025, revamps parallel compression with a more efficient threading model, achieving up to 50% CPU reduction in multi-threaded scenarios while preserving compression ratios. As of November 2025, the latest version is 10.7.2. These updates reflect ongoing refinements for modern hardware, such as higher core counts and faster SSDs.

RocksDB's development priorities have evolved in response to hardware trends and production demands at Facebook, as detailed in retrospective analyses. From 2012 to 2016, the emphasis was on minimizing write amplification through innovations like universal compaction, addressing the high cost of writes on flash drives. By 2017, focus shifted to space amplification and CPU efficiency, incorporating advanced compression and caching to handle growing dataset sizes. From 2017 onward, priorities turned to read optimization and multi-tenant isolation, supporting diverse workloads while ensuring scalability in shared environments.

Core Features

Performance Optimizations

RocksDB leverages multi-core processors through parallelized write operations and background compactions executed via configurable thread pools, allowing efficient utilization of systems with 16 or more cores. This design enables multiple threads to handle memtable flushes and compaction tasks concurrently, with reserved threads preventing stalls during write bursts; benchmarks on SSD storage demonstrate up to 10x gains in sustained write throughput compared to single-threaded configurations. By issuing concurrent compaction requests across database instances, RocksDB minimizes bottlenecks in high-throughput workloads, particularly those involving frequent updates.

The storage engine incorporates flash-optimized features to reduce write amplification and align with SSD characteristics, including leveled compaction, which organizes data across levels and prioritizes read performance at the cost of write amplification that typically exceeds 10. In contrast, tiered (universal) compaction merges files more gradually, lowering write amplification at the expense of increased read amplification and space usage, making it suitable for write-heavy scenarios on flash media. Block sizes are configurable from 4KB (the default, aligned with common SSD page sizes) up to 64KB, enabling users to balance I/O efficiency and metadata overhead for optimal performance on solid-state drives. These mechanisms collectively minimize unnecessary data rewriting and ensure compatibility with high-speed, low-latency storage.

RocksDB employs a multi-level caching system centered on an LRU-managed block cache that stores uncompressed data blocks, compressed blocks, index blocks, and filter blocks to accelerate key-value lookups and scans. The cache can be partitioned into separate pools for uncompressed and compressed data, reducing eviction conflicts and improving hit rates under mixed workloads. For sequential reads, such as during iterations, RocksDB implements automatic prefetching that activates after detecting more than two I/O operations on the same SST file, proactively loading subsequent blocks to mitigate I/O stalls and enhance throughput.

Built-in compression support includes algorithms like Snappy, Zstandard, and LZ4, applied at the block level to trade CPU cycles for reduced storage footprint and I/O bandwidth. LZ4 and Snappy offer fast compression and decompression suitable for performance-critical paths, while Zstandard provides higher ratios for space efficiency in larger datasets. Parallel compression can be enabled via multiple threads, allowing simultaneous handling of different blocks to boost throughput on multi-core systems without significantly impacting latency. As of 2025, recent enhancements include a revamp of parallel compression in version 10.7, reducing CPU overhead by up to 50% for multi-threaded workloads on SSDs, and unified memory tracking to cap total usage across instances for better resource control in multi-tenant environments.
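These knobs are exposed through the Options and BlockBasedTableOptions structures. The sketch below shows one plausible configuration, not a recommendation: it wires up parallel background work, per-level compression, and a shared block cache; the sizes and thread counts are illustrative assumptions.

```cpp
#include "rocksdb/cache.h"
#include "rocksdb/db.h"
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

rocksdb::Options MakeTunedOptions() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Size the flush/compaction thread pools for a multi-core host
  // (16 is the documented default hint for IncreaseParallelism).
  options.IncreaseParallelism(16);
  options.max_background_jobs = 8;  // flushes + compactions combined

  // Fast compression on hot levels, stronger compression at the bottom.
  options.compression = rocksdb::kLZ4Compression;
  options.bottommost_compression = rocksdb::kZSTD;

  // Shared LRU block cache, a 10 bits/key Bloom filter, and a 16 KB
  // block size; all three values are illustrative.
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_cache = rocksdb::NewLRUCache(512 * 1024 * 1024);
  table_options.block_size = 16 * 1024;
  table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  return options;
}
```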

Configuration and Tuning

RocksDB offers extensive options to adapt its performance and resource usage to diverse hardware environments and workload patterns, allowing users to fine-tune parameters such as memory allocation, I/O rate limiting, and background operations. These settings are primarily managed through the DBOptions and ColumnFamilyOptions structures in the C++ API, with equivalents in other language bindings, enabling customization without recompiling the library.

A fundamental parameter is the write buffer size, which determines the size of the in-memory memtable before flushing to disk; the default is 64 MB per column family, but it can be increased to 1 GB or more for workloads involving large batch writes to reduce flush frequency and improve throughput. Another critical option is max_open_files, which limits the number of simultaneously open files to respect system file descriptor (FD) constraints in production; setting it to -1 allows all files to remain open for faster access, provided the OS supports high FD limits like 100,000 or more.

For hardware-specific tuning, disabling the write-ahead log (WAL) can improve performance in scenarios where durability is handled externally, eliminating synchronous I/O overhead and leveraging low-latency random writes; this is configured via disableWAL: true in write options. On HDDs, optimizations focus on minimizing seeks, such as increasing bloom filter bits per key from the default 10 to 12-16 bits to achieve false positive rates below 0.1%, thereby reducing unnecessary disk reads during lookups.

Advanced configurations include column family options for multi-tenant deployments, where separate column families can be assigned distinct memtables, bloom filters, and compaction styles to prevent interference between workloads, such as sharding a 10 GB database into 100 instances using universal compaction to limit space amplification. Rate limiting for background I/O, implemented via NewGenericRateLimiter with a target rate like 10 MB/s, prevents overload by throttling flushes and compactions, ensuring foreground reads maintain low latency; the refill period defaults to 100 ms, and fairness ratios prioritize higher-priority operations.

The RocksDB tuning wiki provides practical guides for further adjustments, such as raising max_background_compactions (default 1) to 2-4 on multi-core systems to keep compaction from falling behind without causing CPU contention, while monitoring statistics like bytes_written and compaction time histograms to identify bottlenecks. Users are advised to start with defaults, profile under load, and iteratively tune based on metrics, as over-tuning can lead to suboptimal space or I/O amplification.
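As a concrete illustration of these knobs, the following sketch sets the memtable size, the open-file limit, the Bloom filter density, and a background-I/O rate limiter, and shows a per-write WAL bypass. The specific values and the /tmp/rocksdb_tuned path are assumptions for illustration, not recommendations.

```cpp
#include "rocksdb/db.h"
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/rate_limiter.h"
#include "rocksdb/table.h"

void ConfigureAndWrite() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Larger memtable: fewer flushes for batch-heavy write workloads.
  options.write_buffer_size = 256 * 1024 * 1024;  // default is 64 MB

  // Keep every SST file open if the OS file-descriptor limit allows it.
  options.max_open_files = -1;

  // Throttle background flushes/compactions to ~10 MB/s so foreground
  // reads keep low latency (the refill period defaults to 100 ms).
  options.rate_limiter.reset(
      rocksdb::NewGenericRateLimiter(10 * 1024 * 1024));

  // Denser Bloom filter (14 bits/key) to cut false positives on HDDs.
  rocksdb::BlockBasedTableOptions table_options;
  table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(14));
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/rocksdb_tuned", &db);
  if (!s.ok()) return;

  // Skip the WAL for this write: faster, but the update is lost on a
  // crash unless durability is guaranteed elsewhere (e.g., replication).
  rocksdb::WriteOptions wo;
  wo.disableWAL = true;
  db->Put(wo, "metric:latest", "42");

  delete db;
}
```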

Architecture

LSM-Tree Foundation

RocksDB employs a log-structured merge-tree (LSM-tree) as its foundational data structure, designed to optimize write-heavy workloads by converting random writes into sequential appends. In this architecture, incoming write operations, such as puts, deletes, and merges, are first appended to an in-memory memtable, which serves as a sorted buffer for efficient key-value storage. By default, RocksDB uses a skiplist implementation for the memtable, though alternatives like hash-linked lists or vector-based structures are configurable for specific access patterns. To ensure durability, writes are also optionally logged to a write-ahead log (WAL), a sequential file on disk that enables crash recovery by replaying operations if needed. When the memtable reaches its size limit (typically 64 MB by default), it becomes immutable, and its contents are flushed to disk as a sorted string table (SSTable), forming the basis of persistent storage.

The LSM-tree organizes these SSTables into a multi-level hierarchy on disk, with Level 0 (L0) containing the most recent files directly from memtable flushes, which may overlap in key ranges due to concurrent writes. Higher levels, starting from Level 1 (L1) and beyond, consist of larger, non-overlapping SSTables sorted by key ranges, enabling efficient range scans as each level covers disjoint portions of the key space. This leveled structure allows RocksDB to scale storage capacity exponentially across levels, with each subsequent level typically holding about ten times more data than the previous one, balancing memory usage and disk I/O.

For reads, RocksDB performs a multi-stage lookup beginning with the active memtable and any immutable memtables, followed by scanning relevant SSTables across levels in reverse order (newest to oldest). To accelerate key lookups and avoid unnecessary file accesses, each SSTable incorporates Bloom filters, probabilistic data structures that quickly determine if a key is absent with a low false-positive rate, and block-based indexes that map key ranges to specific data blocks within the file. Results from these components are merged in sorted order to return the most recent value for the queried key, supporting both point queries and range scans efficiently.

This LSM-tree design delivers high write throughput with amortized O(1) cost per operation, as appends to the memtable and WAL are constant-time sequential writes. However, it incurs trade-offs, including read amplification, where a single query may require scanning multiple SSTables across levels, potentially involving several disk reads, and space amplification, as duplicate or obsolete key versions persist until resolved, leading to on-disk storage exceeding the logical data size by factors like 1.14 or more depending on configuration. These costs are mitigated through optimizations such as Bloom filters, which reduce unnecessary reads, making the structure suitable for workloads prioritizing sustained write performance over immediate read latency.
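The multi-source read path is hidden behind a single iterator API: an Iterator transparently merges the memtable, immutable memtables, and SSTables at every level into one sorted stream, with the newest version of each key winning. A minimal sketch of a range scan over an assumed "user:" key prefix follows; the prefix is an illustrative choice.

```cpp
#include <cassert>
#include <iostream>
#include <memory>

#include "rocksdb/db.h"
#include "rocksdb/iterator.h"
#include "rocksdb/options.h"

void ScanUsers(rocksdb::DB* db) {
  rocksdb::ReadOptions read_options;
  // One sorted view, merged internally from the memtable and every
  // level of SSTables; obsolete versions and tombstones are skipped.
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(read_options));

  for (it->Seek("user:");                              // jump to the prefix
       it->Valid() && it->key().starts_with("user:");  // stop after the range
       it->Next()) {
    std::cout << it->key().ToString() << " => "
              << it->value().ToString() << "\n";
  }
  // Any I/O error encountered during the scan surfaces here.
  assert(it->status().ok());
}
```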

Compaction and Data Management

RocksDB employs background compaction processes to merge and reorganize data across levels of its log-structured merge-tree (LSM-tree), ensuring efficient storage and query performance by eliminating redundancies and obsolete entries. Compaction operates via dedicated background threads that select and merge overlapping keys from sorted string tables (SSTables) in lower levels into higher ones, with the number of concurrent compaction jobs configurable (defaulting to one) to balance CPU and I/O usage. This mechanism prevents unchecked growth in the number of files, which could otherwise degrade read performance due to increased seek times.

The primary compaction types in RocksDB include leveled, universal, and tiered styles, each tailored to different workload trade-offs. Leveled compaction uses a score-based selection to prioritize levels exceeding their target size, merging data from one level into non-overlapping ranges in the next to minimize read amplification, typically keeping it bounded to around 10 files per query. Universal compaction generates a single output file per level by merging all input sorted runs, which reduces write amplification during bursty workloads by avoiding repeated rewrites across multiple levels. Tiered compaction, akin to a FIFO queue for log-like data, accumulates multiple sorted runs per level before compacting them into the next, prioritizing low write overhead at the expense of higher space usage, making it suitable for write-heavy, time-series scenarios.

In managing the data lifecycle, RocksDB uses tombstones to mark deleted keys, which are propagated through SSTables and removed during compaction once no active snapshots reference the affected data, preventing space leaks from lingering deletions. Snapshot isolation is achieved through monotonically increasing sequence numbers assigned to each write operation; reads under a snapshot only consider keys with sequence numbers at or below the snapshot's value, ensuring a consistent point-in-time view without locking. For backups, checkpointing creates a hard-linked or copied snapshot of the database directory, including manifest and log files, allowing efficient, crash-consistent replicas that can be opened as standalone instances.

Compaction addresses key challenges like write amplification, which in leveled styles approximates the product of the number of levels and the per-level size ratio (typically 10), leading to values often exceeding 10 as data traverses multiple levels. This arises because each compaction rewrites the entire input, multiplying I/O costs for sustained writes. RocksDB mitigates this through dynamic base levels, which adaptively resize intermediate levels based on the total data volume, concentrating about 90% of data in the largest level to reduce unnecessary rewrites in growing databases.
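Both lifecycle mechanisms are directly scriptable: GetSnapshot pins a sequence number for consistent reads, and the Checkpoint utility hard-links live SST files into a standalone directory. A minimal sketch follows; the key names and the /tmp/rocksdb_backup path are illustrative assumptions.

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/utilities/checkpoint.h"

void SnapshotAndCheckpoint(rocksdb::DB* db) {
  // Pin the current sequence number: reads through this snapshot ignore
  // all writes (and tombstones) applied afterwards.
  const rocksdb::Snapshot* snap = db->GetSnapshot();

  db->Put(rocksdb::WriteOptions(), "k", "new-value");  // not visible below

  rocksdb::ReadOptions ro;
  ro.snapshot = snap;
  std::string value;
  rocksdb::Status s = db->Get(ro, "k", &value);  // sees the pre-Put state

  // Release the snapshot so compaction may reclaim superseded versions.
  db->ReleaseSnapshot(snap);

  // Create a crash-consistent, openable copy of the database directory;
  // SST files are hard-linked where possible, so this is cheap.
  rocksdb::Checkpoint* checkpoint = nullptr;
  s = rocksdb::Checkpoint::Create(db, &checkpoint);
  assert(s.ok());
  s = checkpoint->CreateCheckpoint("/tmp/rocksdb_backup");
  assert(s.ok());
  delete checkpoint;
}
```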

Integrations and Use Cases

As Alternative Storage Backend

RocksDB serves as a pluggable storage engine in various SQL and NoSQL databases, enabling replacements for traditional engines like InnoDB or B-tree-based alternatives to achieve superior write throughput and storage efficiency. Its embeddable design allows integration without altering the database's query layer, leveraging RocksDB's log-structured merge-tree (LSM-tree) for optimized handling of high-ingestion workloads.

A prominent example is MyRocks, which integrates RocksDB as a storage engine for MySQL and MariaDB, providing up to 2x space savings compared to InnoDB through advanced compression and reduced write amplification. At Facebook, MyRocks powers the user database managing tens of petabytes of social graph data as of 2017, demonstrating its scalability in production environments. ArangoDB incorporates RocksDB as its default storage engine for multi-model data, supporting document, graph, and key-value stores with steady insert rates even when datasets exceed available RAM.

Key benefits include reduced I/O operations through LSM-tree sequential writes, which minimize write amplification compared to B-tree structures, and the use of column families to create custom indexes for diverse data types within a single database instance. These features enable atomic operations across column families, improving efficiency in mixed read-write scenarios without requiring separate storage silos. Challenges in adoption involve ensuring transactional guarantees, addressed via RocksDB's write-ahead log (WAL) for crash recovery and consistency, though it introduces overhead in high-throughput settings. Translation layers are necessary to map SQL semantics, such as foreign keys and joins, onto RocksDB's key-value interface, potentially complicating migrations from relational engines.

Embedded and Production Deployments

RocksDB operates as an embedded library, allowing direct integration into applications for local data persistence without the latency and overhead associated with network-based systems. This enables high-performance key-value operations directly within the application's process space, leveraging RocksDB's log-structured merge-tree (LSM-tree) for efficient writes and reads on local disks or SSDs. In Kafka Streams, RocksDB serves as the default state store, maintaining operator state for tasks and avoiding remote calls, with fault tolerance provided through changelog topics. Similarly, in Apache Flink, the RocksDB State Backend manages large-scale keyed state and operator state off-heap, spilling to disk as needed and supporting incremental checkpoints to durable remote storage like HDFS or S3 for recovery.

In production environments, RocksDB powers metadata management in distributed storage systems. Ceph's BlueStore backend embeds RocksDB to store internal metadata, such as object mappings and placement group logs, on a dedicated DB device (often an SSD) for improved I/O performance over the primary data device. In TiKV, the distributed key-value store underlying TiDB, RocksDB instances handle both Raft logs (in a dedicated raftdb) and user data with multi-version concurrency control (MVCC) across multiple column families in kvdb, enabling transactional consistency at scale. YugabyteDB's DocDB layer customizes RocksDB as its per-tablet storage engine, supporting document-oriented storage with ordered key-value operations for range queries and high-throughput writes in distributed SQL workloads. RocksDB also supports large-scale deployments in real-time analytics and cloud services; Twitter's Manhattan key-value store migrated its storage engine to RocksDB to handle high-volume social data transfers, optimizing for stability and performance in a distributed cloud environment.

Deployment of RocksDB emphasizes durability and observability for production reliability. To ensure data persistence against crashes, applications configure RocksDB to use fsync on the write-ahead log (WAL), flushing updates to disk synchronously, though this trades off write throughput for stronger guarantees on filesystems like ext4. Monitoring relies on RocksDB's built-in statistics, including Perf Context for CPU-bound operations and IO Stats Context for tracking read/write latencies and throughput, allowing operators to correlate I/O bottlenecks with compaction or memtable activity.
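In C++ terms, these durability and monitoring hooks map to WriteOptions::sync, the Statistics object, and the thread-local perf/IO-stats contexts. The sketch below is illustrative rather than prescriptive: the stats pointer is assumed to have been assigned to options.statistics (via rocksdb::CreateDBStatistics()) before the database was opened, the ticker shown is one of many, and real deployments typically sync only selected writes.

```cpp
#include <iostream>
#include <memory>

#include "rocksdb/db.h"
#include "rocksdb/iostats_context.h"
#include "rocksdb/options.h"
#include "rocksdb/perf_context.h"
#include "rocksdb/perf_level.h"
#include "rocksdb/statistics.h"

void DurableMonitoredWrite(
    rocksdb::DB* db, const std::shared_ptr<rocksdb::Statistics>& stats) {
  // Per-operation profiling: enable, reset, run, then read the contexts.
  rocksdb::SetPerfLevel(rocksdb::PerfLevel::kEnableTimeExceptForMutex);
  rocksdb::get_perf_context()->Reset();
  rocksdb::get_iostats_context()->Reset();

  // sync = true fsyncs the WAL before acknowledging, so the write
  // survives power loss at the cost of write throughput.
  rocksdb::WriteOptions wo;
  wo.sync = true;
  db->Put(wo, "order:42", "shipped");

  // Thread-local breakdown of where this operation spent its time/IO.
  std::cout << rocksdb::get_perf_context()->ToString() << "\n";
  std::cout << rocksdb::get_iostats_context()->ToString() << "\n";

  // Cumulative engine-wide counter, if statistics were attached at Open.
  if (stats) {
    std::cout << "bytes written: "
              << stats->getTickerCount(rocksdb::BYTES_WRITTEN) << "\n";
  }
}
```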

Language Bindings

Official Bindings

The official bindings for RocksDB, maintained by the core development team at Meta, provide high-fidelity access to the library's functionality for C++, Java, and C, ensuring compatibility with the underlying LSM-tree implementation and performance optimizations. These bindings are distributed as part of the primary RocksDB repository and are designed for direct embedding in applications, supporting features like column families, transactions, and iterators across platforms.

The C++ native API forms the foundational interface, exposing the full capabilities of the library through object-oriented classes in the rocksdb namespace. Central to this is the DB class, which handles core operations such as opening databases, performing puts, gets, and deletes on key-value pairs, and managing options like compaction styles and write buffers. Column families are supported via ColumnFamilyHandle, enabling logical partitioning of data for improved read/write efficiency in multi-tenant scenarios. Advanced persistence guarantees are provided by transaction support in TransactionDB, which ensures atomicity and isolation; iterators through Iterator for sequential scans; and snapshots via Snapshot for consistent point-in-time reads without blocking writers. This API is optimized for high-throughput workloads, with thread-safe operations and configurable bloom filters for reducing disk I/O.

RocksDB's Java binding utilizes JNI to wrap the C++ core, offering an idiomatic interface for JVM-based environments, including Android applications and enterprise servers. Key classes include RocksDB for database instantiation and basic CRUD operations, RocksIterator for efficient key iteration with seek and next/prev methods, and WriteBatch for batching multiple mutations to minimize WAL overhead and improve throughput. Additional utilities like ColumnFamilyHandle and Transaction mirror their C++ counterparts, supporting optimistic concurrency control and multi-threaded access while integrating with Java's garbage collection and exception model. This binding is particularly valued for its low-latency performance in big data frameworks, where it serves as a storage backend for processing pipelines.

A procedural C binding complements the object-oriented APIs, defined in rocksdb/c.h, to enable integration with systems languages or FFI mechanisms. It provides functions such as rocksdb_open for initializing a database instance, rocksdb_put and rocksdb_get for key-value manipulations, rocksdb_writebatch_create for batched operations, and rocksdb_create_iterator for traversal, with errors reported through out-parameters for robust error handling. This interface abstracts the C++ complexity while preserving performance, making it suitable for embedded use cases or as a bridge to other runtimes.

Official bindings maintain close version alignment with the core library releases to ensure feature parity; for example, the Java binding (rocksdbjni) supports RocksDB 10.4.2 (as of August 2025), including enhancements to parallel compaction. Builds are orchestrated via Makefiles and CMake, supporting cross-platform compilation on Linux, Windows, macOS, and ARM architectures, with options to enable or disable specific features like shared libraries or static linking for deployment flexibility.
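To make the C++ surface concrete, the following sketch uses TransactionDB, one of the classes named above, for an atomic read-modify-write with pessimistic locking; the path and key names are illustrative assumptions.

```cpp
#include <cassert>
#include <string>

#include "rocksdb/options.h"
#include "rocksdb/utilities/transaction.h"
#include "rocksdb/utilities/transaction_db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::TransactionDBOptions txn_db_options;

  rocksdb::TransactionDB* txn_db = nullptr;
  rocksdb::Status s = rocksdb::TransactionDB::Open(
      options, txn_db_options, "/tmp/rocksdb_txn", &txn_db);
  assert(s.ok());

  // Pessimistic transaction: GetForUpdate locks the key until commit,
  // so concurrent writers to "balance" block or fail instead of racing.
  rocksdb::Transaction* txn =
      txn_db->BeginTransaction(rocksdb::WriteOptions());
  std::string balance;
  s = txn->GetForUpdate(rocksdb::ReadOptions(), "balance", &balance);
  if (s.ok() || s.IsNotFound()) {
    txn->Put("balance", "100");  // buffered inside the transaction
    s = txn->Commit();           // applied atomically, or not at all
    assert(s.ok());
  }
  delete txn;
  delete txn_db;
  return 0;
}
```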

Third-Party Bindings

RocksDB's third-party bindings extend its usability beyond the official C++, C, and Java APIs, enabling integration with a wide array of programming languages through community-driven efforts. These bindings typically wrap the core C API, providing idiomatic interfaces for tasks like key-value operations, batch writes, and iterator management while preserving RocksDB's performance characteristics. Maintained in the official repository's documentation, the list of known third-party bindings highlights the active ecosystem supporting RocksDB in non-native environments.

Prominent third-party bindings include those for Go, which facilitate high-performance applications in concurrent systems. For instance, grocksdb offers a robust wrapper emphasizing safety and efficiency, supporting features like column families and custom comparators, and is actively maintained for modern Go versions. Similarly, the earlier gorocksdb binding, though now unmaintained, influenced subsequent implementations and remains available for legacy use. These Go bindings are particularly valued in distributed systems and services where RocksDB serves as an embedded store.

In the Rust ecosystem, bindings such as rust-rocksdb provide safe, memory-managed access to RocksDB, leveraging Rust's ownership model to prevent common errors in database interactions. The fork developed by PingCAP supports advanced features like snapshots and write batches, making it suitable for reliable storage in TiKV. Another variant, spacejam's rust-rocksdb, focuses on low-level control and is used in performance-critical applications. Rust bindings underscore RocksDB's appeal for building secure, high-throughput data layers in systems like databases and blockchain nodes.

For Node.js environments, the rocksdb package delivers asynchronous bindings optimized for server-side applications, allowing seamless embedding in event-driven architectures. It supports promises and streams for asynchronous operations, enabling RocksDB usage in web-scale services without blocking the event loop. This binding is widely adopted in full-stack JavaScript projects requiring persistent storage.

Other notable third-party bindings cover dynamic languages and specialized domains:
| Language | Binding Name | Repository | Notes |
| --- | --- | --- | --- |
| Python | RocksDict | https://github.com/rocksdict/RocksDict | Actively maintained alternative to older unmaintained wrappers like python-rocksdb. |
| Perl | RocksDB | https://metacpan.org/pod/RocksDB | Provides Perl-specific iterators and error handling. |
| Ruby | rocksdb-ruby | http://rubygems.org/gems/rocksdb-ruby | Gem for Ruby applications and scripting integrations. |
| PHP | rocksdb-php | https://github.com/Photonios/rocksdb-php | Supports PHP 7+ with focus on web application storage. |
| C# | rocksdb-sharp | https://github.com/warrenfalk/rocksdb-sharp | .NET wrapper with async support; another fork by curiosity-ai exists for extensions. |
| Haskell | rocksdb-haskell | https://hackage.haskell.org/package/rocksdb-haskell | Functional-style interface for pure Haskell projects. |
| D | rocksdb | https://github.com/b1naryth1ef/rocksdb | Low-level binding for the D language. |
| Erlang | erlang-rocksdb | https://gitlab.com/barrel-db/erlang-rocksdb | Tailored for distributed, fault-tolerant systems. |
| Elixir | rox | https://github.com/urbint/rox | Builds on the Erlang VM for concurrent Elixir apps. |
| Nim | nim-rocksdb | https://github.com/status-im/nim-rocksdb | Efficient binding for Nim's compiled performance. |
| Swift/Objective-C | ObjectiveRocks | https://github.com/iabudiab/ObjectiveRocks | iOS/macOS integration with Swift compatibility. |
These bindings, while varying in maturity and maintenance status, demonstrate RocksDB's versatility across paradigms, from functional programming in Haskell to concurrent processing in Erlang and Elixir. Community contributions ensure ongoing updates, with unmaintained projects often serving as references for new developments. Developers are encouraged to check the official list for the latest additions and compatibility with RocksDB releases.
