
Redis

Redis is an open-source, in-memory store that functions as a distributed key-value database, cache, and message broker, supporting advanced data types and real-time operations with high performance. Originally developed by Salvatore Sanfilippo in 2009 as the Remote Dictionary Server, Redis is written in C and licensed under a dual license consisting of the Redis Source Available License v2 (RSALv2) and the Server Side Public License v1 (SSPLv1) starting from version 7.4, with the GNU Affero General Public License v3 (AGPLv3) added as an additional option from version 8.0 onward. It excels in scenarios requiring low-latency data access, such as session storage, real-time analytics, leaderboards, and geospatial applications, due to its ability to optionally persist data to disk while maintaining all active data in RAM for sub-millisecond response times. Redis supports a rich set of native data structures beyond simple key-value pairs, including strings, hashes, lists, sets, sorted sets, bitmaps, HyperLogLogs, geospatial indexes, streams, and JSON documents, enabling atomic operations and complex querying without external processing. These structures allow developers to model application data efficiently, such as using sorted sets for priority queues or streams for event sourcing and pub/sub messaging. Additional capabilities include Lua scripting for custom server-side logic, replication for high availability, clustering for horizontal scaling, and modules like RedisJSON, RediSearch, and RedisGraph for extended functionality in JSON handling, full-text search, and graph databases. Since its inception, Redis has evolved from a caching solution into a multi-model platform, powering applications at major technology companies for tasks ranging from content caching to real-time recommendation engines. The project was maintained by Sanfilippo until 2020, after which it transitioned to Redis Inc. (formerly Redis Labs), which sponsors ongoing development while offering enterprise editions with enhanced security, monitoring, and cloud deployment options; Sanfilippo rejoined the company in December 2024. As of 2025, Redis 8 introduces performance optimizations yielding up to 87% faster command execution and support for AI workloads like vector similarity search, solidifying its role in modern data platforms.

Overview

Definition and Purpose

Redis is an open-source, in-memory key-value data store that functions as a database, supporting a variety of rich data structures to enable high-performance read and write operations. Developed by Salvatore Sanfilippo, it operates as a remote dictionary server, allowing data to be stored and retrieved efficiently in memory for applications requiring low-latency access. Redis primarily serves as a cache to accelerate data retrieval and reduce load on backend systems, a primary database for real-time applications such as leaderboards or session stores, and a message broker for implementing publish-subscribe messaging patterns. Its versatility stems from this multi-purpose design, making it suitable for scenarios where speed and simplicity are paramount over complex querying. Originally created to handle high-load web applications like the real-time analytics service LLOOGG, Redis emphasizes in-memory storage for superior speed compared to traditional disk-based databases. As a non-relational system, it differs from relational databases by focusing on key-value semantics rather than structured tables and joins. While options exist for data durability, the core architecture prioritizes volatile, high-speed operations.

Design Principles

Redis operates primarily as an in-memory data store, maintaining all data structures in RAM to deliver sub-millisecond latency for operations, often achieving response times in the microsecond range. This design choice bypasses the slower disk access latencies inherent in traditional databases, enabling Redis to handle hundreds of thousands of operations per second on commodity hardware. Optional persistence features allow data to be periodically saved to disk, providing durability without impacting the core in-memory performance during normal operation. Central to Redis's architecture is its single-threaded event loop, which processes client requests sequentially using non-blocking I/O mechanisms such as epoll on Linux or kqueue on BSD and macOS systems. This approach multiplexes multiple connections efficiently within one thread, eliminating the synchronization overhead, lock contention, and context-switching costs associated with multi-threaded models. By leveraging the operating system's event notification facilities, the event loop scales to manage tens of thousands of concurrent clients while keeping the codebase simple and predictable. The single-threaded model inherently ensures that individual commands execute atomically, as no other operations can interrupt or interleave with the current one, guaranteeing consistency across multi-client accesses without additional locking primitives. This atomicity extends to complex manipulations, such as incrementing counters or pushing elements to lists, providing reliable behavior in concurrent environments. For grouped operations requiring stronger guarantees, Redis supports transactional blocks via the MULTI/EXEC commands, which queue and execute commands as a single atomic unit. To optimize memory efficiency, Redis employs compact internal representations for its data structures, particularly for smaller datasets. Structures like lists, sets, and hashes use specialized encodings, such as listpacks (which superseded ziplists), which store elements in a serialized, contiguous block with variable-length fields to reduce pointer overhead and fragmentation. This approach can significantly lower memory usage—for instance, small collections consume far less space than pointer-linked nodes—allowing Redis to store larger datasets within limited RAM while preserving fast access times. These principles reflect deliberate trade-offs favoring performance and developer simplicity over strict ACID properties, embracing eventual consistency in distributed configurations to prioritize availability and throughput. In standalone mode, Redis delivers immediate consistency, but replication introduces asynchronous updates that may temporarily diverge across nodes before converging. This design aligns with use cases like caching and real-time analytics, where low-latency reads outweigh the need for full transactional guarantees in every scenario.
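The compact encodings can be observed directly with the OBJECT ENCODING command; the following sketch assumes a hypothetical user:42 key on a Redis 7+ server, where a small hash typically reports the listpack encoding (older versions report ziplist):
HSET user:42 name "Ada" lang "C"
OBJECT ENCODING user:42
Once the hash grows past the configured entry-count or value-size thresholds, the same command reports hashtable instead.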

History

Origins and Early Development

Redis was developed in 2009 by Italian software engineer Salvatore Sanfilippo, known online as antirez, to address performance bottlenecks in his startup. At the time, Sanfilippo was building a log analyzer to handle high-velocity data for tracking website statistics, but existing databases and caching solutions proved inadequate for the required speed and throughput in processing events. This practical need drove the creation of a lightweight, in-memory data store optimized for fast read and write operations. The project debuted publicly with its initial release in February 2009, followed by the stable version 1.0, implemented in C as a networked server supporting basic dictionary-like key-value operations. Licensed under the three-clause BSD license from the outset, Redis emphasized simplicity and portability, compiling with minimal dependencies to run on various platforms. Sanfilippo announced the project on Hacker News on February 26, 2009, where it received early feedback that helped refine its core features. Redis quickly gained traction in web development circles for its use as a caching layer in dynamic applications, particularly appealing to Ruby developers after Ezra Zygmuntowicz released the first Ruby client library in 2009. Open-source contributors began enhancing its capabilities, fostering a growing ecosystem around its efficient handling of strings, lists, and sets. Early adopters included prominent web platforms, which integrated it for caching and real-time needs, solidifying its reputation for high performance in production environments. To support ongoing development, Garantia Data—later rebranded as Redis Labs and now Redis Inc.—was founded in 2011 and started contributing to the project soon after. By 2015, the company formalized sponsorship, enabling Sanfilippo to join as lead maintainer and ensuring sustained professional oversight for the open-source effort.

Major Releases and Milestones

Redis 2.0, released in September 2010, marked a significant advancement by introducing robust persistence mechanisms, including RDB snapshotting for point-in-time backups, and master-slave replication to enable redundancy and read scaling across multiple instances. These features addressed key limitations in earlier versions, allowing Redis to transition from a primarily in-memory cache to a more durable data store suitable for production environments. In April 2015, Redis 3.0 launched with the introduction of Redis Cluster, a native sharding mechanism that distributes data across multiple nodes for horizontal scaling while maintaining automatic partitioning and failover capabilities. This release also included performance optimizations, such as an improved LRU eviction algorithm and new object encodings to reduce memory usage and cache misses. Redis 6.0, released in May 2020, enhanced security through the introduction of Access Control Lists (ACLs), which provide fine-grained permissions for users, commands, and keys, replacing the simpler password-based authentication of prior versions. Additionally, it debuted the RESP3 protocol, an evolution of the RESP2 protocol that supports richer data types like maps, sets, and attributes, improving compatibility with modern clients. The release also incorporated multi-threaded I/O for handling network operations, boosting throughput on multi-core systems without altering the single-threaded command execution model. Released in April 2022, Redis 7.0 focused on performance refinements and reliability enhancements, including a multi-part Append-Only File (AOF) persistence format that splits logs into base and incremental files for better recovery efficiency and reduced rewrite overhead. It also improved replication synchronization and added client-side caching support in Redis Enterprise integrations, contributing to overall system scalability. Redis 8.0, achieving general availability in May 2025, introduced the Vector Set data structure in beta, enabling efficient storage and similarity search for high-dimensional vectors critical to AI and machine learning applications like semantic search and recommendation systems. The release incorporated over 30 performance optimizations, achieving up to 87% faster command execution and 2x higher throughput in certain workloads, alongside AI-specific features such as enhanced JSON support and query engine integrations. In November 2025, the first release candidate (RC1) for Redis 8.4 was issued, representing a feature-complete pre-release version with targeted stability fixes, minor command enhancements, and preparations for full production deployment. On the corporate front, Redis Labs rebranded to Redis Inc. in August 2021, reflecting the project's maturation into a comprehensive platform and the company's expanded role beyond open-source maintenance. Earlier, in July 2020, Redis creator Salvatore Sanfilippo (antirez) announced his departure as project maintainer after 11 years, shifting to an advisory role at Redis Labs to focus on family and new ventures while ensuring a smooth transition for the community. In December 2024, Sanfilippo rejoined Redis Inc. as a contributor and community liaison, helping to bridge the company and open-source community while working on new features such as the Vector Set data type.

Data Model

Supported Data Structures

Redis supports a variety of native data structures beyond basic key-value pairs, enabling efficient storage and manipulation of complex types in memory. These structures include strings, lists, sets, sorted sets, and hashes as core types, along with specialized extensions such as bitmaps, HyperLogLogs, geospatial indexes, streams, and, as of Redis 8 (2025), JSON, time series, vector sets, and additional probabilistic structures including Bloom filters, Cuckoo filters, Count-Min sketches, T-Digests, and Top-K filters. Each type is designed for specific use cases, leveraging atomic operations for concurrency safety, though detailed command syntax is covered elsewhere. Strings in Redis are binary-safe sequences of bytes that can hold text, numbers, serialized objects, or raw binary data, with a maximum size of 512 MB per value. They serve as the foundational data type for simple caching, counters (via atomic increments), and storing serialized formats like JSON or images. For example, strings can represent user sessions or configuration values, supporting operations like appending, substring extraction, and setting expiration times. Lists are implemented as doubly linked lists of strings, allowing efficient insertion and removal from both ends to function as stacks (LPUSH/LPOP) or queues (LPUSH/RPOP). They are commonly used for task queues, recent items lists, or message passing in worker systems, with support for blocking pops to wait for new elements. Up to 2^32 - 1 elements can be stored, making them suitable for ordered collections where order of insertion matters. Sets provide unordered collections of unique strings, ensuring no duplicates and enabling fast membership checks in O(1) average time. They are ideal for storing tags, unique visitors, or performing set operations like unions, intersections, and differences across multiple sets. For efficiency, small sets use integer-set encodings when all members are integers, while larger ones employ hash tables; random member selection and membership queries further enhance their utility for probabilistic sampling. Sorted sets, also known as ZSETs, maintain unique strings associated with floating-point scores for ordering, allowing range queries, ranked retrieval, and leaderboards. They combine a hash table for O(1) member lookups with a skip list for ordered operations like adding, removing, or fetching elements by rank or score range, achieving O(log N) for insertions and deletions. This structure excels in scenarios requiring sorted, weighted collections, such as priority queues or time-series indexing. Hashes store field-value pairs as a map, mimicking object representations with keys and values, and are optimized for grouping related data like user profiles or shopping carts. For small hashes (under configurable entry-count and value-size thresholds), a compact listpack (formerly ziplist) encoding is used to reduce memory overhead, switching to a hash table for larger ones to maintain performance. Operations allow atomic updates to individual fields, making hashes efficient for partial object modifications without rewriting entire structures. Bitmaps extend the string type with bit-level operations, treating strings as bit vectors for compact storage of states or analytics flags, such as user activity tracking over time. Commands like SETBIT and GETBIT enable setting, querying, and counting bits at specific offsets, while bitwise operations (AND, OR, XOR) on multiple bitmaps support aggregation for up to 2^32 bits efficiently. This structure is valuable for memory-efficient bloom filters or pixel-level image processing. 
HyperLogLog is a probabilistic data structure for approximating the cardinality (unique count) of large sets with minimal memory, using about 12 KB per key regardless of set size. It adds elements via PFADD and estimates counts with PFCOUNT, offering a standard error of about 0.81% for most use cases like website unique visitor tracking, where exact counts are unnecessary. Merging multiple HyperLogLogs via PFMERGE enables distributed estimation across shards. Geospatial indexes store latitude-longitude pairs as sorted sets with GeoHash-encoded scores for efficient proximity searches, such as finding nearby locations within a given radius. Added via GEOADD, they support GEOSEARCH and GEORADIUS queries for distance calculations using the Haversine formula and return results with sorted distances or in GeoHash format. This enables applications like location-based services or ride-sharing matching, with indexes built on sorted set internals for O(log N + M) query time, where M is output size. Streams provide an append-only log of entries, each with timestamped fields, functioning as a time-series store or message log for event sourcing. Producers append via XADD (auto-generating unique IDs), while consumers use XREAD and XREADGROUP for blocking reads, with XACK and XPENDING supporting acknowledgments and consumer groups akin to Kafka partitions. Streams support range queries by ID or time, trimming old entries, and are suited for reliable queuing, audit logs, or real-time analytics with at-least-once delivery semantics. JSON, introduced as a native data type in Redis 8, allows storage and manipulation of structured documents with path-based querying and updates, supporting operations like insertion, deletion, and numeric computations on JSON values for efficient handling of semi-structured data. Time series, native since Redis 8, store timestamped samples with optional labels and retention policies, enabling aggregation, downsampling, and querying for metrics and sensor data in IoT or monitoring applications. Vector sets, added in Redis 8 (beta), manage collections of high-dimensional vectors with support for similarity searches using metrics like cosine similarity or Euclidean distance, facilitating applications such as semantic search and recommendation systems. Additional probabilistic structures, integrated natively in Redis 8, provide approximate computations with low memory: Bloom and Cuckoo filters for membership testing, Count-Min sketches for frequency estimation, T-Digests for quantile approximation, and Top-K for frequent item identification, extending beyond HyperLogLog for diverse statistical tasks.
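As a brief sketch of the probabilistic and streaming types (key names here are illustrative), a HyperLogLog counts approximate unique visitors while a stream records timestamped events:
PFADD visitors "alice" "bob" "carol"
PFCOUNT visitors
XADD events * action "signup" user "alice"
XRANGE events - +
PFCOUNT returns an estimate (3 in this small case), and XADD's * argument asks the server to generate a monotonically increasing entry ID.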

Key-Value Storage Mechanics

Redis stores data as key-value pairs, where keys are binary-safe strings that can hold up to 512 megabytes of arbitrary data. These keys serve as unique identifiers within a Redis instance, enabling efficient retrieval and manipulation of associated values. While keys can theoretically be lengthy, practical usage favors concise names to optimize memory usage and performance, often employing conventions like colon-separated hierarchies for organization, though no strict naming rules are enforced beyond the size limit. Keys in Redis can be optionally namespaced across multiple logical databases, with the default configuration supporting 16 databases numbered from 0 to 15. The SELECT command allows clients to switch between these databases, providing isolation for different applications or environments within a single instance. However, using multiple databases is generally discouraged in production setups, particularly with Redis Cluster, which exclusively supports database 0 to simplify scaling and avoid cross-database operations. To manage temporary data, Redis implements key expiration through a time-to-live (TTL) mechanism, where keys are automatically deleted after a specified duration. Expiration occurs via two complementary algorithms: passive expiration, which checks and removes timed-out keys during client access attempts (such as reads or writes), and active expiration, a background process that periodically scans a subset of keys with expirations to proactively delete those past their TTL. This hybrid approach balances low-latency access with eventual cleanup, ensuring expired keys do not persist indefinitely while minimizing overhead on normal operations. When memory usage approaches limits, Redis employs configurable eviction policies to prevent out-of-memory errors and maintain stability. The maxmemory directive sets the maximum memory allocation for the dataset, defaulting to 0 for unlimited usage until system constraints are hit. Upon reaching this threshold during write operations, Redis activates the selected policy, defined via the maxmemory-policy configuration, to remove keys and free space. Available policies include:
  • noeviction: Rejects new writes, returning errors to preserve existing data.
  • allkeys-lru: Removes the least recently used keys across all keys, approximating LRU via sampling for efficiency.
  • allkeys-lfu: Evicts the least frequently used keys from the entire keyspace.
  • allkeys-random: Randomly selects keys for removal.
  • volatile-lru: Applies LRU eviction only to keys with set TTLs.
  • volatile-lfu: Targets the least frequently used keys among those with TTLs.
  • volatile-random: Randomly evicts keys that have TTLs.
  • volatile-ttl: Prioritizes keys with the shortest remaining TTL among those with expirations.
These policies allow tailored memory management, with allkeys-lru recommended for most caching scenarios to retain recently accessed data; a configuration sketch follows below. Eviction is handled master-side in replicated setups, ensuring consistency without replicas independently applying policies unless promoted.
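A minimal illustration combining memory limits with per-key expiration (the 2gb limit and session key are hypothetical values): the first two lines belong in redis.conf, while the remaining commands set and inspect a TTL at runtime:
maxmemory 2gb
maxmemory-policy allkeys-lru
SET session:abc123 "serialized-session-data" EX 3600
TTL session:abc123
PERSIST session:abc123
SET ... EX 3600 creates the key with a one-hour TTL, TTL reports the remaining seconds, and PERSIST removes the expiration entirely.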

Core Functionality

Basic Operations and Commands

Redis provides a set of fundamental commands for creating, reading, updating, and deleting (CRUD) data across its supported structures, enabling efficient in-memory storage and retrieval. For strings, the simplest type, the SET command stores a value at a specified key, overwriting any existing value, while the GET command retrieves the value associated with a key, returning nil if the key does not exist. Hashes, which store field-value pairs, use HSET to set one or more fields to their values in the hash at a key, overwriting existing fields, and HGET to return the value of a specific field, or nil if the field is absent. Lists support ordered collections where LPUSH inserts one or more values at the head of the list, and LPOP removes and returns the first element from the head, with the list being deleted if emptied. Query patterns in Redis allow for targeted retrieval from various structures. For sorted sets, which maintain elements sorted by score, ZRANGE returns a range of elements by index, score, or lexicographical order, optionally including scores with the WITHSCORES option; as of Redis 6.2.0, it supports versatile range queries including BYSCORE for score-based filtering. Sets, unordered collections of unique elements, use SMEMBERS to return all members of the set at a key, equivalent to intersecting the set with itself. Geospatial indexes, built on sorted sets using GeoHash-encoded longitude and latitude as scores, employ GEOSEARCH to query members within a circular or rectangular bounding area from a given coordinate or member, supporting options like WITHDIST to include distances and COUNT to limit results. Transactions in Redis ensure atomic execution of command sequences, preventing partial updates. The MULTI command initiates a transaction block, queuing subsequent commands for execution, while EXEC commits them atomically, returning results or nil if the transaction was discarded; if keys watched by WATCH are modified before EXEC, the transaction aborts to support optimistic locking. Scripting extends basic operations by allowing custom server-side logic via Lua scripts executed atomically. The EVAL command runs a script using the provided keys and arguments, with the script accessing Redis commands through a sandboxed redis.call() or redis.pcall() API, ensuring no other commands interleave during execution. Client-server communication in Redis relies on the Redis Serialization Protocol (RESP), a binary-safe, human-readable protocol that prefixes payloads with type indicators and lengths for efficient parsing of requests and responses. RESP3, introduced as an opt-in enhancement in Redis 6.0, adds support for richer types like maps, sets, and booleans, improving compatibility with modern clients while maintaining backward compatibility with RESP2. For example, a basic operation might look like:
SET mykey "Hello, Redis!"
GET mykey
This sets the key mykey to the given value and retrieves it, returning "Hello, Redis!". Similarly, for a hash:
HSET user:1000 name "John" age "30"
HGET user:1000 name
This sets fields in the hash and retrieves the name field, yielding "John". A list push and pop:
LPUSH tasks "write report"
LPOP tasks
Adds "write report" to the list head and removes it, returning the value. For a sorted set range query:
ZADD leaderboard 100 "player1" 200 "player2"
ZRANGE leaderboard 0 -1 WITHSCORES
This adds scored members and retrieves all in order with scores: 1) "player1" 2) "100" 3) "player2" 4) "200". A set membership query:
SADD fruits "apple" "banana"
SMEMBERS fruits
Adds unique elements and returns 1) "apple" 2) "banana". Geospatial search example:
GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEOSEARCH Sicily FROMLONLAT 15 37 BYRADIUS 200 km WITHDIST
Returns members within 200 km of longitude 15, latitude 37—here both Palermo and Catania—each paired with its distance in kilometers from the query point. A transaction with optimistic locking:
WATCH counter
MULTI
INCR counter
SET status "updated"
EXEC
This queues increments and sets, executing only if counter remains unchanged. A simple Lua script via EVAL:
EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 mykey "value"
Sets the key using script arguments, returning "OK".

Publish-Subscribe Messaging

Redis implements a publish-subscribe (pub/sub) messaging system that enables asynchronous communication between publishers and subscribers through named channels. Publishers send messages to specific channels using the PUBLISH command, while subscribers register interest in those channels via the SUBSCRIBE command to receive messages in real-time. This decouples senders from receivers, allowing multiple subscribers to listen to the same channel without knowledge of each other. For more flexible subscriptions, Redis supports pattern-based matching with the PSUBSCRIBE command, which uses glob-style patterns (e.g., "news.*" to match channels like "news.sports" or "news.tech"). Messages published to any channel matching the pattern are delivered to the subscribing client, enabling dynamic topic-based messaging without enumerating every channel individually. Active channels, defined as those with at least one subscriber (excluding pattern subscribers), can be listed using the PUBSUB CHANNELS command for monitoring purposes. Message delivery in Redis pub/sub follows fire-and-forget semantics with at-most-once guarantees, meaning messages are broadcast immediately to connected subscribers but are not persisted or queued. If a subscriber is disconnected or the connection fails during delivery, the message is lost without retry or acknowledgment mechanisms, prioritizing low-latency delivery over reliability. This ephemeral nature suits real-time applications like live updates but requires careful handling of network issues by clients. Since Redis 5.0, the Streams data type complements pub/sub by providing ordered, persistent messaging with consumer groups, allowing for more robust pub/sub-like patterns where messages are appended to a stream and consumed reliably without duplication. Unlike traditional pub/sub, Streams support replayability and load balancing across consumers, integrating seamlessly for scenarios needing durability alongside real-time delivery. In single-node deployments, pub/sub scales well for moderate loads due to its in-memory efficiency, but in clustered environments, messages are broadcast across all nodes, constraining throughput to the cluster's bisection bandwidth and potentially requiring external proxies for fan-out beyond a single shard. Redis 7.0 introduced sharded pub/sub to mitigate this by limiting message propagation within specific shards, improving scalability for partitioned workloads. Compared to dedicated message brokers like Apache Kafka, Redis pub/sub is simpler and faster for low-latency, non-durable broadcasting but lacks features such as message persistence, advanced routing, partitioning, and exactly-once semantics, making it less suitable for high-volume, fault-tolerant streaming pipelines.
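A minimal sketch of the channel and pattern commands (channel names are illustrative; the first two commands are issued from subscriber connections, the last two from a publisher or monitoring client):
SUBSCRIBE news.tech
PSUBSCRIBE news.*
PUBLISH news.tech "breaking story"
PUBSUB CHANNELS
PUBLISH returns the number of clients that received the message—two in this case, since both the direct subscriber and the pattern subscriber match—while PUBSUB CHANNELS lists only channels with direct subscribers.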

Persistence Options

RDB Snapshotting

RDB snapshotting is Redis's mechanism for creating point-in-time backups of the in-memory dataset, stored as compact binary files with the .rdb extension. This serializes the entire dataset into a single file, capturing the state of all databases at a specific moment. By default, RDB files employ LZF compression to reduce storage size, which can be enabled or disabled via the rdbcompression directive. The snapshot is generated using a fork-based approach, where Redis spawns a child process to perform the disk write operation, allowing the parent process to continue handling client requests without interruption. Snapshots can be initiated manually through the SAVE command, which blocks the server until completion, or the BGSAVE command, which runs asynchronously in the background via the fork mechanism. Automatic triggers are defined in the redis.conf file using the save directive, specifying intervals and change thresholds; for example, save 60 1000 initiates a snapshot if at least 1000 keys are modified within 60 seconds. Additional options include setting the output file name with dbfilename (default: dump.rdb), the working directory with dir, and enabling checksums for verification via rdbchecksum. While RDB does not use explicit fsync policies like AOF, the snapshot process ensures data is flushed to disk when the file is written. This persistence option offers advantages such as rapid server restarts, as loading an RDB file repopulates the dataset efficiently, and compact files ideal for archiving or disaster recovery. However, it risks losing data modifications that occurred after the last snapshot in case of a crash or power failure. In replication scenarios, RDB files play a key role during the initial full synchronization, where the master generates and streams an RDB snapshot to replicas to establish an exact copy of the dataset before applying incremental updates.
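A redis.conf sketch resembling common snapshot settings (the thresholds and directory are illustrative, not prescriptive):
save 3600 1
save 300 100
save 60 10000
dbfilename dump.rdb
dir /var/lib/redis
rdbcompression yes
rdbchecksum yes
Issuing BGSAVE from redis-cli forces an immediate background snapshot regardless of the configured thresholds.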

Append-Only File (AOF)

The Append-Only File (AOF) in Redis records every write operation performed on the dataset in a dedicated log file, appending commands in a human-readable format as they occur. Upon server restart or crash recovery, Redis replays these commands sequentially to rebuild the in-memory dataset, ensuring recovery from the last consistent state. This mechanism prioritizes durability by capturing incremental changes rather than full snapshots, making it suitable for applications requiring minimal data loss. Since Redis 7.0, AOF uses a multi-part approach with a base file and incremental delta files to manage growth more efficiently. To prevent indefinite growth of the AOF file, Redis implements a background rewrite process triggered automatically when the file size exceeds a configurable percentage (default 100%) of its post-rewrite size, or manually via the BGREWRITEAOF command. The rewrite forks a child process that regenerates the AOF from the current in-memory dataset as the minimal sequence of commands needed to reconstruct it, producing a compact version that omits redundant or superseded commands while preserving the dataset's state. This optimization reduces disk usage without interrupting normal operations. Redis configures AOF durability through the appendfsync policy, offering three options: no defers syncing to the operating system (potentially losing more data on crash but maximizing throughput), everysec (default) performs an fsync every second to limit maximum data loss to one second's worth of writes, and always fsyncs after every write for strict durability at the expense of throughput. The everysec setting strikes a balance, as it avoids the overhead of per-operation fsyncs while providing near-real-time durability. Compared to RDB snapshotting, AOF excels in crash recovery by enabling finer-grained control and reducing potential data loss to the fsync interval, though it generates larger files over time and prolongs restart durations due to command replay. Introduced in Redis 4.0, hybrid persistence integrates AOF with RDB by embedding a compact RDB snapshot as the preamble of the rewritten AOF file, followed by only recent changes in AOF format; it is enabled by default since Redis 7.0. This approach accelerates restarts—loading the RDB portion quickly before replaying minimal AOF commands—while combining RDB's compactness with AOF's incremental durability.
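An illustrative redis.conf fragment enabling AOF with commonly used settings (values shown are typical defaults, not requirements):
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-use-rdb-preamble yes
With these settings the log is fsynced once per second, and a background rewrite is triggered once the file doubles in size relative to its last rewrite, but only after it exceeds 64 MB.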

High Availability Mechanisms

Asynchronous Replication

Redis employs asynchronous replication as its primary mechanism for redundancy and read scaling, where a master instance propagates changes to one or more replica instances without waiting for acknowledgments. This approach ensures low latency for write operations on the master, as the master acknowledges client requests immediately after processing them, while replicas asynchronously apply the changes, potentially introducing a slight lag. Unlike synchronous replication, Redis does not provide built-in support for waiting on replica confirmations, prioritizing performance over strict consistency guarantees. To set up replication, administrators use the REPLICAOF command (or the deprecated SLAVEOF in older versions) on a replica instance, specifying the master's host and port, which initiates the replication stream. Upon connection, if the replica lacks the full dataset or the master's replication history has changed, a full synchronization occurs: the master generates an RDB snapshot of its current data and transmits it to the replica, which loads it into memory before resuming incremental updates. Subsequent changes are then sent incrementally as a stream of commands from the master's replication log, allowing replicas to replay operations in sequence. Replicas maintain a replication offset to track progress, and they automatically reconnect to the master if the link breaks, requesting either a partial or full resync as needed. For efficiency in handling temporary disconnections, Redis supports partial resynchronization via the PSYNC command, introduced to minimize bandwidth usage compared to full resyncs. Each master maintains a replication ID—an identifier for its dataset version—and a backlog buffer of recent commands with their offsets. When a replica reconnects, it sends its last known replication ID and offset; if they match the master's current state and the offset is within the backlog, the master streams only the missing commands from that offset, enabling quick recovery without reloading the entire dataset. If partial resync is impossible—due to a replication ID mismatch or an exhausted backlog—the replica falls back to a full resync. This feature, enhanced in Redis 2.8 and later, significantly reduces resync overhead in unstable networks. Asynchronous replication is commonly used to offload read operations from the master to replicas, improving throughput for read-heavy workloads, and to provide a standby for manual failover in case of master failure. Replicas are read-only by default, ensuring that write traffic remains centralized on the master while distributing query load. However, this model results in eventual consistency, where replicas may temporarily diverge from the master during high write volumes or network issues, and it lacks automatic failover capabilities, requiring external intervention to promote a replica upon master outage.
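A minimal sketch of configuring and inspecting replication from a replica (the master address 192.0.2.10:6379 is a placeholder):
REPLICAOF 192.0.2.10 6379
INFO replication
REPLICAOF NO ONE
INFO replication reports the role, link status, and replication offsets, while REPLICAOF NO ONE detaches the node from its master—the same step a manual failover uses to promote a replica.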

Sentinel Monitoring

Redis Sentinel is a distributed system comprising multiple independent processes, known as Sentinels, which are specialized Redis instances running in sentinel mode to monitor Redis master instances and their associated replicas for availability. These Sentinels collaborate to detect failures and orchestrate failovers without requiring a dedicated coordination service, relying instead on periodic pings and agreement protocols among themselves. For robustness, deployments typically include an odd number of Sentinels—at least three—distributed across distinct physical or virtual machines to mitigate single points of failure. The failover process begins when a Sentinel fails to receive a response from the master within the configured down-after-milliseconds window, triggering a local Subjectively Down (SDOWN) state for that Sentinel. If a configurable quorum of Sentinels independently reach SDOWN and communicate this among themselves, the master enters an Objectively Down (ODOWN) state, confirming the failure cluster-wide. At this point, the Sentinels elect a leader among themselves using a majority vote; the leader then selects the most suitable replica—based on replication offset and configured priority—promotes it to master by issuing the REPLICAOF NO ONE command, reconfigures surviving replicas to replicate from the new master, and generates configuration updates. The entire process aims to complete within the failover-timeout to avoid unnecessary retries. Configuration of Sentinel instances occurs primarily through the sentinel.conf file or runtime commands, defining monitoring targets and behavioral parameters. A core directive is sentinel monitor <master-name> <ip> <port> <quorum>, which registers a master for surveillance, with the quorum specifying the minimum number of agreeing Sentinels required for ODOWN and leader election—often set to 2 for three-Sentinel setups. Other key parameters include sentinel down-after-milliseconds <master-name> <milliseconds> (default 30 seconds, tunable to 60 seconds or more for noisy networks), sentinel failover-timeout <master-name> <milliseconds> (governing retry intervals and maximum failover duration), and sentinel parallel-syncs <master-name> <num-replicas> (controlling how many replicas resync simultaneously post-failover to balance load). Authentication and additional options, such as sentinel auth-pass, ensure secure communication in protected environments. Client applications integrate with Sentinel for service discovery and failover transparency by connecting to one or more Sentinel instances and querying the current master's address using the Sentinel API, such as the SENTINEL get-master-addr-by-name <master-name> command, which returns the IP and port of the active master. Upon failover, Sentinels publish notifications via Pub/Sub channels (e.g., +switch-master), allowing subscribed clients to redirect connections automatically; alternatively, clients periodically re-query Sentinels to detect changes. This protocol is supported by many Redis client libraries, including redis-py, Jedis, and ioredis, though compatibility should be verified for seamless handling of disconnections and reconnections. Despite its effectiveness, Sentinel has limitations, particularly in handling network partitions (split-brain scenarios) where Sentinels may be divided into majority and minority groups; the minority cannot trigger failover, potentially leading to prolonged outages if the master is isolated with them, though configuration propagation ensures convergence once partitions heal. 
It is optimized for small to medium-scale deployments overseeing a single master-replica topology and does not support sharding or multi-master setups, making Redis Cluster preferable for larger, distributed environments.
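A sentinel.conf sketch for a three-Sentinel deployment monitoring a single master (the name mymaster and address are placeholders; values mirror the parameters discussed above):
sentinel monitor mymaster 192.0.2.10 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
A client can then ask any Sentinel for the current master with SENTINEL get-master-addr-by-name mymaster.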

Scalability Features

Clustering and Sharding

Redis Cluster, introduced in Redis 3.0, enables horizontal scaling by distributing data across multiple nodes in a decentralized manner, with no single point of coordination, as there is no central coordinating node. The cluster partitions the keyspace into 16,384 hash slots, which are assigned to nodes, allowing the system to scale out while maintaining availability through a gossip-based cluster bus for node communication and configuration propagation. Nodes in Redis Cluster assume roles as masters or replicas to manage data distribution and fault tolerance. Master nodes own specific slots and handle read and write operations for keys mapping to those slots, while replicas asynchronously replicate data from their assigned masters to provide failover capabilities. In the event of a master failure, replicas detect the issue via the cluster's failure detection mechanism and perform automatic failover, promoting one replica to master while updating the cluster configuration. Sharding in Redis Cluster is implemented client-side, where clients compute the hash slot for a key using the formula CRC16(key) mod 16384 and direct commands to the appropriate node. If a client sends a command to the incorrect node—due to cluster changes or initial misrouting—the node responds with a MOVED error for permanent slot ownership changes or an ASK error for temporary redirects during resharding, prompting the client to retry on the correct node. Resharding facilitates live migration of hash slots between nodes to balance load or scale the cluster, performed without interrupting ongoing operations. The process uses the redis-cli utility to identify slots for migration, mark them as importing or migrating, transfer keys atomically via commands like MIGRATE, and update slot ownership with CLUSTER SETSLOT, ensuring minimal disruption as the cluster remains operational throughout. Redis Cluster imposes limitations on operations spanning multiple keys to maintain consistency and performance in a distributed environment. Multi-key commands, transactions via MULTI/EXEC, and Lua scripts are supported only if all involved keys hash to the same slot, often enforced using hash tags like {user1000} to group related keys. Cross-slot transactions are not allowed, preventing atomicity across different masters and requiring application-level handling for such cases.
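A sketch of bootstrapping a six-node cluster and using hash tags for multi-key operations (all addresses are placeholders; --cluster-replicas 1 pairs each master with one replica):
redis-cli --cluster create 192.0.2.1:7000 192.0.2.2:7000 192.0.2.3:7000 192.0.2.4:7000 192.0.2.5:7000 192.0.2.6:7000 --cluster-replicas 1
MSET {user1000}:name "Ada" {user1000}:email "ada@example.com"
Because both keys share the {user1000} hash tag, they map to the same slot, so the multi-key MSET succeeds; without the tag the cluster would reject it with a CROSSSLOT error.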

Hash Slot Distribution

In Redis Cluster, data partitioning is achieved through a fixed set of 16384 hash slots, which divide the keyspace into discrete segments for distribution across nodes. To determine the hash slot for a given key, the system computes the CRC16 hash of the key and takes the result modulo 16384, ensuring a consistent and deterministic mapping that supports efficient sharding. This mechanism allows keys to be assigned to specific slots regardless of the underlying node topology, with hash tags enabling multiple related keys (e.g., those sharing a common substring within curly braces) to map to the same slot for multi-key operations. Hash slots are owned exclusively by master nodes in the cluster, where each master is responsible for serving a subset of the total 16384 slots, typically distributed to balance load and ensure fault tolerance. Clients calculate the slot for a key independently and redirect commands to the appropriate node owning that slot, using topology information obtained via commands like CLUSTER NODES or CLUSTER SLOTS; this client-side routing minimizes coordination overhead while maintaining performance. Ownership can be explicitly assigned during initialization or adjusted dynamically, with masters advertising their slot ranges to facilitate accurate client routing. Management of hash slots involves dedicated cluster commands for assignment and migration. The CLUSTER ADDSLOTS command assigns specific hash slots to the executing node, commonly used to initialize master nodes by apportioning the 16384 slots among them during cluster creation. For migrating slots between nodes—such as during resharding—the CLUSTER SETSLOT command marks a slot as importing on the target node (IMPORTING state) and as migrating on the source node (MIGRATING state), allowing keys within the slot to be transferred atomically while preserving cluster consistency. Related commands like CLUSTER DELSLOTS remove slot ownership from a node, supporting cleanup after failures or decommissioning. Rebalancing ensures even distribution of slots across masters, particularly after adding or removing nodes, to prevent hotspots and optimize resource utilization. The redis-cli tool provides a --cluster rebalance option that automates this process by identifying under- or over-loaded masters and migrating slots accordingly, using thresholds for slot movement (e.g., aiming for no more than a specified number of slots per node) while minimizing disruptions. This command interacts with the cluster bus to execute migrations via CLUSTER SETSLOT, allowing administrators to specify parameters like the weight of each node for proportional distribution. Cluster consistency for slot ownership and state is maintained through a gossip protocol among nodes, where each node periodically exchanges messages (PING and PONG) containing cluster configuration details, including slot assignments and configuration epochs. This decentralized protocol detects failures, elects new masters if needed, and converges the view without a central coordinator, ensuring all nodes agree on slot ownership within a bounded time. The protocol's use of epochs and versioned messages prevents stale information from overriding valid updates during ongoing reconfigurations.
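A brief sketch of inspecting and rebalancing slot assignments (the key and node address are illustrative):
CLUSTER KEYSLOT user1000
CLUSTER SLOTS
redis-cli --cluster rebalance 192.0.2.1:7000
CLUSTER KEYSLOT shows which of the 16384 slots a key maps to, CLUSTER SLOTS lists slot ranges with their owning masters and replicas, and the rebalance subcommand moves slots between masters to even out the distribution.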

Performance Characteristics

Benchmarking and Throughput

Redis performance is commonly evaluated using tools like the built-in redis-benchmark or memtier_benchmark, which simulate concurrent client connections executing commands such as SET and GET. For a 1:10 SET/GET ratio, memtier_benchmark has measured throughput around 90,000 operations per second on commodity hardware from 2018; on modern hardware, official benchmarks with redis-benchmark show higher rates exceeding 180,000 operations per second for mixed simple key-value operations without pipelining. These benchmarks highlight Redis's efficiency for in-memory caching workloads, though results vary based on hardware and command mix. Latency for in-memory operations remains a key strength, with the 99th percentile (P99) typically under 1 millisecond under normal conditions. This low tail latency supports real-time applications, but it can be influenced by factors such as network limitations, which directly affect round-trip times, and CPU contention, which may arise during intensive processing. Starting with Redis 6, multi-threaded I/O improves throughput in I/O-bound scenarios by leveraging multiple cores for socket handling, with gains varying by hardware and workload, without altering the single-threaded core execution model. In clustered deployments, however, this scaling introduces overhead from inter-node communication and slot management, reducing effective throughput compared to standalone instances for distributed workloads. When compared to Memcached, Redis demonstrates superior performance for operations involving complex data structures like lists, sets, and hashes, achieving lower latencies in benchmarks due to its native support for these types. Enabling disk persistence options, such as RDB snapshots or AOF logging, introduces I/O bottlenecks that can significantly reduce throughput, depending on disk speed, write frequency, and configuration. As of 2025, Redis 8.0 introduces optimizations that enhance query performance, with benchmarks showing up to 87% faster command execution overall and sustained rates of 160,000 queries per second using refined indexing techniques. For large-scale searches, these improvements enable latencies as low as 200 milliseconds at 90% recall over 1 billion vectors.
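Illustrative invocations of the two benchmarking tools (parameters are examples, not recommended settings); redis-benchmark ships with Redis, while memtier_benchmark is a separate open-source tool from Redis Inc.:
redis-benchmark -h 127.0.0.1 -p 6379 -t set,get -n 1000000 -c 50 -P 16
memtier_benchmark -s 127.0.0.1 -p 6379 --ratio=1:10 --threads=4 --clients=50
Here -n sets the total request count, -c the number of parallel clients, -P the pipeline depth, and --ratio the SET:GET mix.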

Optimization Strategies

Redis optimization strategies encompass a range of techniques to enhance performance, reduce latency, and improve resource efficiency in production environments. These approaches focus on tuning configuration parameters, leveraging client-side features, and applying system-level adjustments to address common bottlenecks in memory usage, network interactions, and input/output operations. Memory management is a critical aspect of Redis optimization, particularly for workloads involving small aggregate data types such as hashes, lists, and sets. Redis employs special encodings like listpacks (formerly ziplists) to store these structures compactly when they meet certain size thresholds, significantly reducing memory footprint compared to standard dictionary or skiplist implementations. For instance, the hash-max-listpack-entries configuration parameter (hash-max-ziplist-entries in older versions) sets the maximum number of entries in a hash before it switches from the compact encoding to a hash table, with a default value of 128; raising it keeps somewhat larger hashes in the denser encoding at the cost of slower per-field access, while hash-max-listpack-value limits field lengths to 64 bytes by default to maintain efficiency. Monitoring memory usage can be achieved through the INFO command, which provides metrics such as used_memory (total bytes allocated by Redis) and used_memory_human (human-readable format), allowing administrators to track fragmentation and peak usage over time. Eviction policies, such as LRU or LFU, complement these settings by managing memory pressure when limits are reached. To minimize network round-trip times (RTT) in high-throughput scenarios, client-side pipelining enables sending multiple commands in a single batch without awaiting individual responses, thereby amortizing costs across operations. This technique is particularly effective for write-heavy or sequential read workloads, as it can improve effective throughput by factors proportional to the number of batched commands. For operations requiring atomicity, Redis supports transactions via the MULTI and EXEC commands, which queue commands for execution as a single unit, ensuring consistency while benefiting from similar batching efficiencies. Input/output tuning plays a key role in scaling Redis on multi-core systems, especially for handling concurrent connections. Starting with Redis 6.0, multi-threaded I/O is enabled via the io-threads configuration (default 1, meaning extra threads are disabled; values up to roughly the number of CPU cores minus one are typical), which offloads socket writes—and, if io-threads-do-reads is enabled, socket reads—to background threads, leaving the main thread for command execution; this can yield substantial throughput gains on modern hardware without altering the single-threaded core model. Additionally, disabling Transparent Huge Pages (THP) in the Linux kernel mitigates latency spikes caused by memory allocation delays, achieved by running echo never > /sys/kernel/mm/transparent_hugepage/enabled and adding it to startup scripts, as THP consolidation interferes with Redis's small, frequent allocations. Effective monitoring is essential for identifying and resolving issues proactively. The redis-cli MONITOR command streams all processed commands in real time, aiding in diagnosing slow queries or unexpected access patterns, though it should be used sparingly in production due to its overhead. For deeper analysis, the LATENCY DOCTOR command generates a human-readable report on potential latency issues like slow command execution or system-induced delays, offering remediation suggestions based on logged events. 
Security hardening through configuration restrictions enhances operational reliability by preventing accidental or malicious disruptions. In production setups, dangerous commands such as FLUSHALL or CONFIG can be disabled by renaming them to empty strings in the redis.conf file (e.g., rename-command FLUSHALL ""), or more granularly via Access Control Lists (ACLs) using the @dangerous category to exclude high-risk operations from user permissions, thereby safeguarding the instance without impacting core functionality.
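A redis.conf sketch drawing these tuning and hardening knobs together (values are illustrative and should be sized to the workload):
hash-max-listpack-entries 128
hash-max-listpack-value 64
io-threads 4
io-threads-do-reads yes
rename-command FLUSHALL ""
rename-command CONFIG ""
On versions before 7.0 the encoding directives use the older ziplist names (hash-max-ziplist-entries and hash-max-ziplist-value), and renamed commands must also be updated in any tooling that expects them.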

Extensions and Integrations

Module System

The Redis Module System provides a mechanism for extending the core functionality of Redis through dynamically loadable libraries, allowing developers to add custom commands, data types, and other features without modifying the Redis core or forking the project. Introduced in Redis 4.0, this system uses a C-based application programming interface (API) exposed via the redismodule.h header file, which modules must include to interact with Redis internals. Modules are compiled as shared object files (.so on Unix-like systems) and can be loaded either at server startup—specified in the redis.conf file using the loadmodule directive or as a command-line argument—or at runtime via the MODULE LOAD command, which accepts the absolute path to the library file. Key capabilities of the Module API include registering custom commands with RedisModule_CreateCommand, which enables modules to define new Redis commands that clients can invoke as if they were native; creating entirely new data types through the native types API, where modules implement methods for key manipulation, serialization, and memory management; and hooking into events such as keyspace notifications or server lifecycle events via subscription functions like RedisModule_SubscribeToKeyspaceEvents. These features allow extensions like custom analytics processing without altering the core Redis codebase, as modules operate within the existing persistence and replication mechanisms. For instance, a module might register a command to perform aggregate computations on stored data in real time. The API also supports blocking operations for long-running tasks and automatic memory management to prevent leaks during command execution. Development of modules is facilitated by the RedisModules SDK, an open-source toolkit that provides a project template, Makefile for compilation, and a utility library to handle common tasks not covered by the core API, such as simplified string handling or error reporting. This SDK streamlines integration by offering boilerplate and helper functions, enabling developers to focus on module logic; for example, it includes a sample module implementing a hybrid HGETSET command that atomically retrieves and updates hash fields. Modules initialize via RedisModule_Init, specifying a name and version, and must adhere to the API version (currently 1) for compatibility. Modules are managed through administrative commands: MODULE LIST returns a list of currently loaded modules with their properties, such as name and version; MODULE UNLOAD removes a module by name, provided it has no open keys or active connections. Persistence for module-specific data is supported when using custom data types, where modules define rdb_save and rdb_load methods to serialize and deserialize structures during RDB snapshotting or AOF rewriting, ensuring data durability across restarts. However, standard Redis persistence applies only to keys holding module data types, not to module code itself. A notable limitation is that modules execute within the same process space as the Redis server, sharing memory and threads, which means a fault or crash in module code—such as an invalid memory access during concurrent operations—can bring down the entire instance. For example, mishandling a key reference while calling internal Redis functions like DEL may trigger a server crash. Additionally, the API is strictly C-based and does not support embedding Lua scripts within modules, as Lua scripting is handled separately through the EVAL command; modules must implement logic natively to avoid compatibility issues. Developers are advised to use safe APIs like auto-memory allocation and key-opening flags to mitigate risks.
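Loading and managing a compiled module from redis-cli might look like the following sketch (the path and module name mymodule are hypothetical):
MODULE LOAD /usr/lib/redis/modules/mymodule.so
MODULE LIST
MODULE UNLOAD mymodule
The same library can instead be loaded at startup by adding loadmodule /usr/lib/redis/modules/mymodule.so to redis.conf.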

AI and Vector Database Capabilities

Redis introduced native support for vector databases starting with version 7.0, enabling the storage and querying of high-dimensional vector embeddings through the RediSearch module. This capability leverages Hierarchical Navigable Small World (HNSW) indexes to perform approximate nearest neighbor (ANN) searches efficiently, allowing developers to handle semantic similarity tasks at scale without compromising Redis's core in-memory performance. Redis 8.0 further enhanced this by introducing Vector Sets, a native data type that enables fast vector similarity search using HNSW, complementing the module-based vector capabilities. Starting with Redis 8.0, the core Redis Open Source distribution integrates what was previously Redis Stack, enhancing these AI workloads with built-in support for hybrid full-text and vector queries via RediSearch alongside RedisJSON for storing and manipulating JSON documents that contain embeddings. This combination supports complex indexing schemas where vector fields coexist with textual, numeric, or tag attributes, facilitating enriched semantic searches over structured data. For instance, applications can index document metadata in JSON while performing vector similarity on embedded representations produced by machine learning embedding models. Key operations for vector similarity include cosine similarity, Euclidean (L2) distance, and inner product (IP), accessible via commands like FT.SEARCH with vector-specific parameters to retrieve top-k nearest neighbors or range queries. These metrics enable flexible matching based on application needs, such as normalized angle-based similarity for text embeddings (cosine) or magnitude-aware comparisons (IP). Redis also integrates seamlessly with AI frameworks, notably LangChain, where it serves as a vector store for embedding persistence, similarity retrieval, and chaining with LLMs in pipelines. In 2025, Redis 8.0 and subsequent 8.2 releases introduced significant optimizations for vector operations, including multi-threaded query processing in the Redis Query Engine for up to 16x higher throughput and indexing at scales of billions of vectors. Further advancements in vector compression—supporting quantization to 8-bit or 4-bit integers—delivered 144% faster search speeds while reducing memory usage by up to 37%, thereby lowering storage costs without substantial accuracy loss. These improvements position Redis as a high-performance backend for AI agents and applications. A prominent use case is retrieval-augmented generation (RAG), where Redis performs similarity search over vector embeddings to retrieve relevant context from knowledge bases, enhancing LLM outputs with factual grounding. By combining HNSW-based ANN with filters on metadata, RAG pipelines achieve low-latency retrieval, often under 10ms, supporting dynamic, context-aware responses in production environments.
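A sketch of defining and querying a vector index with the query engine (the index name, key prefix, 384-dimension embedding, and placeholder query blob are assumptions for illustration):
FT.CREATE doc_idx ON JSON PREFIX 1 doc: SCHEMA $.text AS text TEXT $.embedding AS vec VECTOR HNSW 6 TYPE FLOAT32 DIM 384 DISTANCE_METRIC COSINE
FT.SEARCH doc_idx "*=>[KNN 5 @vec $query_vec AS score]" PARAMS 2 query_vec "<384 float32 values as a binary blob>" SORTBY score DIALECT 2
The KNN clause returns the five documents whose embeddings are closest to the supplied query vector under the cosine metric, with distances exposed in the score field for ranking or filtering.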

Licensing and Community

License Evolution

Redis was initially released in 2009 under the three-clause BSD license, a permissive license that allowed broad commercial use, modification, and distribution without copyleft requirements. This license remained in place for all versions through 2023, enabling widespread adoption by developers and companies, including cloud providers who built atop Redis without obligations to share derivative works. In March 2024, Redis Inc. announced a significant shift, applying a dual source-available licensing model to versions 7.4 and later: the Redis Source Available License version 2 (RSALv2) and the Server Side Public License version 1 (SSPLv1). These licenses restrict the use of Redis for offering it as a managed service unless the service provider makes their entire offering's source code available under the same terms, aiming to prevent hyperscale providers from profiting extensively from Redis without contributing back to its development. Redis Inc. stated that this change addressed the imbalance where open-source projects like Redis were exploited by large entities building billion-dollar businesses on open-source labor without reciprocity, while older versions prior to 7.4 continued to be available under the original BSD license. The transition did not retroactively affect existing deployments but applied to new features and updates, prompting discussions on the implications for open-source principles. By May 2025, with the release of Redis 8, the company evolved its approach further by adding the GNU Affero General Public License version 3 (AGPLv3) as an additional licensing option specifically for the Redis Open Source branch, making it OSI-approved and fully open source once more. This tri-licensing model—AGPLv3 for open-source use, alongside RSALv2 and SSPLv1 for other scenarios—allows developers to access and contribute to core functionality under terms that require sharing modifications when the software is offered as a network service, while preserving restrictions on commercial offerings. Redis Inc. positioned this as continued support for the open-source community, enabling free use for non-hyperscale applications and integrating new features like vector sets into the open-source edition, though the AGPLv3's copyleft requirements differ from the original BSD's permissiveness.

Forks and Ecosystem

In March 2024, Valkey was launched as an open-source fork of Redis 7.2.4 by major contributors including AWS, Google Cloud, Oracle, and others, serving as a BSD 3-Clause licensed alternative following Redis's shift to a source-available license. The fork aims to preserve the original open-source character of Redis while supporting high-performance key-value workloads, and it has been adopted as a drop-in replacement in various cloud services. Valkey's governance is overseen by the Linux Foundation, fostering collaborative development through a neutral, community-driven model with maintainers from industry partners.

Redis Enterprise, developed by Redis Inc., extends the core Redis functionality into a commercial platform with features such as active-active geo-distribution, previously termed CRDB, which uses conflict-free replicated data types (CRDTs) for multi-region replication. This setup enables low-latency access across distributed clusters with strong eventual consistency, alongside linear scalability through sharding and replication. Redis Enterprise is offered under commercial terms, layering proprietary extensions on top of the source-available and open-source core. Redis Inc. continues to maintain the official Redis repository, driving core development and releases.

The broader Redis ecosystem thrives on community-contributed tools, particularly client libraries that enable integration across programming languages. Jedis serves as a performant, synchronous Java client, supporting Redis commands and cluster topologies for enterprise applications. Similarly, ioredis provides a full-featured Node.js client with optimizations for clustering, Sentinel, pub/sub, and pipelining, and is compatible with Redis versions from 2.6 onward. Deployment tools, such as Redis Operators for Kubernetes, simplify managing Redis clusters in containerized environments by automating scaling, backups, and high availability.

By 2025, the Redis ecosystem had expanded notably in AI capabilities, with new modules and services such as LangCache introducing semantic caching of LLM outputs and supporting agentic systems with real-time search. This growth aligns with acquisitions such as Decodable, enhancing real-time data streaming for AI workloads. Redis creator Salvatore Sanfilippo (antirez) rejoined the project in December 2024, contributing to open-source initiatives that culminated in relicensing the core under AGPLv3, restoring full open-source accessibility while introducing innovations such as the vector set data type.

Applications and Adoption

Primary Use Cases

Redis serves as a high-performance caching layer in many software architectures, storing frequently accessed data to accelerate queries that would otherwise hit primary relational or NoSQL databases. By leveraging time-to-live (TTL) expiration policies, Redis ensures that hot data, such as user profiles or API responses, remains fresh while stale entries are evicted automatically, thereby reducing database load and achieving sub-millisecond response times for read-heavy workloads.

In web applications, Redis is widely used for session management, enabling scalable storage of user sessions across distributed servers. Sessions, which may include login credentials, preferences, and shopping cart contents, are stored as key-value pairs or hashes, allowing stateless horizontal scaling without session-affinity requirements on load balancers. This approach supports high concurrency by providing atomic operations for session updates and retrievals.

For real-time analytics, Redis excels in scenarios requiring immediate counting and ranking, such as leaderboards implemented with sorted sets or counters built on atomic increment operations like INCR. Sorted sets maintain ordered collections of scores, enabling efficient range queries to display top performers, while INCR supports thread-safe counting for metrics like page views or user actions, providing instant insights in applications such as gaming or social platforms.

Redis also functions as a lightweight message queuing system, using lists for simple task queues, streams for durable message persistence with consumer groups, and the publish-subscribe (pub/sub) model for real-time notifications. Lists and streams handle task offloading to background workers, ensuring reliable delivery of jobs such as email sending, while pub/sub broadcasts events to multiple subscribers for decoupled communication.

Emerging use cases include vector databases, where Redis stores embeddings for similarity searches in recommendation systems and fraud detection. In recommendations, vector similarity matches user preferences to items for personalized suggestions, while in fraud detection it compares behavioral vectors against known profiles to identify anomalies.
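
The caching, leaderboard, counter, and queuing patterns above can be sketched in a few lines with the redis-py client; the key names (cache:user:42, leaderboard, pageviews:home, jobs:email) and payloads are purely illustrative.

    import json
    import redis

    r = redis.Redis(decode_responses=True)

    # Cache-aside with TTL: store a computed value under a key that expires
    # after 300 seconds, so stale entries are evicted automatically.
    profile = {"id": 42, "name": "Ada"}
    r.set("cache:user:42", json.dumps(profile), ex=300)
    cached = r.get("cache:user:42")   # returns None once the TTL has elapsed

    # Leaderboard with a sorted set: ZADD records scores, ZINCRBY updates a
    # score atomically, and ZRANGE (descending) returns the current leaders.
    r.zadd("leaderboard", {"alice": 120, "bob": 95})
    r.zincrby("leaderboard", 15, "bob")
    top3 = r.zrange("leaderboard", 0, 2, desc=True, withscores=True)

    # Atomic counter: INCR is safe under concurrent writers.
    views = r.incr("pageviews:home")

    # Simple task queue: producers LPUSH jobs, a worker blocks on BRPOP.
    r.lpush("jobs:email", json.dumps({"to": "user@example.com"}))
    queue, payload = r.brpop("jobs:email", timeout=5)

For stricter delivery guarantees than a plain list provides, streams with consumer groups (XADD and XREADGROUP) track pending entries per consumer so that unacknowledged jobs can be reclaimed after a worker failure.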

Notable Deployments and Popularity

Redis maintains a prominent position in database popularity rankings, holding the #7 spot in the DB-Engines ranking as of November 2025, a score based on metrics including mentions in technical discussions, job postings, and search queries. This ranking highlights its enduring appeal as a versatile in-memory data store, with steady growth amid competition from relational and NoSQL alternatives. Industry surveys likewise indicate broad enterprise penetration, with widespread adoption for caching, session management, and real-time analytics.

High-profile users exemplify its scalability: Twitter (now X) has relied on Redis for timeline caching to deliver personalized feeds, Redis-backed job queues such as Python's RQ handle asynchronous background processing, and Stack Overflow has leveraged it for real-time features such as vote counting and notifications (as of 2016). Tencent deploys Redis clusters for messaging and social features, demonstrating its capacity for massive-scale workloads.

In 2025, Redis's popularity has surged in AI applications, particularly for vector search and semantic caching, and it was ranked the most-used data store for AI agents in the 2025 Stack Overflow Developer Survey, which drew responses from over 49,000 developers. The emergence of Valkey, a community-driven Redis fork, has gained traction among cloud providers; AWS, for instance, added Valkey support to Amazon ElastiCache in October 2024 to offer an open-source alternative amid the licensing shifts. The Redis community remains vibrant, with the core repository exceeding 60,000 stars and annual RedisConf events drawing thousands of attendees to explore advances in real-time data processing. Amid the licensing changes, forks like Valkey have seen increased adoption in cloud environments to maintain open-source compatibility.
