
Rendezvous hashing

Rendezvous hashing, also known as highest random weight (HRW) hashing, is a technique designed to map objects to servers in distributed systems, enabling clients to independently agree on assignments without centralized coordination while minimizing data remapping when the set of available servers changes. It addresses the challenges of load balancing in dynamic environments by assigning each object to the server that yields the highest "random weight" computed via a hash function combining the object and server identifiers. The core algorithm operates by evaluating a hash function h(o, s) for an object o against each server s in the current set, selecting the server with the maximum value as the assignee; this process ensures load balance (each server receives approximately 1/|V| of the objects, where |V| is the number of servers) and monotonicity (objects assigned to existing servers remain there upon changes to the set).

To improve balance and handle probabilistic variations, multiple replicas of servers (typically O(\log n) per server) can be used, distributing objects more evenly and bounding the maximum load per server to O(\log n / n) of the total. A weighted variant, weighted HRW, extends the basic approach by incorporating server capacities through a scoring formula such as \text{Score}(o_i, s_j) = -w_j / \log(\text{Hash}(o_i, s_j) / H_{\max}), where w_j is the weight of server s_j and H_{\max} is the hash range maximum, allowing for proportional load distribution in heterogeneous environments while preserving minimal disruption: only assignments involving changed servers are affected. This variant maintains the original's efficiency, remaining a simple linear scan over the servers that is fast enough for real-time decisions.

Rendezvous hashing has found widespread applications in networking and distributed systems, including Ethernet VPN (EVPN) designated forwarder elections per RFC 8584, resilient load balancing over unequal-cost multipath links in data centers, and multicast designated router selection in Protocol Independent Multicast (PIM). Its advantages over alternatives like consistent hashing or other ring-based schemes include simpler implementation without virtual nodes, logarithmic-time selection via skeleton-based optimizations, and superior handling of node failures or additions with O(1/n) remapping probability per object.

Fundamentals

Definition and Motivation

Rendezvous hashing, also known as highest random weight (HRW) hashing, is a distributed hashing technique that assigns keys to nodes in a system by computing a score for each key-node pair and selecting the node yielding the highest score. This method ensures deterministic agreement among clients without requiring centralized coordination, as each participant independently evaluates the same hash function to identify the preferred node. The approach treats the hash outputs as pseudorandom weights, ranking nodes accordingly to achieve a consistent and balanced assignment.

The motivation for rendezvous hashing stems from the challenges of coordination in dynamic distributed systems, such as web caching, peer-to-peer networks, and load-balanced clusters, where nodes frequently join or leave and centralized controllers are prone to single points of failure. Traditional methods such as simple modulo hashing often lead to hotspots, uneven load distribution, or significant remapping disruptions when the system topology changes, exacerbating latency and inefficiency. In contrast, rendezvous hashing provides uniform distribution across nodes while minimizing reassignments, typically affecting only a small fraction (about 1/N, where N is the number of nodes) of keys upon node additions or failures, thus supporting scalable load balancing and replication without hotspots or coordinator dependencies.

A high-level example illustrates its application in web caching: to assign a cache object (identified by a key such as a URL) to one of several proxy servers, the system hashes the key concatenated with each server's identifier, then routes the object to the server with the maximum hash value, ensuring even load spread and fault tolerance as servers come and go. This decentralized determinism contrasts with naive approaches by guaranteeing the same outcome across clients while adapting gracefully to environmental changes.

Basic Algorithm

The basic algorithm of rendezvous hashing, also known as highest random weight (HRW) hashing, assigns a given key K to one node from a set N = \{n_1, \dots, n_m\} of m nodes by computing a pseudorandom weight for each key-node pair and selecting the node with the highest weight. This ensures that all clients using the same hash function agree on the assignment without coordination. To compute the weight w(K, n_i), the algorithm applies a uniform hash function to the key and node identifier, typically by XORing or concatenating them and producing a numerical value. In the original formulation, a 31-bit digest D(K) of the key is first computed (e.g., via CRC-32, discarding the high bit), which is then XORed with the low-order 31 bits of the node address to form a temporary value; this temporary value seeds a pseudorandom function based on the BSD rand() generator, producing the weight modulo 2^{31}. In practice, a strong hash such as MD5 or SHA-1 is applied to the concatenation K || n_i to generate a uniform 128-bit or 160-bit value, which is interpreted as an integer and compared numerically. The procedure can be expressed in the following pseudocode, which iterates over all nodes to find the maximum:
function assign_key(K, N):
    max_weight = -infinity
    selected_node = None
    for ni in N:
        weight = hash(K || ni)  // or XOR-based as in the original formulation
        if weight > max_weight:
            max_weight = weight
            selected_node = ni
    return selected_node
This iterative approach requires evaluating the hash function once for each node. The time complexity of a single lookup or assignment is O(m), where m is the number of nodes, due to the linear scan; this is negligible for small m but scales with cluster size. For illustration, consider a key K = "file123" and nodes N = \{"serverA", "serverB", "serverC"\}, using an MD5 hash interpreted as an integer (in practice, the full 128-bit value is compared). The computed weights might be: \text{MD5("file123" || "serverA")} = 0x5c88... (integer value ≈ 1.23 × 10^{38}), \text{MD5("file123" || "serverB")} = 0x4301... (≈ 0.89 × 10^{38}), \text{MD5("file123" || "serverC")} = 0x6d1a... (≈ 1.45 × 10^{38}). The algorithm selects "serverC" as it has the highest weight. If a node fails, recomputation over the remaining nodes reassigns K to the new highest-weight node, affecting only about 1/m of keys on average.
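The pseudocode above translates directly into a few lines of runnable code. The following is a minimal Python sketch, assuming MD5 as the hash (matching the example above) and simple string concatenation of key and node identifier; the helper names are illustrative, not part of any standard rendezvous hashing library.

import hashlib

def weight(key, node):
    # Interpret the full 128-bit MD5 digest of key || node as an integer.
    return int.from_bytes(hashlib.md5((key + node).encode()).digest(), "big")

def assign_key(key, nodes):
    # Linear scan: every client evaluating this over the same node set
    # picks the same node, with no coordination required.
    return max(nodes, key=lambda n: weight(key, n))

print(assign_key("file123", ["serverA", "serverB", "serverC"]))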

Core Properties

Rendezvous hashing, also known as highest random weight (HRW) hashing, exhibits determinism in its mapping process, ensuring that all clients independently compute the same assignment for a given key to a node without requiring any inter-client communication. This property arises from the stateless nature of the algorithm, where the assignment is based solely on the key, the set of available nodes, and a fixed pseudorandom hash function that generates consistent weights for each key-node pair.

A core strength of rendezvous hashing lies in its load balancing capabilities: keys are expected to distribute evenly across the n available nodes, with each node receiving approximately 1/n of the keys. The probability that a specific node receives a given key is exactly 1/n, as the node with the highest random weight for that key is selected, and the weights are uniformly distributed and independent, making each node equally likely to have the maximum. This even distribution is concentrated around the mean, with the variance of the load fraction being O(1/n) for a single key and improving with more keys m, as the relative deviation approaches zero for m ≫ n. For large m, the maximum load on any node deviates from the expected m/n by O(\sqrt{(m/n) \log n}) with high probability, ensuring balanced loads without excessive variance.

The algorithm minimizes remapping upon node changes, such that when a node is added or removed, only an O(1/n) fraction of keys are reassigned to different nodes. This low disruption coefficient of 1/n represents the optimal bound for any hashing scheme, as it matches the expected fraction of keys that would need to move in a balanced system under such changes. Rendezvous hashing avoids hotspots by leveraging random weights assigned via the hash function, which prevent systematic biases in key assignments and ensure that no node is disproportionately favored due to patterns in keys or identifiers. As the number of keys grows large relative to the number of nodes, the relative variance in loads approaches zero, guaranteeing a smooth distribution without concentration effects.

In terms of fault tolerance, rendezvous hashing gracefully handles node failures by simply recomputing the maximum weight over the remaining set of operational nodes for affected keys, resulting in minimal overall disruption, since only the keys previously assigned to the failed node require reassignment. This recomputation maintains the same load balancing and low-remapping properties over the reduced node set.
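These properties are easy to check empirically. The short simulation below is a sketch, assuming an illustrative BLAKE2b-based weight function rather than any canonical choice; it assigns a batch of keys before and after adding a node, and the per-node counts come out near m/n while the moved keys come out near 1/(n+1) of the total, all landing on the new node.

import hashlib
from collections import Counter

def weight(key, node):
    data = f"{key}:{node}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def assign(key, nodes):
    return max(nodes, key=lambda n: weight(key, n))

nodes = [f"node{i}" for i in range(10)]
keys = [f"key{i}" for i in range(100_000)]

before = {k: assign(k, nodes) for k in keys}
print(Counter(before.values()))  # each node holds roughly 10% of the keys

nodes.append("node10")
after = {k: assign(k, nodes) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
print(len(moved) / len(keys))                    # roughly 1/11 of the keys
print(all(after[k] == "node10" for k in moved))  # True: moves only target the new node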

Historical Development

Origins and Invention

Rendezvous hashing, also known as highest random weight (HRW) hashing, was invented by David G. Thaler and Chinya V. Ravishankar at the University of Michigan. The mechanism was first described in the 1996 technical report CSE-TR-316-96 and formally published in 1998. The invention was motivated by the challenges of distributed web caching in the mid-1990s, where clusters of proxy servers needed to coordinate object placement to maximize hit rates and minimize replication without relying on centralized control. Traditional mapping methods led to redundant storage and low efficiency in environments with high request repetition, such as WWW proxies. HRW addressed this by providing a decentralized algorithm for consistent object-to-server mapping, allowing independent clients to agree on cache locations using only local computations and shared hash functions. This approach was particularly suited to scenarios requiring decentralized agreement and minimal disruption, including rendezvous-based protocols.

The core idea emerged as part of efforts to improve name-based mappings in distributed systems, initially termed the "highest random weight" mechanism for selecting among cache arrays. For a given object name and set of servers, HRW computes a hash of the combined name-server pair for each server and assigns the object to the server yielding the maximum value, ensuring deterministic yet randomized load distribution. This innovation laid the foundation for later adaptations in load balancing and peer-to-peer systems, emphasizing simplicity and minimal disruption during server changes.

Key Milestones and Publications

A key adoption milestone occurred in 1998 with its integration into the Cache Array Routing Protocol (CARP), detailed in an IETF draft by Vinod Valloppillil. CARP employed rendezvous hashing to distribute HTTP requests across proxy cache arrays, enabling load balancing and coordinated caching in loosely coupled systems, which demonstrated its practical utility in web infrastructure. Later, efficiency optimizations emerged, particularly the skeleton-based hierarchical variant that reduced lookup time from O(n) to O(log n) by constructing a virtual tree over the nodes. This advancement, building on the foundational ideas, was explored in applied contexts such as database partitioning; for instance, a 2016 study proposed an adaptive partitioning module for the Cassandra NoSQL database using this approach to enhance load balancing in distributed databases. Recent developments include the 2023 IETF draft on Weighted Highest Random Weight (HRW) hashing, which extends rendezvous methods to support node weights for uneven load distribution in applications like BGP routing, with updates continuing into 2025.

Efficiency Optimizations

Skeleton-Based Hierarchical Approach

The basic rendezvous hashing algorithm computes the hash value for a given key against each of the n nodes and selects the node with the maximum value, leading to an O(n) time complexity per lookup that scales poorly for large clusters with thousands of nodes. This linear scan becomes a bottleneck in high-throughput distributed systems, prompting the development of efficiency optimizations like the skeleton-based hierarchical approach, which preprocesses the nodes into a tree structure to enable sublinear lookup times while preserving the probabilistic balance and minimal remapping properties of rendezvous hashing.

The core of this approach is the skeleton, a precomputed hierarchical representation of the nodes that reduces the search space during lookups. A balanced tree is constructed with the physical nodes as leaves and virtual nodes as internal aggregators, where the virtual identifiers for internal nodes are derived from combinations of their children's identifiers (e.g., via concatenation or hashing). The skeleton effectively forms a compact overlay of linear size but logarithmic depth, built in O(n log n) preprocessing time by recursively partitioning the nodes and assigning virtual labels that allow hash-based decisions at each level. This structure ensures that the tree captures the full set of nodes without requiring a separate small subset such as \sqrt{n} representatives, though the effective skeleton depth controls the refinement process.

To adapt the algorithm, lookups begin at the tree root and traverse downward by evaluating hashes against virtual child identifiers. For a key K at an internal node with children c_1 and c_2, the traversal selects the child c achieving \max(h(K, c_1), h(K, c_2)), where h is the rendezvous hash function (e.g., a keyed hash like SHA-1). This decision propagates the potential maximum down the tree, refining the candidate set from the full n nodes to a single leaf in O(log n) steps, with each step requiring only two hash computations. Upon reaching a leaf, the selected node is the approximate highest-weight assignee for K, providing an efficient proxy for the exact O(n) maximum. This method achieves O(log n) lookup time after the initial O(n log n) construction, making it suitable for static or semi-static clusters where preprocessing is feasible. In practice, the hierarchical propagation closely approximates the uniform load distribution of basic rendezvous hashing, with deviations bounded by the tree's balance, as demonstrated in partitioning frameworks for scalable systems.
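A compact way to see the traversal is the binary-tree sketch below; the Node class, label scheme, and BLAKE2b-based hash are illustrative assumptions rather than part of the original formulation, and a fuller multi-level outline appears in the implementation section later.

import hashlib

def h(key, label):
    # Rendezvous hash of a key against a (virtual or physical) node label.
    data = f"{key}:{label}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

class Node:
    def __init__(self, label, children=()):
        self.label = label          # virtual label internally, server id at a leaf
        self.children = list(children)

def skeleton_lookup(key, root):
    node = root
    while node.children:  # one level per step: O(log n) hash comparisons total
        node = max(node.children, key=lambda c: h(key, c.label))
    return node.label

# Four physical servers under a two-level skeleton of virtual aggregators.
root = Node("v-root", [
    Node("v-left",  [Node("serverA"), Node("serverB")]),
    Node("v-right", [Node("serverC"), Node("serverD")]),
])
print(skeleton_lookup("file123", root))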

Managing Node Changes and Failures

In the hierarchical variant of rendezvous hashing, node additions are managed by inserting the new node as a leaf in the virtual tree, which requires O(log n) time for the insertion operation. Only the keys for which the new node now yields the maximum hash value are remapped to it, affecting an expected fraction of O(1/n) of the total keys, where n is the number of nodes; this minimal disruption ensures that the vast majority of existing mappings remain unchanged. Node failures or removals are handled similarly by excising the affected leaf (or its subtree) from the tree, followed by recomputing the maximum values solely for the keys that were mapped to the removed node. This recomputation is localized to the affected subtree, again impacting only an O(1/n) fraction of keys, and is performed incrementally in O(log n) time per change to avoid global recomputation. Full system rebuilds are infrequent and reserved for rare scenarios like multiple simultaneous failures, as the design prioritizes incremental maintenance to sustain performance.

For replication, the algorithm extends naturally to k replicas by selecting the k nodes with the highest hash values for each key, rather than just the maximum; upon a node failure, any lost replicas are reassigned to the next-highest available nodes from this ordered list, preserving data availability with minimal additional movement (see the sketch below). These mechanisms result in low rebuilding costs, with incremental updates enabling rapid recovery; for instance, in a simulated cluster of 100 nodes, the failure of a single node typically requires remapping about 1% of keys, distributed evenly across survivors to restore balance quickly. Post-change, the system retains its core properties of uniform load distribution and low variance in key assignments, ensuring ongoing scalability and fault tolerance without cascading overloads.
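A minimal sketch of the top-k replica rule, reusing the same illustrative hash helper as the earlier examples: ranking nodes by weight yields the replica list, and dropping a failed node simply promotes the next-highest candidate.

import hashlib

def weight(key, node):
    data = f"{key}:{node}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def top_k(key, nodes, k):
    # The first k nodes in descending weight order hold the k replicas.
    return sorted(nodes, key=lambda n: weight(key, n), reverse=True)[:k]

nodes = ["node0", "node1", "node2", "node3", "node4"]
replicas = top_k("file123", nodes, k=3)
print(replicas)

# If one replica holder fails, rerunning over the survivors keeps the other
# replicas in place and promotes only the next-highest node.
survivors = [n for n in nodes if n != replicas[0]]
print(top_k("file123", survivors, k=3))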

Comparisons with Other Hashing Methods

Relation to Consistent Hashing

Consistent hashing, introduced as a technique for distributed caching, maps both keys and nodes to points on a fixed circular space, often represented by the integer range [0, 2^m - 1] for a large m. A key K is assigned to the node N_i whose hash position is the immediate successor to h(K) in the clockwise direction around the circle; if no such node exists, the search wraps around to the first node encountered. This approach ensures that only a small fraction of keys are remapped when nodes are added or removed, minimizing disruption in dynamic environments.

Rendezvous hashing, also known as highest random weight (HRW) hashing, generalizes this by assigning a key K to the node N_i that maximizes an arbitrary hash function h(K, N_i) over all available nodes, without relying on a fixed geometric structure like a ring. This allows for flexible probability distributions over assignments, as the choice of h can tailor the load balancing beyond uniform circular placement. In contrast to consistent hashing's rigid successor rule, rendezvous hashing's max-based selection provides a more abstract and adaptable mechanism for consistent mapping.

Mathematically, consistent hashing can be viewed as a special case of rendezvous hashing within this general model: the clockwise-successor assignment corresponds to choosing, for each key, the node that maximizes a score derived from the circular distance between h(K) and the node's position on the ring, so the ring structure is emulated through a maximization operation. This equivalence holds because monotone ranged hash functions, which underpin consistent hashing, are equivalent to π-hash functions that select the highest-ranked node in a permutation induced by the key, as proven in the foundational analysis. Consistent hashing originates from the seminal work by Karger et al. on distributed caching protocols to alleviate hot spots on the World Wide Web, while HRW was proposed independently by Thaler and Ravishankar for selecting cache locations via highest random weights; together they provide a unified probabilistic foundation for consistent mappings in cooperative caching systems.
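The two selection rules are short enough to compare directly in code. The following sketch uses illustrative helper names and places a single hash point per node (production ring implementations typically add virtual nodes); it implements the clockwise-successor lookup and the HRW lookup side by side.

import bisect
import hashlib

def h64(s):
    return int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

class HashRing:
    def __init__(self, nodes):
        # Place each node at one point on the circle [0, 2^64).
        self.points = sorted((h64(n), n) for n in nodes)
        self.positions = [p for p, _ in self.points]

    def lookup(self, key):
        # Clockwise successor of h(key), wrapping past the top of the range.
        i = bisect.bisect_right(self.positions, h64(key)) % len(self.points)
        return self.points[i][1]

def hrw_lookup(key, nodes):
    # Highest random weight: maximize h(key, node) over all nodes.
    return max(nodes, key=lambda n: h64(f"{key}:{n}"))

nodes = ["serverA", "serverB", "serverC"]
print(HashRing(nodes).lookup("file123"))
print(hrw_lookup("file123", nodes))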

Advantages and Trade-offs

Rendezvous hashing presents several key advantages relative to alternatives such as consistent hashing, primarily in its straightforward design and superior load distribution properties. Unlike consistent hashing, which relies on a circular hash space and often requires virtual nodes to achieve balance, rendezvous hashing employs a simple highest-weight selection mechanism that avoids such structures, enabling decentralized, stateless decisions with minimal implementation complexity. This simplicity facilitates easier integration into systems where low overhead and local computation are prioritized. Furthermore, it inherently supports weighted assignments by incorporating server capacities directly into the weight computation, allowing for proportional load distribution without additional algorithmic modifications.

In terms of load balance, rendezvous hashing delivers tighter bounds on variance, resulting in more uniform key distribution across nodes compared to ring-based consistent hashing, even in the presence of failures or membership changes. For instance, the maximum load deviation remains close to the expected share, with disruption limited to 1/m when a single server is added or removed, where m is the number of servers, outperforming the potentially higher remapping in ring-based methods. This leads to reduced hotspots and better worst-case guarantees, particularly beneficial in dynamic environments.

However, these benefits come with notable trade-offs. The core algorithm incurs O(m) hash evaluations per lookup, since it scores every node, contrasting with the O(log m) or amortized O(1) lookups in optimized consistent hashing implementations using sorted rings or trees. For large-scale systems, this linear cost necessitates preprocessing-intensive optimizations like hierarchical skeletons to achieve logarithmic-time operations, increasing upfront computational demands. Consequently, rendezvous hashing scales most effectively in small- to medium-sized clusters (typically under a few hundred nodes), where its balance advantages outweigh the lookup overhead. Rendezvous hashing is ideally suited for use cases emphasizing frequent read operations and minimal inter-node coordination, such as routing in caching and multicast protocols, where uniform selection without centralized coordination enhances reliability and fault tolerance.

Variations and Extensions

Weighted Rendezvous Hashing

Weighted rendezvous hashing adapts the core rendezvous hashing algorithm to environments where nodes possess unequal capacities or sizes by incorporating node-specific weights w_i to bias key assignments accordingly. The probability that a given key K is assigned to node i is designed to be proportional to w_i / \sum_j w_j, ensuring higher-capacity nodes receive a larger share of the load while maintaining the minimal disruption property of the unweighted variant upon node changes.

A primary method for achieving this bias involves modifying the score computation without replicating nodes virtually. The weighted score for key K and node n_i is given by \text{score}(K, n_i) = -\frac{w_i}{\log \left( \frac{h(K, n_i)}{H_{\max}} \right)}, where h(K, n_i) is the raw hash value (assumed uniformly distributed in [0, H_{\max}]), and the node with the maximum score is selected as the rendezvous point. This formulation leverages the logarithmic transformation to adjust the effective randomness, making higher-weighted nodes more likely to produce the highest score, with the proportionality holding under mild assumptions on hash uniformity. The algorithm iterates over all nodes to compute and compare these scores, selecting the argmax as in the unweighted case; for top-k selections, weights can influence threshold adjustments to prioritize higher-capacity nodes in the ranking.

An alternative approach uses virtual identifiers proportional to weights, where each node n_i is represented by \lceil c \cdot w_i \rceil virtual IDs for some constant c, and hashes are computed against these IDs to find the maximum, with the owning node chosen. This method explicitly enforces the proportional probability by increasing selection opportunities for heavier nodes, though it increases computational overhead linearly with the total number of virtual IDs. The score-based method has O(n) time complexity, where n is the number of nodes, while the virtual ID method's complexity is linear in the total number of virtual IDs.

For illustration, consider three servers with capacities yielding weights 1, 2, and 3 (sum = 6). Over many keys, approximately 17% assign to the first server, 33% to the second, and 50% to the third, as the weighted scores or virtual counts favor the higher-capacity options; in practice, simulations show deviations below 1% for large datasets, demonstrating effective load proportionality (see the simulation sketch below).
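The 1:2:3 example can be checked with a few lines of simulation; the hash helper and key naming below are illustrative assumptions, and the observed shares converge on roughly 1/6, 2/6, and 3/6.

import hashlib
import math
from collections import Counter

def h64(s):
    return int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

def weighted_score(key, node, w):
    x = h64(f"{key}:{node}") / 2**64      # uniform in [0, 1)
    return -w / math.log(max(x, 1e-12))   # clamp to avoid log(0)

weights = {"server1": 1, "server2": 2, "server3": 3}
counts = Counter(
    max(weights, key=lambda n: weighted_score(f"key{i}", n, weights[n]))
    for i in range(60_000)
)
for node, c in sorted(counts.items()):
    print(node, c / 60_000)  # ~0.17, ~0.33, ~0.50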

Protocol-Specific Adaptations

The Cache Array Routing Protocol (CARP), proposed in an IETF draft, employs highest random weight (HRW) hashing, a form of rendezvous hashing, to select proxy cache servers in hierarchical arrays for load balancing HTTP requests and minimizing redundant caching across loosely coupled servers. This adaptation computes a hash value for each URL-server pair, assigning the request to the server yielding the highest value, which ensures deterministic resolution without requiring centralized coordination or additional cache-to-cache protocols.

In distributed storage systems, rendezvous hashing adaptations facilitate controlled replication by identifying the top-k nodes with the highest computed weights for each data item, thereby enforcing constraints such as capacity limits, locality preferences, or availability requirements while maintaining load balance. This top-k selection leverages the inherent probabilistic uniformity of HRW to distribute replicas evenly, reducing hotspots in scenarios where replication factors exceed simple pairwise assignments.

Weighted variants of rendezvous hashing have been adapted for Border Gateway Protocol (BGP) route selection in anycast deployments, as outlined in an IETF draft from the BGP Enabled Services (BESS) working group. In Ethernet VPN (EVPN) multi-homing environments, this approach computes bandwidth-weighted scores for candidate paths, selecting routes that proportionally distribute traffic across peers with varying capacities, thereby enhancing high availability and load balancing in anycast scenarios without significant reconfiguration overhead. The weighted HRW mechanism integrates with BGP attributes to influence path preferences, ensuring minimal disruption during node additions or failures.

Applications and Implementations

Real-World Systems and Use Cases

Rendezvous hashing, also known as highest random weight (HRW) hashing, was originally developed for distributed caching and load distribution, and this family of techniques saw early use in content delivery networks (CDNs). In the late 1990s, Akamai deployed such hashing to map web objects to cache servers, effectively relieving hot spots caused by uneven traffic loads during high-demand events like major news releases. This approach ensured balanced distribution without centralized coordination, handling the scale of global web traffic at the time by minimizing remapping disruptions when caches were added or removed.

In networking protocols, rendezvous hashing is employed for load balancing, particularly in Ethernet VPN (EVPN) designated forwarder (DF) election as standardized in RFC 8584. Weighted variants of rendezvous hashing have been proposed within the Internet Engineering Task Force (IETF) for applications like EVPN DF election, where they account for unequal node capacities to distribute traffic more evenly while supporting dynamic network changes; as of July 2025, this remains an active draft.

Distributed storage systems leverage rendezvous hashing for object placement and replication to achieve uniform distribution across nodes. In schemes proposed for scalable data storage, such as round-hashing extensions, it assigns data to servers by selecting the node with the highest hash-based score, optimizing for uniformity and load balance in large-scale clusters.

Beyond core networking and storage, rendezvous hashing supports layer 4 (L4) load balancers for consistent traffic steering. GitHub's Global Load Balancer (GLB) Director, for example, incorporates it to select backend servers based on client and server weights, enabling anycast-style routing across data centers with minimal reconfiguration during failures. In database sharding, partitioning schemes have been enhanced with rendezvous hashing to improve uniformity; a MapReduce-based framework for the Cassandra NoSQL database integrates it to create virtual hierarchies, distributing records more evenly than traditional methods and reducing hotspots in NoSQL clusters.

Recent adoptions highlight its growing role in cloud-native environments. In 2025, rendezvous hashing has been applied in distributed caching layers, allowing services to map data to nodes dynamically and eliminate single points of failure in distributed applications. This enables efficient pod selection in service meshes by providing stateless agreement on replicas, supporting scalable caching without heavy coordination overhead.

Practical Implementation Details

Implementing rendezvous hashing in practice begins with a language-agnostic pseudocode representation of the core algorithm, extended for robustness. The basic procedure computes a hash for each key-node pair and selects the node(s) with the highest score. To handle edge cases, include checks for empty node sets or invalid keys, returning an error or a default fallback such as round-robin assignment. Pseudocode example:
function rendezvous_hash(key, nodes, k=1):
    if nodes is empty:
        raise Error("No available nodes")
    if key is invalid (e.g., null or empty):
        return fallback_assignment(nodes, k)  // e.g., first k nodes
    scores = []
    for each node in nodes:
        combined_hash = hash_function(key + node_identifier)
        score = compute_score(combined_hash)  // e.g., treat as float in [0,1] or raw value
        scores.append((score, node))
    sort scores descending
    return top k nodes from scores
This approach ensures graceful degradation, as described in implementations focusing on reliability in distributed environments. Selecting an appropriate hash function is critical for performance, as rendezvous hashing requires computing hashes for every key-node pair. Non-cryptographic hash functions like SipHash or xxHash are recommended for their speed and low collision rates, achieving throughputs of several gigabytes per second on modern hardware while resisting hash flooding attacks without the overhead of cryptographic functions like SHA-256. Avoid cryptographic hashes for production use, as they introduce unnecessary latency (up to 10x slower than xxHash) without benefiting the needs of rendezvous hashing. For instance, MurmurHash3, a similarly fast option, is used in mature libraries for its speed and distribution quality.

For hierarchical implementations, which address scalability in large clusters by organizing nodes into a tree structure, the tree building and lookup can be outlined in Python-like pseudocode. First, partition nodes into levels (e.g., racks as level 1, servers within racks as level 2), then apply rendezvous hashing successively to descend the tree. Libraries like clohfink/RendezvousHash on GitHub provide building blocks, though full hierarchical support may require custom extension. Example outline:
class HierarchicalRendezvous:
    def __init__(self, tree_levels):  # tree_levels: list of node lists, one per level
        self.levels = tree_levels

    def build_tree(self):
        # Pre-group nodes, e.g., level 0: roots, level 1: children.
        pass  # Static or dynamic partitioning based on cluster topology.

    def get_node(self, key, k=1):
        current_set = self.levels[0]  # Start at the root level.
        for level in range(len(self.levels) - 1):
            # Use a deterministic hash_function as in the pseudocode above;
            # Python's built-in hash() is salted per process and would break
            # cross-client agreement.
            scores = [(hash_function(key + str(node)), node) for node in current_set]
            top_nodes = sorted(scores, reverse=True)[:k]  # Keep the top-k branches.
            current_set = [child for _, node in top_nodes for child in node.children]  # Descend.
        return current_set[:k]  # Leaf nodes.
This reduces time complexity from O(N) to O(log N) by limiting evaluations per level. Weighted rendezvous hashing extends the basic form by incorporating node capacities into the score computation, often using a logarithmic adjustment to bias selection toward higher-weight nodes proportionally. A common snippet in Python-like code computes scores as -weight / log(hash_value), normalizing the hash to (0,1) to avoid log(0):
import hashlib
import math

def hash_function(s):
    # Illustrative 64-bit hash; any fast, well-distributed hash works here.
    return int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

def weighted_score(hash_value, weight):
    normalized_hash = hash_value / 2**64  # Normalize to [0, 1), keeping log(1) unreachable
    if normalized_hash == 0:
        normalized_hash = 1e-10  # Avoid log(0)
    return -weight / math.log(normalized_hash)

def weighted_rendezvous(key, weighted_nodes, k=1):
    scores = []
    for node, weight in weighted_nodes:
        score = weighted_score(hash_function(key + node), weight)
        scores.append((score, node))
    return [node for _, node in sorted(scores, reverse=True)[:k]]
This method ensures load distribution matches weights, as validated in weighted DHT research. Best practices for deployment emphasize efficiency and reliability. Precompute static node identifiers or hashes where possible to minimize per-request overhead, reducing computation by up to 12x in benchmarks. For large node counts (n > 100), approximate by sampling a subset of nodes or using hierarchical structures to avoid full scans. Test for balance by simulating node additions and removals and measuring variance in key assignments across 10,000+ keys, aiming for standard deviations below 1% of average load. In multi-client setups, ensure stateless computation or use thread-safe wrappers, as in C# libraries with non-blocking reads.

Key challenges include floating-point precision in weighted variants, where logarithmic normalization can amplify errors near hash boundaries (e.g., log(1e-10) yielding extreme scores); mitigate by clamping normalized values to [ε, 1-ε] with ε = 1e-9. Thread safety in shared implementations requires synchronized updates for dynamic node lists, as concurrent modifications without locks can lead to inconsistent mappings; dedicated libraries handle this via mutexes or immutable structures.
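As a concrete version of the balance test described above, the sketch below (illustrative helpers, reusing the unweighted assignment rule) measures the relative standard deviation of per-node load over 10,000 keys; expect a few percent at this sample size, shrinking as the key count grows.

import hashlib
import statistics

def weight(key, node):
    data = f"{key}:{node}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def assign(key, nodes):
    return max(nodes, key=lambda n: weight(key, n))

def balance_report(nodes, num_keys=10_000):
    counts = {n: 0 for n in nodes}
    for i in range(num_keys):
        counts[assign(f"key{i}", nodes)] += 1
    mean = num_keys / len(nodes)
    rel_std = statistics.pstdev(counts.values()) / mean  # std dev as fraction of mean load
    return counts, rel_std

nodes = [f"node{i}" for i in range(10)]
counts, rel_std = balance_report(nodes)
print(counts, rel_std)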
