
Linear probing

Linear probing is a collision resolution strategy employed in open-addressing hash tables, a data structure for storing key-value pairs in which all elements reside directly in an array of fixed size. Upon encountering a collision during insertion, that is, when the computed hash index is already occupied, the algorithm probes sequentially forward from that position, incrementing the index by 1 modulo the table size, until an empty slot is found, at which point the new element is inserted. This method, also known as linear open addressing, contrasts with chaining by avoiding linked lists and instead redistributing collided elements within the primary array.

For searching and deletion, linear probing follows the same probe sequence as insertion to maintain consistency. A search starts from the initial index and checks each subsequent slot until it finds the target key, an empty slot (indicating absence), or has completed a full traversal of the table. Deletion requires marking slots as "deleted" rather than emptying them, to preserve probe paths for future operations; slots are fully reclaimed only during rehashing, when the table is resized, typically by doubling its capacity and reinserting all elements into a new array. This approach keeps searches and insertions efficient as long as the load factor α (the ratio of stored elements to table size) stays below approximately 0.75.

One key advantage of linear probing is its simplicity of implementation and favorable locality of reference: probes access consecutive memory locations, so it can outperform chained hashing on modern cache-based hardware. However, it suffers from primary clustering, in which consecutive occupied slots form dense runs that lengthen probe sequences for nearby keys and degrade performance as the table fills; in highly loaded tables this can lead to O(n) worst-case time per operation. To mitigate clustering, variants such as quadratic probing and double hashing adjust the probe step size, though linear probing remains widely used in applications prioritizing speed over load tolerance.
Performance analysis under the uniform hashing assumption shows that the average number of probes for a successful search in a linear probing table is \frac{1}{2} \left(1 + \frac{1}{1 - \alpha}\right), while unsuccessful searches average \frac{1}{2} \left(1 + \frac{1}{(1 - \alpha)^2}\right), highlighting the quadratic sensitivity to the load factor. These formulas, derived from probabilistic models, underscore the importance of resizing well before α approaches 1; many analyses recommend keeping α in the range of roughly 0.5 to 0.75 for balanced efficiency.

Background

Hash tables and collision resolution

A hash table is a data structure that stores key-value pairs in an array of fixed size, using a hash function to compute an index for each key, thereby enabling average-case constant-time O(1) performance for insertions, deletions, and lookups. The hash function transforms a key into a non-negative integer within the range of array indices, ideally distributing keys uniformly to minimize overlaps. Collisions occur when two or more distinct keys map to the same array index via the hash function, a situation inevitable given the finite table size relative to the potentially vast key space. For example, in a table of size 10 with h(k) = k mod 10, the keys 14 and 24 both hash to index 4, requiring a resolution strategy to maintain correctness and accessibility. To address collisions, hash tables employ resolution techniques such as separate chaining or open addressing. In separate chaining, each slot holds a linked list, with colliding keys appended to the list at the hashed index; this approach simplifies implementation and supports high occupancy without severe degradation, though it incurs extra memory for list pointers and may involve traversal costs. Conversely, open addressing resolves collisions by searching for an unoccupied slot within the array itself, promoting efficiency by eliminating pointer overhead, but risking longer probe sequences as the table fills, which can amplify access times. The load factor α, defined as the number of keys n divided by the table size m (α = n/m), quantifies table utilization and directly influences performance; values approaching 1 heighten collision-resolution demands, potentially eroding the O(1) guarantee. Linear probing serves as one such strategy within the open-addressing paradigm.
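As a minimal sketch of the collision scenario above (the keys 14 and 24 and table size 10 are chosen purely for illustration), the following Java snippet shows two distinct keys mapping to the same slot under the modulo hash:

```java
// Minimal collision demo: distinct keys can share a slot under h(k) = k mod m.
public class CollisionDemo {
    static int hash(int key, int m) {
        return Math.floorMod(key, m); // non-negative index in [0, m-1]
    }
    public static void main(String[] args) {
        System.out.println(hash(14, 10)); // 4
        System.out.println(hash(24, 10)); // 4 -- same slot: a collision
    }
}
```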

Open addressing overview

Open addressing, also known as closed hashing, is a collision resolution technique in hash tables where all elements are stored directly within the table's array, without using external lists or chains. When a collision occurs, meaning the computed hash index for a key is already occupied, the algorithm resolves it by systematically probing subsequent slots in the array to find an empty one, guided by a probe function h(k, i), where k is the key and i is the probe count starting from 0. This approach was first formalized by W. Wesley Peterson in 1957 as a practical addressing method for random-access storage systems. Unlike separate chaining, which appends collided keys to linked lists at each array slot and incurs overhead from pointer storage, open addressing maintains a single contiguous array, reducing memory usage by avoiding pointers and enabling a more compact data representation. However, this efficiency comes at the cost of potentially longer probe sequences as the load factor, the ratio of stored elements to array size, increases, since unsuccessful searches must traverse occupied slots until an empty one or the key is found. Variants of open addressing differ in how the probe sequence is generated, in part to mitigate clustering, where successive probes tend to examine adjacent slots. Linear probing uses a constant step size of 1, advancing sequentially from the initial position. Quadratic probing uses an offset of i^2, producing a nonlinear sequence that spreads probes more evenly. Double hashing uses a secondary hash function h_2 to determine the step size, yielding the probe offset i \times h_2(k) \mod m, where m is the table size, for higher-quality probe permutations. The general insertion procedure in open addressing follows this skeleton:
function insert(key k):
    i ← 0
    repeat:
        slot ← (h₁(k) + probe(k, i)) mod m
        if table[slot] is empty or is deleted:
            table[slot] ← k
            return
        i ← i + 1
    until i == m  // table full, handle resize or failure
Here, h_1 is the primary hash and probe is the variant-specific offset function (e.g., probe(k, i) = i for linear probing); search and deletion follow a similar loop, terminating on a match, an empty slot, or exhaustion of the table. Open addressing offers advantages on modern systems, particularly cache efficiency, as probe sequences often access contiguous memory locations, improving locality and reducing cache misses compared to scattered chain traversals. Nonetheless, it presents challenges in deletion, where simply removing an element could disrupt probe paths for other keys, necessitating markers such as tombstones to preserve sequence integrity during searches. Additionally, the method is prone to clustering, where insertions form dense groups of occupied slots, exacerbating probe lengths and limiting practical load factors to around 0.7–0.8.
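The skeleton above can be sketched in Java with the probe function supplied as a parameter; the class and method names (OpenAddressing, insert) and the example keys are illustrative, not from any standard library:

```java
import java.util.function.IntBinaryOperator;

// Open-addressing insertion with a pluggable probe(k, i) offset function.
public class OpenAddressing {
    final Integer[] table;         // null marks an empty slot
    final IntBinaryOperator probe; // (key, i) -> probe offset

    OpenAddressing(int m, IntBinaryOperator probe) {
        this.table = new Integer[m];
        this.probe = probe;
    }

    int h1(int k) { return Math.floorMod(k, table.length); }

    // Inserts k; returns the slot used, or -1 when the table is full.
    int insert(int k) {
        for (int i = 0; i < table.length; i++) {
            int slot = Math.floorMod(h1(k) + probe.applyAsInt(k, i), table.length);
            if (table[slot] == null || table[slot] == k) {
                table[slot] = k;
                return slot;
            }
        }
        return -1; // table full: a real implementation would resize here
    }

    public static void main(String[] args) {
        OpenAddressing linear = new OpenAddressing(7, (k, i) -> i);        // linear probing
        OpenAddressing quadratic = new OpenAddressing(7, (k, i) -> i * i); // quadratic probing
        // Keys 3, 10, 17 all hash to slot 3 in a table of size 7.
        System.out.println(linear.insert(3) + " " + linear.insert(10) + " " + linear.insert(17));
        System.out.println(quadratic.insert(3) + " " + quadratic.insert(10) + " " + quadratic.insert(17));
    }
}
```

With table size 7, the colliding keys 3, 10, 17 land in slots 3, 4, 5 under linear probing but 3, 4, 0 under quadratic probing, showing how the offset function reshapes the probe sequence.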

Core operations

Search procedure

In linear probing, the search procedure begins by computing the initial hash position h(k) = \text{hash}(k) \bmod m, where k is the key, \text{hash} is the hash function, and m is the table size. The algorithm then probes sequentially forward from this position using the probe sequence h(k, i) = (h(k) + i) \bmod m for i = 0, 1, 2, \dots, examining each slot until it either locates the key k, encounters an empty slot (indicating the key is absent), or completes a full cycle through the table (indicating the table is full and the key is not present). The following pseudocode illustrates the search algorithm, incorporating the loop termination conditions:
Algorithm Search(table T, key k, size m)
    i ← 0
    while true do
        j ← (hash(k) + i) mod m
        if T[j] is empty then
            return not found  // Key absent
        if T[j].key = k then
            return found  // Key located at j
        i ← i + 1
        if i = m then
            return not found  // Table full, key absent
    end while
This procedure ensures that all potential positions in the probe sequence are checked, stopping early only when justified by an empty slot or a completed cycle. Consider a hash table of size 4, initially empty, after insertions of "apple" (hash value 1) and "banana" (hash value 3), resulting in slots [empty, "apple", empty, "banana"]. Searching for "apple" starts at index 1, finds the key immediately, and returns success after one probe. In contrast, searching for "cherry" (hash value 0) probes index 0 (empty), terminates immediately, and returns not found. To support deletions without disrupting searches, deleted slots are marked with a special "deleted" indicator rather than emptied, and the search treats them as occupied, continuing to probe past them until an empty slot or the key is found.
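The search pseudocode and the toy example can be transcribed directly into Java; the hash values (apple → 1, banana → 3, cherry → 0) are fixed by assumption, so a small map stands in for a real hash function:

```java
import java.util.Map;

// Linear-probing search over the size-4 toy table from the text.
public class LinearProbeSearch {
    // Assumed hash values for the example keys (not a real hash function).
    static final Map<String, Integer> HASH = Map.of("apple", 1, "banana", 3, "cherry", 0);

    // Returns the slot holding key, or -1 if the key is absent.
    static int search(String[] table, String key) {
        int m = table.length;
        int start = HASH.get(key);
        for (int i = 0; i < m; i++) {
            int j = (start + i) % m;
            if (table[j] == null) return -1;    // empty slot: key absent
            if (table[j].equals(key)) return j; // key located
        }
        return -1;                              // full cycle: key absent
    }

    public static void main(String[] args) {
        String[] table = {null, "apple", null, "banana"};
        System.out.println(search(table, "apple"));  // 1 (found after one probe)
        System.out.println(search(table, "cherry")); // -1 (slot 0 is empty)
    }
}
```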

Insertion process

In linear probing, the insertion process begins by computing the initial hash value h'(k) for the key k, where the hash table T has size m. The algorithm then follows the probe sequence h(k, i) = (h'(k) + i) \mod m for i = 0, 1, \dots, sequentially checking each slot starting from h'(k). If the key k is found at any probed slot, the insertion terminates without adding a duplicate. Otherwise, the key is placed in the first empty slot encountered (typically denoted NIL). If the probing completes a full cycle through all m slots without finding an empty one, the table is considered full and an overflow error is raised. The following pseudocode outlines the insertion procedure, adapted from standard open-addressing implementations:
HASH-INSERT(T, k)
    i ← 0
    repeat
        j ← (h'(k) + i) mod m
        if T[j] is empty
            T[j] ← k
            return j
        else if T[j] = k
            return j  // key already present
        i ← i + 1
    until i = m
    error "hash table overflow"
This procedure ensures that insertions respect the probe sequences used for searches, preventing overwrites of existing keys. To maintain performance, practical implementations monitor the load factor \alpha = n/m (where n is the number of elements) and resize the table, typically by doubling m and rehashing all elements, when \alpha exceeds a threshold such as 0.7, as higher loads degrade probe lengths significantly in linear probing. For example, consider a table of size m = 5 where h'(\text{"dog"}) = 2 and slots 2 and 3 are already occupied by other keys. The probe sequence starts at slot 2 (occupied), moves to slot 3 (occupied), then slot 4 (empty), inserting "dog" there. The insertion process relies on the same probing mechanism as search to detect duplicates and locate empty slots, ensuring consistency across operations without overwriting valid entries.
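The "dog" example can be replayed with a small sketch using integer keys and h'(k) = k mod m; the values 7 and 8 pre-occupying slots 2 and 3, and the key 12 standing in for "dog" (12 mod 5 = 2), are assumptions made for illustration:

```java
// HASH-INSERT for integer keys with h'(k) = k mod m; null marks an empty slot.
public class LinearProbeInsert {
    // Inserts k; returns the slot used, or -1 on "hash table overflow".
    static int insert(Integer[] table, int k) {
        int m = table.length;
        for (int i = 0; i < m; i++) {
            int j = Math.floorMod(k + i, m); // (h'(k) + i) mod m
            if (table[j] == null) {
                table[j] = k;
                return j;                    // placed in first empty slot
            }
            if (table[j] == k) return j;     // key already present: no duplicate
        }
        return -1;                           // full cycle: table overflow
    }

    public static void main(String[] args) {
        Integer[] table = new Integer[5];
        table[2] = 7;                        // stand-in keys occupying slots 2 and 3,
        table[3] = 8;                        // as in the text's example
        System.out.println(insert(table, 12)); // 12 mod 5 = 2: probes 2, 3, lands in 4
    }
}
```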

Deletion handling

Deletion in linear probing hash tables presents a challenge because simply setting a slot to empty after removing a key can disrupt the probe sequences of other keys that would have continued searching through that position, potentially making those keys unfindable in subsequent searches. To address this, the standard approach uses special markers known as tombstones, or "deleted" flags, placed in the slots of removed keys. These markers indicate that the slot is available for future insertions but is treated as occupied during searches, preserving the integrity of probe sequences. The deletion algorithm begins by performing the standard linear probing search to locate the key. If the key is found, the slot's value is replaced with the deleted marker rather than being set to empty. Searches and insertions are modified accordingly: during a search, if a tombstone is encountered, probing continues as if the slot were occupied by a non-matching key; during insertion, tombstones are considered available slots, allowing new keys to be placed there and reuse the space without breaking existing chains. The following pseudocode illustrates the deletion process:
function DELETE(key):
    i ← 0
    start ← hash(key) mod m
    while true do
        j ← (start + i) mod m
        if table[j] is empty:
            return  // key not found
        if table[j] = key:
            table[j] ← deleted  // place tombstone
            return
        i ← i + 1
        if i = m:
            return  // table full, key not found
    end while
This ensures that probe sequences remain continuous for other elements. However, repeated deletions lead to an accumulation of tombstones, which effectively increase the load factor by occupying slots without holding useful data, lengthening average probe sequences and degrading performance over time. To mitigate this, implementations often monitor the number of tombstones and trigger periodic rehashing into a larger table, or compaction, when tombstones reach a threshold such as half the table size. Consider an example with a hash table using linear probing and a simple hash function where both "apple" and "banana" hash to position 2. "Apple" is inserted first at position 2. "Banana" collides at 2, probes to 3 (empty), and is inserted there, with position 1 and positions 4 onward empty. Deleting "apple" at position 2 by setting it to a tombstone allows a search for "banana" to start at 2 (tombstone, continue probing), reach 3, and find it correctly, whereas setting position 2 to empty would stop the search prematurely. A subsequent insertion of a new key hashing to 2 could then reuse the tombstone slot.
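The "apple"/"banana" scenario can be sketched as follows; the constant-2 hash function and the tombstone sentinel string are assumptions made purely for illustration:

```java
// Tombstone deletion demo: both keys hash to slot 2 (toy hash function).
public class TombstoneDemo {
    static final String DELETED = "\u0000DELETED"; // tombstone sentinel (assumed unique)

    static int hash(String key) { return 2; }      // toy: every key hashes to 2

    // Linear-probing search; tombstones are passed over like non-matching keys.
    static int search(String[] t, String key) {
        for (int i = 0; i < t.length; i++) {
            int j = (hash(key) + i) % t.length;
            if (t[j] == null) return -1;           // empty slot stops the search
            if (t[j].equals(key)) return j;        // found
        }
        return -1;
    }

    static void delete(String[] t, String key) {
        int j = search(t, key);
        if (j >= 0) t[j] = DELETED;                // place tombstone, never empty the slot
    }

    public static void main(String[] args) {
        String[] t = new String[5];
        t[2] = "apple";                            // inserted first at its hash slot
        t[3] = "banana";                           // collided at 2, probed to 3
        delete(t, "apple");
        System.out.println(search(t, "banana"));   // 3: probing continues past the tombstone
        t[2] = null;                               // what naive deletion would have done
        System.out.println(search(t, "banana"));   // -1: the empty slot cuts the probe path
    }
}
```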

Performance analysis

Load factor impact

The load factor α in linear probing is defined as the ratio of the number of stored elements n to the table size m, that is, α = n/m. Linear probing performs efficiently while α remains below about 0.7–0.8, but degrades sharply as α nears 1.0 because probe sequences become substantially longer. To sustain efficiency, implementations typically resize the table, doubling its size and rehashing all elements, when α exceeds a threshold such as 0.7, keeping the load factor in a range where probe lengths stay manageable. Analysis under uniform hashing shows that at α = 0.5 the expected number of probes averages about 1.5 for a successful search and about 2.5 for an unsuccessful search or insertion; at α = 0.9 these rise to approximately 5.5 and over 50 probes, respectively, highlighting the method's vulnerability at high loads.
Load factor α | Successful search (probes) | Unsuccessful search (probes)
0.25 | 1.2 | 1.4
0.50 | 1.5 | 2.5
0.67 | 2.0 | 5.0
0.75 | 2.5 | 8.5
0.90 | 5.5 | 50.5
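These rows follow directly from the two expected-probe formulas quoted earlier; a small sketch evaluating them (values rounded in the table above):

```java
// Expected probe counts under uniform hashing, as functions of load factor a.
public class ProbeEstimates {
    // Successful search: (1/2) * (1 + 1/(1 - a))
    static double successful(double a) {
        return 0.5 * (1 + 1.0 / (1 - a));
    }
    // Unsuccessful search: (1/2) * (1 + 1/(1 - a)^2)
    static double unsuccessful(double a) {
        return 0.5 * (1 + 1.0 / ((1 - a) * (1 - a)));
    }
    public static void main(String[] args) {
        System.out.println(successful(0.5));    // 1.5
        System.out.println(unsuccessful(0.5));  // 2.5
        System.out.println(unsuccessful(0.75)); // 8.5
    }
}
```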
Compared to chaining, linear probing in open addressing is more sensitive to elevated α since the table fills to capacity at α = 1, but offers speed advantages at low α by eliminating pointer traversals in linked lists.

Probe sequence length

In linear probing, the expected number of probes for an unsuccessful search is given by \frac{1}{2} \left(1 + \frac{1}{(1 - \alpha)^2}\right), where \alpha denotes the load factor, the ratio of occupied slots to total table size. A rough intuition treats each probe as finding an empty slot independently with probability 1 - \alpha, so the number of probes follows a geometric distribution: the probability that exactly k probes are needed is (1 - \alpha) \alpha^{k-1}, giving the expectation \sum_{k=1}^{\infty} k (1 - \alpha) \alpha^{k-1} = \frac{1}{1 - \alpha}. The precise analysis of linear probing, due to Knuth, accounts for the sequential dependence between probes and yields the stronger quadratic form. For a successful search, the expected probe length is \frac{1}{2} \left(1 + \frac{1}{1 - \alpha}\right), obtained by averaging the insertion cost over all keys in the table, since a key is found by retracing the probes made when it was inserted. Successful searches therefore probe fewer locations on average, as they terminate upon finding the key rather than an empty slot. In the worst case, probe sequences can require up to O(m) probes, where m is the table size, occurring when the table is nearly full or when long clusters force traversal of most slots. The distribution of probe lengths approximates a geometric (exponentially decaying) form at moderate load factors. Theoretical predictions from these equations align closely with simulation results for load factors up to \alpha = 0.8, beyond which outcomes deviate increasingly from the idealized model.

Clustering issues

Primary clustering explanation

Primary clustering is a phenomenon observed in linear probing hash tables, where keys that hash to nearby indices tend to form long, contiguous runs of occupied slots, thereby extending the probe sequences for subsequent insertions and searches in that region of the table. This occurs because the probe sequence in linear probing advances by a fixed step size of 1, causing collided keys to sequentially fill adjacent slots and create dense clusters around the initial hash locations. The root cause stems from this uniform linear progression: when multiple keys hash to slots in close proximity, say indices 5, 6, or 7, they occupy a continuous block starting from the earliest available slot in that range, forcing later probes to traverse the entire run before finding an empty slot or the target key. For instance, consider a table of size 13 with initial hashes placing keys at position 5 (occupies 5), then 6 (occupies 6), and another at 5 (collides and occupies 7); a subsequent key hashing to 7 must now probe through 7 (occupied), 8, and beyond, extending the effective run across slots 5–7 and growing the cluster further with each similar collision. To illustrate cluster growth, envision a linear array representing the slots (0 to 12), initially all empty (denoted ·). After inserting a key hashing to slot 3 (occupies 3: · · · X · · · · · · · · ·), a second key hashing to 4 (occupies 4: · · · X X · · · · · · · ·), and a third key hashing to 3 (probes to 5: · · · X X X · · · · · · ·), a cluster forms at positions 3–5. A fourth key hashing to 2 occupies 2 and joins the run, extending it (· · X X X X · · · · · · ·). Over repeated insertions near this region, the cluster expands contiguously, as shown by the progressive filling of sequential slots.
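The diagrammed insertions can be replayed programmatically. This sketch tracks only occupancy, since placement under linear probing depends only on the hash values; the sequence 3, 4, 3, 2 matches the diagram above:

```java
// Cluster growth in a size-13 table: keys hashing to 3, 4, 3, 2 fill slots 2-5.
public class ClusterGrowth {
    // Places a key with hash value h; returns the slot it lands in, or -1 if full.
    static int place(boolean[] occupied, int h) {
        for (int i = 0; i < occupied.length; i++) {
            int j = (h + i) % occupied.length;
            if (!occupied[j]) {
                occupied[j] = true;
                return j;
            }
        }
        return -1;
    }
    public static void main(String[] args) {
        boolean[] t = new boolean[13];
        System.out.println(place(t, 3)); // 3
        System.out.println(place(t, 4)); // 4
        System.out.println(place(t, 3)); // 5 -- probes 3, 4, settles at 5: run spans 3-5
        System.out.println(place(t, 2)); // 2 -- the run now spans slots 2-5
    }
}
```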
This primary clustering in linear probing differs from the secondary clustering seen in other open-addressing schemes such as quadratic probing, where the issue arises specifically when multiple keys share the exact same initial hash value and thus follow identical probe sequences, rather than from the adjacency of different hash values.

Effects on performance

Primary clustering degrades the performance of linear probing by creating long runs of occupied slots, which significantly increase the length of probe sequences in affected regions. In the worst case, insertions or searches starting near a cluster can require traversing the entire run, leading to probe counts that approximate O(1/(1 - \alpha)^2) for unsuccessful operations at load factor \alpha, far exceeding the expected O(1) amortized time under idealized uniform-hashing assumptions. In practice this yields average probe lengths 2–3 times longer than those of clustering-resistant methods such as double hashing at moderate load factors, with higher variance in access times, since some operations resolve quickly while others suffer extended traversals through clusters. The sequential nature of linear probing benefits cache locality by accessing contiguous memory locations, often yielding fewer cache misses than scattered probing schemes. However, in clustered regions, long probe sequences can span multiple cache lines, incurring misses at cluster boundaries and amplifying the slowdown, particularly for larger tables where the spatial-locality advantage diminishes. Empirical analyses confirm these impacts, showing linear probing's average unsuccessful probe length rising sharply with clustering. For instance, at \alpha = 0.7, linear probing requires approximately 6 probes on average for unsuccessful searches, compared to about 3.3 for double hashing, a roughly 80% increase in probe count due to clustering. At higher loads such as \alpha = 0.9, the gap widens to about 5 times more probes (50.5 vs. 10). The following table summarizes average unsuccessful probe lengths from theoretical models and simulations, highlighting the degradation:
Load factor α | Linear probing (clustered) | Double hashing (uniform) | Slowdown factor
0.5 | 2.5 | 2.0 | 1.25×
0.7 | 6.1 | 3.3 | 1.85×
0.9 | 50.5 | 10.0 | 5.05×
These results underscore how clustering turns linear probing's theoretical efficiency into a practical bottleneck at loads above about 0.5. Deletions compound these issues through the use of tombstones (marked deleted slots), which do not create gaps but force subsequent probes to continue past them, effectively extending cluster lengths and increasing the number of steps needed to find empty slots or confirm absences. This raises the effective load factor and probe counts; analyses show that insertions can take \Theta(x^2) expected time at load factor 1 - 1/x without countermeasures, though tombstones can introduce a mild anti-clustering effect in mixed insert-delete workloads. In practice, frequent deletions can inflate average search times well beyond insertion-only scenarios at \alpha = 0.7, necessitating periodic rehashing to restore performance.
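The slowdown column is simply the ratio of the two standard unsuccessful-search estimates: \frac{1}{2}(1 + 1/(1-\alpha)^2) for linear probing versus 1/(1-\alpha) for uniform (double-hashing-like) probing. A quick evaluation:

```java
// Ratio of unsuccessful-search probe estimates: clustered vs. uniform probing.
public class ClusterSlowdown {
    static double linearProbing(double a) {  // (1/2)(1 + 1/(1-a)^2)
        return 0.5 * (1 + 1.0 / ((1 - a) * (1 - a)));
    }
    static double uniformProbing(double a) { // 1/(1-a)
        return 1.0 / (1 - a);
    }
    public static void main(String[] args) {
        double a = 0.5;
        System.out.println(linearProbing(a));                     // 2.5
        System.out.println(uniformProbing(a));                    // 2.0
        System.out.println(linearProbing(a) / uniformProbing(a)); // 1.25
    }
}
```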

Hash function selection

Ideal hash function properties

For optimal performance in linear probing, a hash function must generate values that are uniformly distributed across the table indices [0, m-1], where m denotes the table size, thereby reducing the probability of initial collisions among inserted keys. This property aligns with the simple uniform hashing assumption, under which each key hashes independently and with equal probability to any slot, promoting even spread and minimizing early clustering tendencies. Furthermore, the hash function should exhibit the avalanche effect, such that even a minor alteration in the input key, such as flipping a single bit, produces a markedly different hash value, on average changing roughly half the output bits to disrupt potential patterns in key similarities. This diffusion quality ensures that related keys do not predictably map to nearby slots, which is particularly beneficial in linear probing, where sequential searches amplify localized biases. To avoid inherent correlations between hash outputs and the linear probe increments (which could exacerbate linear probing's natural clustering), the function requires a degree of statistical independence from the probing mechanism; theoretical analysis establishes that 5-wise independent hashing suffices to achieve expected constant-time operations by bounding probe sequence lengths effectively. Such well-designed functions interact favorably with the load factor α (the ratio of occupied slots to table size), as their uniformity and diffusion postpone the buildup of probe chains, sustaining efficient insertions and searches at higher α values, typically up to around 0.5 to 0.7, before degradation sets in. In implementation, the uniformity of candidate hash functions can be assessed empirically via the chi-squared goodness-of-fit test, which compares the observed frequency of hash values across bins against the expected uniform distribution, with low p-values indicating non-random deviations that could impair probing performance.
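A chi-squared uniformity check of the kind described can be sketched as follows. The statistic is computed over bucket counts; comparing it against a critical value (or deriving a p-value) is omitted for brevity, and the example counts are invented for illustration:

```java
// Chi-squared goodness-of-fit statistic for hash-bucket counts vs. uniform.
public class ChiSquaredCheck {
    // counts[b] = observed keys hashed into bucket b; n = total keys hashed.
    static double statistic(int[] counts, int n) {
        double expected = (double) n / counts.length; // uniform expectation per bucket
        double chi2 = 0;
        for (int c : counts) {
            double d = c - expected;
            chi2 += d * d / expected;
        }
        return chi2;
    }
    public static void main(String[] args) {
        int[] perfect = {100, 100, 100, 100, 100, 100, 100, 100, 100, 100};
        int[] skewed  = {1000, 0, 0, 0, 0, 0, 0, 0, 0, 0};
        System.out.println(statistic(perfect, 1000)); // 0.0   (perfectly uniform)
        System.out.println(statistic(skewed, 1000));  // 9000.0 (grossly non-uniform)
    }
}
```

A low statistic is consistent with uniformity; a large one, as in the skewed case, signals a distribution that would aggravate clustering under linear probing.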

Practical choices and examples

For integer keys, a simple and common hash function in linear probing tables is the modulo operation, defined as h(k) = k \mod m, where k is the key and m is the table size; this maps keys uniformly to slots when m is chosen appropriately. For example, in a table of size 10, the keys 5, 15, and 25 all hash to slot 5, requiring linear probing to resolve the collisions by checking subsequent slots. For string keys, polynomial rolling hashes are widely used, treating the string as a number in base b reduced modulo m: h(s) = \left( \sum_{i=0}^{n-1} s_i \cdot b^{n-1-i} \right) \mod m, where s_i is the character code at position i, n is the string length, and b is typically 31 or 37 to balance computation speed and distribution quality. This approach avoids overflow by computing incrementally: initialize the hash to 0, then for each character c, update it as \text{hash} = (\text{hash} \cdot b + c) \mod m.
int hash = 0;
for (int i = 0; i < s.length(); i++) {
    hash = (hash * 31 + s.charAt(i)) % m;
}
In implementations such as those in educational algorithms libraries, Java's built-in String.hashCode() method, which uses a polynomial rolling hash with base 31, is directly adapted for linear probing by taking the result modulo m, ensuring compatibility with the table size while leveraging the language's standard hashing convention. For instance, the LinearProbingHashST class computes the initial probe as hash = (key.hashCode() & 0x7fffffff) % m, then applies linear offsets. Practical trade-offs favor simple modular or polynomial hashes for their computational speed in general-purpose linear probing tables, as they require only basic arithmetic and achieve good uniformity without excessive overhead. In contrast, non-cryptographic hashes such as MurmurHash offer superior avalanche properties for better distribution against patterned inputs, making them suitable for high-performance scenarios, but they involve more complex bit manipulations that increase computation time, rendering them overkill for typical non-security-sensitive uses. Cryptographic hashes, such as SHA-256, provide strong uniformity but are significantly slower due to their security-focused design and are rarely used in linear probing unless security is paramount. During rehashing upon table resize, selecting a new size m that is prime helps preserve uniformity, as primes minimize systematic biases for common key patterns, reducing clustering in linear probing. For example, when growing a table of size 100, choosing the next prime after doubling (211 rather than 200) helps previously clustered probes spread more evenly under the same hash function.
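A prime-seeking resize helper of the kind implied above might look like this; trial division is adequate at table-size scale, and the class and method names are illustrative rather than from any library:

```java
// Picks a prime capacity for a resize: double the old size, then next prime.
public class PrimeResize {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }
    static int nextPrimeAtLeast(int n) {
        while (!isPrime(n)) n++;
        return n;
    }
    static int grow(int oldCapacity) {
        return nextPrimeAtLeast(2 * oldCapacity);
    }
    public static void main(String[] args) {
        System.out.println(grow(100)); // 211 (next prime at or above 200)
    }
}
```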

Historical development

Origins and key contributors

Linear probing emerged in the early 1950s as a collision resolution technique within open-addressing schemes for hash tables, developed by a team of IBM researchers including Gene Amdahl, Elaine M. Boehme (later McGraw), Nathaniel Rochester, and Arthur Samuel. Their work was part of an assembler program for the IBM 701, one of the first commercial scientific computers, where efficient symbol-table management required handling key collisions without external storage overflow. This innovation allowed simple, memory-resident lookups by sequentially probing adjacent slots, addressing the limitations of early hashing methods that relied on overflow areas or rehashing. The method gained formal recognition through W. Wesley Peterson's seminal 1957 paper, "Addressing for Random-Access Storage," published in the IBM Journal of Research and Development. In this work, Peterson defined open addressing and provided the first detailed analysis of linear probing's performance for random-access file systems, estimating access times and highlighting its advantages in speed and flexibility over traditional sequential or sorted indexing. Independently, Soviet computer scientist Andrey P. Ershov described a similar linear open addressing approach in 1958 (originally presented in 1957), based on empirical tests for programming language implementations. Linear probing arose amid the transition from punched-card to direct-access storage devices in the 1950s, when applications demanded faster, non-sequential data retrieval for indexing large files without exhaustive scans. Unlike earlier methods that processed records in fixed order, linear probing enabled near-direct record access, reducing I/O operations in emerging drum- and disk-based systems. By the early 1960s, the technique saw adoption in IBM's file organization and database prototypes, which incorporated hashing principles for efficient record retrieval in commercial applications. In the 1970s, refinements to linear probing addressed key operational challenges, particularly the handling of deletions.
Donald Knuth formalized the use of tombstones, special markers for deleted entries, to preserve the integrity of probe sequences during searches and insertions, preventing the disruption of subsequent probes that could occur if slots were simply emptied. This approach ensured that deletions did not break the chain of occupied slots, maintaining the expected performance bounds analyzed in Knuth's work. By the 1980s, linear probing gained practical adoption in programming languages and standard libraries. The hsearch function, standardized in POSIX and implemented in libraries such as the GNU C Library, used linear probing for open-addressing hash tables, providing a simple mechanism for managing key-value pairs in resource-constrained environments. This integration highlighted linear probing's appeal for straightforward implementations without dynamic memory allocation. Linear probing's primary clustering issues, where consecutive probes form long runs of occupied slots, directly inspired related collision resolution methods. Double hashing, which employs a secondary hash function to compute variable probe steps, emerged as an early variant that mitigates clustering by distributing probes more uniformly across the table. As a modern descendant, cuckoo hashing (introduced in 2001) builds on probing concepts by allowing key relocations between multiple hash locations, achieving constant-time worst-case lookups while avoiding the linear scan's degradation. Post-2000 optimizations further evolved linear probing for contemporary hardware. Hopscotch hashing, proposed in 2008, enhances locality by restricting entries to a fixed "neighborhood" around their ideal hash position, reducing cache misses and supporting higher load factors than standard linear probing without extensive rehashing. These advancements addressed lingering inefficiencies in cache-sensitive applications.
Despite these developments, linear probing has lost ground to separate chaining in high-load scenarios, where clustering causes probe lengths to grow rapidly and creates performance bottlenecks beyond load factors of about 0.7. It persists in modern uses such as CPython's dict implementation (as of 3.12), which employs open addressing with a perturbed probing variant valued for its cache efficiency and simplicity, and in embedded systems where pointer-free structures minimize memory overhead.

  19. [19]
    [PDF] Chapter 19 Hashing - CMU School of Computer Science
    Open address hashing using so-called linear probing has an important practical advantage over separate chaining: it causes fewer cache misses since typically ...
  20. [20]
    [PDF] Hash Tables
    Search with Linear Probing. ❑ Consider a hash table A that uses linear probing. ❑ get(k). ▫ We start at cell h(k). ▫ We probe consecutive locations until ...
  21. [21]
    Open Addressing: Handling collision in hashing - Emory CS
    Pseudo code for linear probing: start = HashValue( key ); // Start looking in the Hash location search_i = start; // Start here while ( bucket[search_i] ...
  22. [22]
    [PDF] CS 3137 Class Notes 1 What is Hashing?
    In linear probing, we use an increment of 1 each time. pseudocode for find using linear probing find(key) try = H = hash (key) i=1 /*increment per linear probe ...
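    The pseudocode fragments in entries [21] and [22] can be combined into a small runnable sketch. This is an illustrative implementation, not code from either source; the names (TABLE_SIZE, insert, find) and the fixed table size are assumptions made for the example.

    ```python
    # Minimal open-addressing hash table with linear probing.
    # Illustrative sketch based on the pseudocode in entries [21] and [22];
    # names and the fixed table size are chosen for the example.

    TABLE_SIZE = 11
    table = [None] * TABLE_SIZE  # None marks an empty slot

    def insert(key, value):
        """Probe forward from hash(key), one slot at a time, until a free slot."""
        i = hash(key) % TABLE_SIZE
        for _ in range(TABLE_SIZE):
            if table[i] is None or table[i][0] == key:
                table[i] = (key, value)
                return
            i = (i + 1) % TABLE_SIZE  # linear step of 1, wrapping around
        raise RuntimeError("table is full")

    def find(key):
        """Follow the same probe sequence; an empty slot means the key is absent."""
        i = hash(key) % TABLE_SIZE
        for _ in range(TABLE_SIZE):
            if table[i] is None:
                return None           # empty slot: key was never inserted
            if table[i][0] == key:
                return table[i][1]    # found it
            i = (i + 1) % TABLE_SIZE
        return None                   # probed every slot without success
    ```

    With integer keys the collisions are easy to see: inserting keys 1 and 12 into a table of size 11 sends both to index 1, so 12 lands in the next slot over, and a later find(12) walks the same two-step probe sequence.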
  23. [23]
    Introduction to Algorithms (CLRS), Ch. 11.4: Open Addressing
    Summary of the HASH-INSERT procedure from Chapter 11, Section 11.4 (Open Addressing) of Cormen's *Introduction to Algorithms*, including variations in the pseudocode.
  24. [24]
    19.5 Resizing & Hash Table Performance | CS61B Textbook
    Jul 16, 2024 · This is usually done by doubling the underlying array length. Java's default maximum load factor is 0.75 which provides a good balance between a ...
  25. [25]
    [PDF] Lecture 24 — Hash Tables - CMU School of Computer Science
    Nov 21, 2013 · uniform hashing, the performance of linear probing degrades significantly when the load factor increases: λ. 1/4 1/2 2/3 3/4 9/10 successful.
  26. [26]
    Expected Number of Probes for Linear Probing vs Load Factor λ
    Extracted table relating the load factor λ to the expected number of probes per operation under linear probing.
  27. [27]
    [PDF] Contents 1 Open-address hash tables - People | MIT CSAIL
    6.006 Introduction to Algorithms. Recitation 10. March 10th, 2014. Contents. 1 Open-address hash tables ... Linear probing is the simplest way of ...
  28. [28]
    [PDF] Tabulation Based 5-Universal Hashing and Linear Probing
    The average probe count ranges from 3.23 to 3.26. The much smaller variance for 5- universal hashing is expected, because the result from [11] also limits the ...
  29. [29]
    [PDF] CSE 332 - Washington
    With linear probing, we saw primary clustering (keys hashing near each other). Quadratic Probing fixes this by “jumping”. Unfortunately, we still get ...
  30. [30]
    Hashing Review - andrew.cmu.ed
    Linear probing is easily implemented, but often suffers from a problem known as primary clustering. Although the hash function should uniformly distribute the ...
  31. [31]
    [PDF] CMSC 420: Lecture 11 Hashing - Handling Collisions
    Fig. 3: Linear probing. This phenomenon is called secondary clustering. Primary clustering happens when multiple keys hash to the same location.
  32. [32]
    Linear Probing Revisited: Tombstones Mark the Death of Primary ...
    Jul 2, 2021 · We show that primary clustering is not a foregone conclusion. We demonstrate that small design decisions in how deletions are implemented have dramatic effects.
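    Entries [18] and [32] concern how deletions interact with probe sequences. A minimal sketch of the standard tombstone technique follows; it is not code from either paper, and the sentinel name DELETED and the helper names are assumptions made for the example.

    ```python
    # Deletion with tombstones in a linear-probing table: an illustrative
    # sketch. A DELETED sentinel keeps probe sequences intact; setting the
    # slot back to None would cut off searches for keys stored past it.

    TABLE_SIZE = 11
    DELETED = object()               # tombstone sentinel
    table = [None] * TABLE_SIZE

    def insert(key, value):
        i = hash(key) % TABLE_SIZE
        for _ in range(TABLE_SIZE):
            slot = table[i]
            # Reuse both empty slots and tombstones on insert. (A fuller
            # version would first confirm the key is not stored later in
            # the probe sequence, to avoid duplicates.)
            if slot is None or slot is DELETED or slot[0] == key:
                table[i] = (key, value)
                return
            i = (i + 1) % TABLE_SIZE
        raise RuntimeError("table is full")

    def find(key):
        i = hash(key) % TABLE_SIZE
        for _ in range(TABLE_SIZE):
            slot = table[i]
            if slot is None:
                return None          # truly empty: the search can stop
            if slot is not DELETED and slot[0] == key:
                return slot[1]
            i = (i + 1) % TABLE_SIZE  # step over tombstones and mismatches
        return None

    def delete(key):
        i = hash(key) % TABLE_SIZE
        for _ in range(TABLE_SIZE):
            slot = table[i]
            if slot is None:
                return               # key is not in the table
            if slot is not DELETED and slot[0] == key:
                table[i] = DELETED   # mark, don't empty
                return
            i = (i + 1) % TABLE_SIZE
    ```

    Deleting a key that another key once probed past is the revealing case: with keys 1 and 12 both hashing to index 1 (table size 11), removing key 1 must leave a tombstone so that a later find(12) can still walk through slot 1 to reach slot 2.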
  33. [33]
    [PDF] Algorithms - cs.Princeton
    Nov 3, 2021 · Analysis of linear probing. Proposition. Under uniform hashing assumption, the average # of probes in a linear-probing hash table of size m ...
  34. [34]
    CS202 Lecture notes -- Hashing - UTK-EECS
    Sep 8, 2021 · ... Probing is superior to Linear Probing on this data. Cryptographic Hash Functions and the Avalanche Effect. There are several well-known ...
  35. [35]
    [cs/0612055] Linear Probing with Constant Independence - arXiv
    Dec 8, 2006 · Already Carter and Wegman, in their seminal paper on universal hashing, raised the question of extending their analysis to linear probing.
  36. [36]
    [PDF] CS 310: Hashing Basics and Hash Functions - GMU CS Department
    ▷ Alternative hash functions for strings? Page 15. Polynomial Hash Code Tricks. String uses a polynomial hash code ... this.hash = 31 * this.hash + this.str[i];.
  37. [37]
    [PDF] CMSC 341 Lecture 17 Hashing, Parts 1 & 2 - UMBC
    Apr 24, 2017 · Hash Functions – Strings. ▫ Here is a decent hash for string keys int hashVal = 0; for(char in string):. hashVal = (37 * hashVal + ASCII_of_char.
  38. [38]
    LinearProbingHashST.java - Algorithms, 4th Edition
    Nov 27, 2022 · The {@code LinearProbingHashST} class represents a symbol table of generic key-value pairs. It supports the usual <em>put</em>, <em>get</em>, <em>contains</em> ...
  39. [39]
    [PDF] Hashing
    A good hash function should distribute the keys uniformly into the slots of the table. • Regularity in the key distribution should not affect this uniformity. • ...
  40. [40]
    [PDF] Sorting & Searching § Chapter 6, Searching: Hashing: History
    Up to this time linear probing was the only type of open addressing scheme ... Both linear hashing and extendible hashing are prefer- able to the B-trees ...
  41. [41]
    Addressing for random-access storage - ACM Digital Library
    Addressing for random-access storage. Author: W. W. Peterson. W. W. Peterson. View Profile. Authors Info & Claims. IBM Journal of Research and Development, ...
  42. [42]
    (PDF) The Legendary IBM 1401 Data Processing System
    Aug 6, 2025 · We cover IBM's development of the 1401's basic enabling technologies and trace its origins in business accounting machines.
  43. [43]
    The Art of Computer Programming (TAOCP)
    This PDF includes the complete indexes of Volumes 1, 2, 3, 4A, and 4B, as well as the index to Volume 1 Fascicle 1. Registered owners of the earlier four-volume ...
  44. [44]
    The analysis of double hashing(Extended Abstract)
    In this paper we analyze the performance of a well known algorithm known as double hashing [Knuth]. In this method we probe the hash table along arithmetic ...
  45. [45]
    [PDF] Cuckoo Hashing
    Dec 8, 2003 · Double Hashing. Insertion and lookup are similar to Linear Probing, but instead of searching for the next position one step at a time, a second ...
  46. [46]
    [PDF] Hopscotch Hashing - People | MIT CSAIL
    Linear probing [5] is an open-addressed hash scheme in which items are kept in a contiguous array, each entry of which is a bucket for one item. A new item is ...
  47. [47]
    [PDF] Lecture 26 — Hash Tables - CMU School of Computer Science
    Apr 24, 2012 · Open address hashing using so called linear probing has an important practical advantage over separate chaining: it causes fewer cache misses ...
  48. [48]
    Internals of sets and dicts | Fluent Python, the lizard book
    Python's dict and set are built on top of hash tables. This post explains ... Incrementing the index after a collision is called linear probing. This ...