
Sort-merge join

The sort-merge join is a fundamental algorithm in database management systems for equi-joining two relations based on matching values in specified attributes. It proceeds in two primary phases: first, both input relations are sorted on the join attribute(s), using an external merge sort to handle datasets larger than available memory; second, the sorted relations are scanned sequentially with pointers or cursors, advancing through them in a merge-like process to compare join values and output all matching pairs when equality is found. Developed as part of early relational database research, the sort-merge join was introduced by Michael W. Blasgen and Kapali P. Eswaran in their 1977 paper "Storage and Access in Relational Data Bases", where it was evaluated as one of the most efficient strategies for join operations in the System R prototype. The algorithm became a cornerstone of relational query processing alongside nested-loop and hash joins, particularly valued for its ability to leverage sorted data and produce sorted output. In terms of cost, the sort-merge join requires time proportional to the sorting of both relations plus a linear scan for the merge: the merge phase incurs roughly M + N page I/Os for relations of M and N pages after sorting, while the external sorting itself adds on the order of 2M \log_F M I/Os for a relation of M pages with merge fanout F. It excels in scenarios where input relations are already sorted on the join attribute, the output requires sorting, or when handling inequality joins, but it can suffer from high costs due to duplicate join values or skew, potentially degenerating to a quadratic scan in the worst case. Modern implementations often optimize it for parallel processing and main-memory settings, combining it with techniques like progressive merging to enable early result production during sorting. As of 2025, it remains relevant in big data systems such as Apache Spark for distributed joins and in GPU-accelerated query processing.

Introduction

Definition and Purpose

The sort-merge join, also known as the merge join, is a classic algorithm used in relational database management systems (DBMS) to compute equi-joins between two input relations by first sorting them on the specified join attribute(s) and then merging the sorted sequences to identify matching tuples. This process produces a result set consisting of concatenated tuples from both relations where the join condition holds, effectively a filtered subset of their Cartesian product. The algorithm assumes or enforces sorted order on the join keys, enabling a linear scan during the merge phase to output joined records while preserving the sorted order of the inputs. The primary purpose of the sort-merge join is to facilitate efficient query processing in relational databases, particularly for combining large datasets where input/output (I/O) costs are a dominant factor, such as in disk-resident storage systems. It excels in scenarios where the relations are already partially or fully sorted—for instance, due to clustered indexes or prior query operations—thereby avoiding redundant sorting overhead. While optimized for equi-joins (where the condition is equality on attributes), the algorithm can be adapted to other theta-joins by adjusting the comparison logic during the merge, though it remains most effective for equality-based conditions. This makes it a staple in DBMS implementations for operations such as natural joins and key-based lookups. At its core, the sort-merge join leverages a procedure analogous to the combine step of merge sort, in which two sorted relations R and S are traversed in parallel using pointers or cursors to advance through matching values and skip non-matches efficiently. For inputs R with m tuples and S with n tuples, the output comprises all pairs (r, s) such that r.join_attribute = s.join_attribute, delivered in sorted order to support subsequent operations like further joins or aggregations without additional sorting.
Its design emphasizes sequential access patterns, which minimize seek times in traditional storage hierarchies and contribute to its robustness in environments with limited main memory.
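The parallel traversal described above can be sketched in a few lines of Python; this is an illustrative minimal version, assuming both inputs are already sorted on the join key and that keys are unique (duplicate handling is covered later). The `(key, payload)` tuple layout and the function name are assumptions for the example.

```python
def merge_join_unique(r, s):
    """Merge phase of a sort-merge join for key-sorted inputs with unique keys."""
    out = []
    i = j = 0
    while i < len(r) and j < len(s):
        if r[i][0] < s[j][0]:
            i += 1          # r's key is smaller: it can have no partner in s
        elif r[i][0] > s[j][0]:
            j += 1          # s's key is smaller: advance s
        else:
            # Matching keys: emit the joined tuple, advance both cursors.
            out.append((r[i][0], r[i][1], s[j][1]))
            i += 1
            j += 1
    return out

print(merge_join_unique([(1, "a"), (3, "b")], [(2, "x"), (3, "y")]))
# → [(3, 'b', 'y')]
```

Each input is scanned exactly once, which is the source of the linear merge cost discussed above.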

Historical Development

The sort-merge join algorithm emerged in the 1970s as a key component of early relational database systems, building on Edgar F. Codd's foundational 1970 paper on the relational model, which introduced the join operation as a core primitive for combining relations based on common attributes. This conceptual framework motivated the development of efficient physical implementations, drawing inspiration from external merge-sort techniques that had been established in computing literature since the 1950s for handling large-scale data sorting on limited-memory systems. The algorithm was practically realized during IBM's System R project (1974–1979), a pioneering effort by IBM researchers to prototype a relational database management system using SQL precursors. In this context, the sort-merge join was first detailed in a 1977 paper by Michael W. Blasgen and Kapali P. Eswaran, which described its use for equi-joins on sorted relations within System R's storage and access mechanisms, emphasizing its suitability for disk-based storage. Concurrently, the Ingres project at UC Berkeley in the mid-1970s developed its own relational query processor, supporting high-level languages like QUEL and validating the approach in academic prototypes. By the 1980s, the sort-merge join evolved to support parallel and distributed environments, with adaptations for multiprocessor systems explored in David J. DeWitt's work on parallel relational operations, enabling load-balanced merging across multiple nodes to handle growing data volumes in shared-nothing architectures. Entering the 1990s, it became a standard feature in commercial database management systems, such as Oracle's relational engine and the emerging PostgreSQL (begun as POSTGRES in 1986 and renamed PostgreSQL in 1996), where it was optimized for sorted indexes and query plans. In the 2010s, the algorithm saw renewed adaptations for big data frameworks, including Hadoop's reduce-side joins and Apache Spark's default sort-merge equi-join strategy, incorporating partitioning and fault-tolerant sorting to scale across clusters.

Algorithm Mechanics

Prerequisites and Setup

The sort-merge join algorithm requires that both input relations, denoted as R and S, be sorted in ascending order on the join attribute(s), typically under an equi-join condition such as R.A = S.B. If the relations are not already sorted, an initial sorting phase must precede the join operation. In terms of data structures, the algorithm employs pointers or iterators (often implemented as cursors) to traverse the sorted relations sequentially, enabling efficient linear scanning without additional indexing on the join keys. This setup assumes the relations can be accessed as sorted lists or files, with sufficient memory to buffer at least one tuple (or page) from each input during traversal. For unsorted input data, the prerequisite sorting step incurs additional computational cost, which varies based on dataset size relative to available memory. In-memory sorting algorithms like quicksort suffice for small relations fitting entirely in main memory, but for large datasets exceeding memory capacity, external merge sort is employed to handle disk-based I/O efficiently by creating initial sorted runs and merging them in passes. This external approach ensures scalability, though it increases overall I/O operations. Edge cases, such as duplicate keys in either relation, necessitate careful handling to generate the complete cross-product of matching tuples, ensuring all valid pairs are output without omission. The algorithm presupposes that both relations fit within available storage (memory or disk), though practical implementations may partition data for very large-scale joins.
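The setup phase for large inputs can be sketched as a two-phase external merge sort in Python; this is an illustrative toy (the `mem_tuples` budget and function name are assumptions), counting memory in tuples rather than pages and keeping the "runs" in memory where a real system would spill them to disk.

```python
import heapq

def external_sort(relation, mem_tuples):
    """Toy external merge sort: memory-sized sorted runs, then a k-way merge."""
    # Pass 1: cut the input into chunks no larger than the memory budget
    # and sort each chunk, producing the initial sorted runs.
    runs = [sorted(relation[i:i + mem_tuples])
            for i in range(0, len(relation), mem_tuples)]
    # Merge pass: combine all runs into one sorted stream.
    return list(heapq.merge(*runs))

print(external_sort([5, 3, 8, 1, 9, 2], mem_tuples=2))
# → [1, 2, 3, 5, 8, 9]
```

After both relations have been prepared this way, the merge phase of the join can scan them sequentially as the prerequisites require.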

Step-by-Step Execution

The sort-merge join algorithm executes its core operations during the merge phase, assuming both input relations R and S are already sorted on the join attribute. Initialization places pointers (or cursors) at the first tuple of each sorted relation, setting up a single linear pass through the data. The iteration process proceeds as a sequential scan: the current join values at the heads of R and S are compared. If the value from R is less than that from S, the pointer for R advances to the next tuple, discarding non-matching elements from R. Conversely, if the value from S is smaller, the pointer for S advances. When equality is found, all tuples in R sharing that join value are paired with all tuples in S sharing the same value, typically via nested advancement of pointers to collect and output the cross-product of the matching groups before advancing both outer pointers. This handling of duplicates ensures completeness without revisiting earlier data, leveraging the sorted order. The process terminates when one relation is fully exhausted, at which point—for an inner join—no further output occurs, as any remaining tuples in the other relation cannot match. In variants supporting outer joins, such as left outer join, the remaining tuples from the non-exhausted relation are appended to the result with nulls for the missing side. Overall, this linear scan requires exactly one pass over each relation, achieving O(|R| + |S|) time during the merge after sorting, where |R| and |S| denote the sizes of the relations. For non-equi joins, where the condition involves inequalities (e.g., R.A \theta S.B for \theta \neq =), the merge phase incorporates additional filtering: tuples are compared against the full join predicate during the scan, retaining only those satisfying the inequality while advancing pointers based on the sorted order to prune non-viable candidates efficiently.
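The comparison-driven pointer advancement can be made concrete with a small trace helper; this Python sketch (the function and message strings are illustrative, unique keys assumed for brevity) records which rule fires at each step of the merge.

```python
def trace_merge(r_keys, s_keys):
    """Record each pointer-advance decision of the merge phase."""
    i = j = 0
    steps = []
    while i < len(r_keys) and j < len(s_keys):
        a, b = r_keys[i], s_keys[j]
        if a < b:
            steps.append(f"{a} < {b}: advance R")   # R's value too small
            i += 1
        elif a > b:
            steps.append(f"{a} > {b}: advance S")   # S's value too small
            j += 1
        else:
            steps.append(f"{a} = {b}: output match, advance both")
            i += 1
            j += 1
    return steps

for line in trace_merge([1, 2, 3], [2, 3, 4]):
    print(line)
# 1 < 2: advance R
# 2 = 2: output match, advance both
# 3 = 3: output match, advance both
```

Note how the scan terminates once R is exhausted: key 4 in S is never examined against a partner, matching the termination rule described above.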

Pseudocode Representation

The sort-merge join algorithm can be formally described in pseudocode, assuming the input relations R and S are arrays of tuples sorted in ascending order on their respective join attributes (denoted as a for R and b for S). If the relations are not pre-sorted, an initial sorting phase is required, typically using an external sorting algorithm to handle large datasets that do not fit in memory. The output is a list of joined tuples satisfying the equi-join condition R.a = S.b. This representation focuses on the basic inner join variant, which can be adapted for outer joins (e.g., left or right) by modifying the output handling for non-matching tuples, though such extensions are beyond the core mechanics here. The following pseudocode illustrates the merge phase, which performs a linear scan of the sorted relations to produce the join results efficiently. It handles duplicates by collecting all matching tuples from one relation while advancing through equals in the other.
Sort R on attribute a (if not already sorted)
Sort S on attribute b (if not already sorted)

r ← first tuple of R
s ← first tuple of S
result ← empty list

while R is not exhausted and S is not exhausted do
    if r.a < s.b then
        r ← next tuple in R
    else if r.a > s.b then
        s ← next tuple in S
    else  // r.a == s.b
        temp_S ← empty list  // Collect matching tuples from S
        current_key ← r.a
        while S is not exhausted and s.b == current_key do
            append s to temp_S
            s ← next tuple in S
        end while
        
        while R is not exhausted and r.a == current_key do
            for each t in temp_S do
                append (r joined with t) to result
            end for
            r ← next tuple in R
        end while
    end if
end while

return result
This pseudocode assumes the relations are represented as sequential-access structures (e.g., sorted arrays or files), with operations like "next tuple" advancing iterators efficiently in O(1) time per step. The outer loop ensures exhaustion of both inputs, preventing infinite loops. The inner nested loops (lines starting from "while S is not exhausted...") handle multi-way matches for duplicate join keys by buffering one side's matches in temp_S before pairing them with the other side, avoiding redundant comparisons and ensuring all Cartesian products of matching groups are output. For example, if multiple tuples in R share the same key value, each is joined with all corresponding tuples in S, producing the complete set of equi-join results. The sorting phase, if included, dominates the cost for unsorted inputs but is omitted in scenarios where relations are already sorted on the join attributes.
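As a check of the duplicate-handling logic, the pseudocode above translates almost line for line into Python; this is an illustrative sketch (tuples are `(key, value)` pairs, a layout assumed for the example), not a production implementation.

```python
def sort_merge_join(R, S):
    """Direct translation of the pseudocode: equi-join on the first tuple field."""
    R, S = sorted(R), sorted(S)   # skipped if inputs are pre-sorted
    result = []
    i = j = 0
    while i < len(R) and j < len(S):
        if R[i][0] < S[j][0]:
            i += 1
        elif R[i][0] > S[j][0]:
            j += 1
        else:
            key = R[i][0]
            temp_S = []                         # buffer S's matching group
            while j < len(S) and S[j][0] == key:
                temp_S.append(S[j])
                j += 1
            while i < len(R) and R[i][0] == key:
                for s in temp_S:                # pair with every buffered tuple
                    result.append((key, R[i][1], s[1]))
                i += 1
    return result

print(sort_merge_join([(2, "a"), (2, "b"), (3, "c")], [(2, "x"), (2, "y")]))
# → [(2, 'a', 'x'), (2, 'a', 'y'), (2, 'b', 'x'), (2, 'b', 'y')]
```

With two tuples on each side sharing key 2, the output contains all four pairings, confirming that the buffering step produces the full cross-product of the matching groups.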

Performance Analysis

Time Complexity

The time complexity of the sort-merge join algorithm is dominated by the initial sorting phase, resulting in an overall complexity of O(n \log n + m \log m), where n and m are the sizes of the two input relations R and S, respectively. This assumes a standard comparison-based method, such as merge sort, applied independently to each relation on the join attribute. The sorting phase requires O(n \log n) time for relation R and O(m \log m) time for relation S, as each involves recursively dividing the data and merging sorted sublists. In contrast, the subsequent merge phase is strictly linear, taking O(n + m) time to scan and match tuples from the two sorted streams in a single pass. Several factors influence the practical performance within this asymptotic bound. In the best case, if both relations are already sorted on the join attribute, the algorithm skips sorting entirely and achieves O(n + m) time overall. In the worst case, the presence of duplicate join values increases constant factors during the merge due to generating multiple output tuples per matching group, but the asymptotic complexity remains unchanged. The total time can be expressed as: T = T_{\text{sort}} + T_{\text{merge}} where T_{\text{sort}} = O(n \log n + m \log m) and T_{\text{merge}} = O(n + m + d), with d representing the number of duplicate pairs output during the merge. In external memory settings, where relations exceed available memory, the analysis incorporates I/O costs: when memory suffices to sort each relation in two passes, the total is approximately 3(B_R + B_S) block accesses, corresponding to two passes for sorting each relation and one combined merge-join pass; more generally the cost is O((B_R + B_S) \log_M (B_R + B_S)) I/Os in block-based models, where B_R and B_S are the page (block) counts of R and S, and M is the memory size in blocks.
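The two-pass I/O estimate can be turned into a small worked example; the function below is an illustrative back-of-the-envelope calculator for the 3(B_R + B_S) figure (its name and page counts are assumptions), valid only when memory is large enough for each sort to finish in two passes.

```python
def sort_merge_join_io(pages_r, pages_s):
    """Approximate block I/Os for sort-merge join under the two-pass
    sorting assumption: each page is read and written once during run
    creation (2 I/Os), then read once more during the merge-join pass,
    giving roughly 3 I/Os per page overall."""
    return 3 * (pages_r + pages_s)

# Example: R occupies 1000 pages and S occupies 500 pages.
print(sort_merge_join_io(1000, 500))  # → 4500
```

So for a 1000-page and a 500-page relation, the model predicts about 4,500 block accesses, versus 1,500 for the merge pass alone if both inputs were already sorted.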

Space Complexity

The space complexity of the sort-merge join algorithm primarily depends on whether the operation is performed entirely in memory or requires external storage due to limited RAM. In the in-memory case, the algorithm requires space to store the two sorted input relations of sizes n and m, resulting in O(n + m) overall space for the sorted data. During the merge phase, auxiliary space is minimal, typically O(1) for maintaining pointers or buffers to track the current positions in each sorted list. When the input relations exceed available memory, the algorithm employs external sorting, such as external merge sort, to generate sorted runs that fit within the memory budget. This process involves multiple passes over the data, with the number of passes being O(\log n) in the worst case, using temporary files on disk to store intermediate sorted runs. The total disk space for these temporary files is proportional to the input size, contributing O(n + m) to the overall space requirements, while RAM usage remains bounded by the memory available for buffering runs during merging. Output buffering adds an additional O(k) space, where k is the size of the join result; if k is large, this buffer may spill to disk to avoid memory overflow. Optimizations like in-place merging can reduce auxiliary space during the merge step, but the dominant cost remains O(n + m) for holding the sorted inputs and outputs. In general, the total space can be expressed as: \text{Space} = \text{input\_size} + \text{output\_buffer} + \text{temp\_files (for external)} where input_size is O(n + m), output_buffer is O(k), and temp_files apply only in external scenarios.
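The "multiple passes" claim can be quantified with the standard external merge sort pass formula; the helper below is an illustrative sketch (function name and the buffer-page parameter are assumptions) computing one run-creation pass plus ceil(log_{B-1}(runs)) merge passes for a buffer of B pages.

```python
import math

def external_sort_passes(pages, buffer_pages):
    """Passes of external merge sort: 1 run-creation pass plus enough
    (B-1)-way merge passes to reduce ceil(pages/B) runs to one."""
    runs = math.ceil(pages / buffer_pages)
    merge_passes = math.ceil(math.log(runs, buffer_pages - 1)) if runs > 1 else 0
    return 1 + merge_passes

# A 1000-page relation with a 10-page buffer: 100 initial runs,
# merged 9-at-a-time, needs 3 merge passes on top of run creation.
print(external_sort_passes(1000, 10))  # → 4
print(external_sort_passes(5, 10))    # → 1 (fits in memory: a single pass)
```

Each pass reads and writes the whole relation once, so the temporary-file footprint stays proportional to the input size while RAM stays bounded by the buffer.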

Practical Implementations

Basic Example in Pseudocode

To illustrate the sort-merge join algorithm, consider two small relations R and S that are already sorted on the join attribute (the first component of each tuple, an integer key). Relation R consists of the tuples (1, 'A'), (2, 'B'), (3, 'A'), and relation S consists of (2, 'X'), (3, 'Y'), (4, 'Z'). The goal is to perform an equi-join on the key attribute, producing output tuples that combine matching rows from R and S. Assuming the relations are presorted, the merge phase begins by initializing pointers to the first tuple in each relation (i=1 for R, j=1 for S). The algorithm advances the pointers based on key comparisons: if R.key < S.key, increment i; if R.key > S.key, increment j; otherwise, a match exists, and all tuples with the same key value in both relations are paired to produce output. For this example, the first comparison (1 < 2) advances i to 2. The next (2 == 2) yields the match (2, 'B') with (2, 'X'), producing output ('B', 'X') prefixed by key 2. After producing the match for key 2, i advances to the tuple (3, 'A') and j to (3, 'Y'). The next comparison (3 == 3) matches (3, 'A') with (3, 'Y'), producing ('A', 'Y') prefixed by key 3. Finally, after the match for key 3, R is exhausted and the loop terminates; the remaining tuple (4, 'Z') in S has no counterpart and is never output. The following pseudocode snippet adapts the basic procedure to this example, focusing on the merge phase (sorting is omitted as the input is presorted). It uses 1-based indexing for clarity and outputs the joined tuples as (key, R_value, S_value). To correctly handle duplicates, it collects all matching tuples from both relations for the current key before producing the cross-product.
function sortMergeJoin(R, S):
    result = []
    i = 1  // pointer to R
    j = 1  // pointer to S
    while i <= length(R) and j <= length(S):
        if R[i].key < S[j].key:
            i = i + 1
        else if R[i].key > S[j].key:
            j = j + 1
        else:
            // Match found; collect all duplicates from both sides
            current_key = R[i].key
            // Collect from R
            r_group = []
            while i <= length(R) and R[i].key == current_key:
                append R[i] to r_group
                i = i + 1
            // Collect from S
            s_group = []
            while j <= length(S) and S[j].key == current_key:
                append S[j] to s_group
                j = j + 1
            // Produce cross-product
            for each r in r_group:
                for each s in s_group:
                    append (current_key, r.value, s.value) to result
    return result
Executing this on the sample data yields the joined result [(2, 'B', 'X'), (3, 'A', 'Y')], verifying the matches for keys 2 and 3 while skipping non-matches (key 1 in R and key 4 in S). This demonstrates how the algorithm efficiently scans each relation once, producing only the relevant pairs without exhaustive comparisons.

Sample Implementation in C#

A sample implementation of the sort-merge join in C# utilizes List<T> collections for the input relations, assuming they are pre-sorted by the join attribute. This approach integrates with .NET by returning an IEnumerable<JoinedTuple> for the output, enabling straightforward consumption and compatibility with other framework components such as LINQ pipelines if desired. The code employs while loops to group and handle duplicate keys, producing the full cross-product of matching tuples for an inner join. Basic error handling is included via null checks with ArgumentNullException. This follows the standard sort-merge join procedure outlined in database query execution literature. The example uses simple tuple classes representing two relations: a left relation with a key and value (e.g., employee IDs and names) and a right relation with a key and value (e.g., department IDs and names). Sample data includes duplicates on the join key to demonstrate handling.
using System;
using System.Collections.Generic;

public class LeftTuple
{
    public int Key { get; set; }
    public string Value { get; set; }
}

public class RightTuple
{
    public int Key { get; set; }
    public string Value { get; set; }
}

public class JoinedTuple
{
    public int Key { get; set; }
    public string LeftValue { get; set; }
    public string RightValue { get; set; }
}

public static class SortMergeJoinExample
{
    public static IEnumerable<JoinedTuple> PerformJoin(List<LeftTuple> left, List<RightTuple> right)
    {
        if (left == null)
            throw new ArgumentNullException(nameof(left));
        if (right == null)
            throw new ArgumentNullException(nameof(right));

        var result = new List<JoinedTuple>();
        int i = 0;
        int j = 0;

        while (i < left.Count && j < right.Count)
        {
            int comparison = left[i].Key.CompareTo(right[j].Key);
            if (comparison < 0)
            {
                i++;
            }
            else if (comparison > 0)
            {
                j++;
            }
            else
            {
                // Keys match; collect all duplicates from left
                int currentKey = left[i].Key;
                var leftMatches = new List<LeftTuple>();
                while (i < left.Count && left[i].Key == currentKey)
                {
                    leftMatches.Add(left[i]);
                    i++;
                }

                // Collect all duplicates from right
                var rightMatches = new List<RightTuple>();
                while (j < right.Count && right[j].Key == currentKey)
                {
                    rightMatches.Add(right[j]);
                    j++;
                }

                // Produce cross-product for matches
                foreach (var l in leftMatches)
                {
                    foreach (var r in rightMatches)
                    {
                        result.Add(new JoinedTuple
                        {
                            Key = currentKey,
                            LeftValue = l.Value,
                            RightValue = r.Value
                        });
                    }
                }
            }
        }

        return result;
    }
}

class Program
{
    static void Main()
    {
        // Sample pre-sorted left relation (e.g., employees)
        var left = new List<LeftTuple>
        {
            new LeftTuple { Key = 1, Value = "Alice" },
            new LeftTuple { Key = 2, Value = "Bob" },
            new LeftTuple { Key = 2, Value = "Carol" }, // Duplicate key
            new LeftTuple { Key = 3, Value = "David" }
        };

        // Sample pre-sorted right relation (e.g., departments)
        var right = new List<RightTuple>
        {
            new RightTuple { Key = 1, Value = "HR" },
            new RightTuple { Key = 2, Value = "Engineering" },
            new RightTuple { Key = 4, Value = "Sales" }
        };

        var joined = SortMergeJoinExample.PerformJoin(left, right);

        Console.WriteLine("Joined results:");
        foreach (var join in joined)
        {
            Console.WriteLine($"Key: {join.Key}, Left: {join.LeftValue}, Right: {join.RightValue}");
        }
        // Output:
        // Key: 1, Left: Alice, Right: HR
        // Key: 2, Left: Bob, Right: Engineering
        // Key: 2, Left: Carol, Right: Engineering
    }
}

Advantages and Limitations

Key Advantages

The sort-merge join exhibits high efficiency when the input relations are already sorted on the join keys, as the merge phase requires only a single linear pass through each relation, achieving a time complexity of O(M + N) where M and N are the sizes of the relations. This makes it particularly suitable for queries involving ORDER BY clauses on join attributes, where the sorting step can be reused or avoided altogether if clustered indexes are present. A key strength of the sort-merge join is its pipelinability, allowing the merge process to stream output tuples incrementally without fully materializing the sorted inputs in memory, which reduces latency in multi-operator query plans. This streaming capability is especially beneficial in pipelined execution environments, enabling early result delivery even for large datasets. The algorithm maintains stability by preserving the relative order of tuples from the input relations in the output, leveraging the stable nature of the merge operation, which is advantageous for subsequent sorting or grouping operations that depend on consistent ordering. This order preservation ensures predictable behavior in query results without additional post-processing. Sort-merge join scales effectively to large datasets through integration with external sorting techniques, such as multi-way merges using disk-based runs, making it a staple in traditional database management systems handling terabyte-scale data. Post-sorting, the merge phase incurs low CPU overhead, primarily involving sequential comparisons and I/O operations, which aligns well with I/O-bound database environments.
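The pipelinability advantage can be illustrated with a Python generator; this is an illustrative sketch (unique keys assumed, names hypothetical) in which the first joined tuple is available before the scan of either input has completed.

```python
def merge_join_stream(r, s):
    """Streaming merge phase: yield each joined tuple as soon as it is
    found, instead of materializing the full result list."""
    i = j = 0
    while i < len(r) and j < len(s):
        if r[i][0] < s[j][0]:
            i += 1
        elif r[i][0] > s[j][0]:
            j += 1
        else:
            yield (r[i][0], r[i][1], s[j][1])
            i += 1
            j += 1

gen = merge_join_stream([(1, "a"), (2, "b")], [(2, "x"), (3, "y")])
print(next(gen))  # first result delivered before the inputs are exhausted
# → (2, 'b', 'x')
```

A downstream operator in a query plan could consume tuples from such a generator one at a time, which is exactly the early-delivery behavior described above.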

Primary Limitations

The sort-merge join algorithm incurs a significant upfront cost due to the sorting phase, which can dominate execution time when input relations are unsorted, making it inefficient for small datasets or scenarios where indexed access to data is feasible. This overhead arises from the need to perform external merge sort on large relations, often requiring multiple passes that increase computational expense compared to algorithms without an initial sorting step. In many-to-many join scenarios with numerous duplicates on the join key, the merge phase can generate a quadratic number of output tuples in the worst case, where all tuples from both relations match, leading to time and memory demands proportional to the output size, which can reach M × N. This occurs because the algorithm produces the cross-product of matching tuples from equal groups identified during the linear scan, amplifying costs when the output explodes. For non-equi joins, such as inequality or band conditions, the sort-merge join requires adaptations like maintaining a window of candidate tuples and frequent pointer backups during the merge, which introduce additional filtering overhead and reduce efficiency by necessitating rescans of portions of the inner relation. This contrasts with its optimized behavior on equi-joins, as inequality predicates lead to more complex window management and potential I/O spikes. When relations exceed available memory, external sorting imposes high I/O pressure through repeated reads and writes of run files across multiple merge passes, with costs scaling as O(B_R \log_M B_R + B_S \log_M B_S) for the sorting phases, where B denotes the number of pages per relation and M the number of available buffer pages. This memory constraint is particularly limiting for memory-constrained systems, as it demands careful allocation to minimize disk accesses during run formation and merging. The algorithm performs poorly under low selectivity, where few tuples match, because it mandates full scans and sorting of both relations regardless of output size, preventing early termination and incurring unnecessary processing for non-matching data.
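The many-to-many blow-up is easy to demonstrate; in this hypothetical worst case (the key value 7 and relation sizes are arbitrary choices for the example), every tuple on both sides shares one join key, so any correct join must emit the full M × N cross-product.

```python
# Worst-case duplicate skew: a single join key shared by every tuple.
M, N = 4, 5
R = [(7, f"r{i}") for i in range(M)]
S = [(7, f"s{j}") for j in range(N)]

# Any equi-join (sort-merge included) must output every pairing.
matches = [(r[1], s[1]) for r in R for s in S if r[0] == s[0]]
print(len(matches))  # → 20, i.e. M * N output tuples
```

No algorithm can avoid this cost, since the output itself has size M × N; the limitation is that sort-merge's merge phase degenerates to a nested scan over the duplicate groups.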

Comparisons and Variants

Comparison to Nested-Loop Join

The nested-loop join is a fundamental algorithm in database systems that executes a brute-force comparison of every tuple in the outer relation against every tuple in the inner relation to identify matches on the join attribute, yielding a time complexity of O(|R| \times |S|), where |R| and |S| denote the cardinalities of the respective relations. This approach requires no preprocessing, making it straightforward to implement, but its quadratic scaling renders it inefficient for large relations unless one relation is significantly smaller or indexed. In contrast, the sort-merge join preprocesses both relations by sorting them on the join attribute(s) before performing a linear merge pass to produce the output, achieving an overall time complexity of O((|R| + |S|) \log (|R| + |S|)). This structure positions sort-merge as superior for equi-joins on sizable, unsorted datasets, where the logarithmic sorting cost is offset by the efficient merge phase, unlike the nested-loop's exhaustive pairwise checks that escalate rapidly with input size. However, sort-merge incurs upfront sorting overhead, which can be prohibitive if the data is already sorted or if the join involves small relations fitting in memory. Trade-offs between the two highlight their complementary roles: the nested-loop join's simplicity avoids sorting costs and supports early termination in low-selectivity scenarios, but it demands substantial I/O and memory for large inputs due to repeated full scans of the inner relation. Sort-merge, while more scalable, requires additional space for sorting and is less adaptable to indexed access or tiny inner relations. A blocked nested-loop variant addresses some I/O inefficiencies by partitioning the outer relation into memory-resident blocks and scanning the inner relation once per block, reducing the number of inner scans from |R| to approximately |R| / B (where B is the block size in pages), yet it retains quadratic worst-case complexity.
Overall, nested-loop suits cases with a minuscule inner relation or index availability for selective probes, whereas sort-merge dominates bulk joins on comparable large relations.
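The gap between the two strategies can be made concrete by counting key comparisons; the helpers below are an illustrative sketch (names and data are assumptions, unique sorted keys assumed for the merge counter).

```python
def nested_loop_comparisons(r, s):
    """Brute-force nested loops compare every pair: |R| * |S| checks."""
    return len(r) * len(s)

def merge_comparisons(r, s):
    """Count comparisons made by a merge over two sorted key lists;
    each step advances at least one pointer, so at most |R| + |S|."""
    i = j = count = 0
    while i < len(r) and j < len(s):
        count += 1
        if r[i] < s[j]:
            i += 1
        elif r[i] > s[j]:
            j += 1
        else:
            i += 1
            j += 1
    return count

r = list(range(0, 1000, 2))   # 500 sorted keys
s = list(range(0, 1000, 3))   # 334 sorted keys
print(nested_loop_comparisons(r, s))  # → 167000
print(merge_comparisons(r, s))        # bounded by |r| + |s| = 834
```

Even before accounting for sorting costs, the merge performs two orders of magnitude fewer comparisons on these inputs, which is the scaling gap the comparison above describes.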

Comparison to Hash Join

The hash join algorithm, introduced in early relational database systems, operates by building a hash table on the smaller input relation (the build phase) and then probing it with tuples from the larger relation (the probe phase), enabling constant-time lookups assuming a good hash function and minimal collisions. This results in an average time complexity of O(n + m), where n and m are the sizes of the two relations, making it particularly efficient for equi-joins on unsorted data. In contrast to the sort-merge join, which requires sorting both relations before a linear merge, the hash join avoids upfront sorting costs, performing better in scenarios with unsorted inputs, ample memory for the hash table, and high-selectivity equi-joins where collisions are rare. However, sort-merge joins preserve the sorted order of the output and naturally support inequality or band joins by scanning sorted streams, whereas hash joins are limited to equality conditions and may disrupt ordering unless post-sorting is applied. Hash joins excel in in-memory environments with balanced relation sizes and low data skew, often achieving throughputs exceeding 100 million tuples per second on modern multi-core processors, while sort-merge joins are preferable for disk-based operations, large datasets requiring sorted outputs, or when sorting can be reused across multiple query operations. On the trade-off side, hash joins risk performance degradation from hash collisions, bucket overflow, or skew in key distributions, potentially leading to quadratic behavior in worst cases, whereas sort-merge joins incur a reliable but higher initial sorting overhead of O((n + m) log(n + m)) yet provide stable, predictable performance without reliance on hash quality. Query optimizers in modern database systems, such as those in SQL Server and PostgreSQL, often employ cost-based selection strategies, choosing sort-merge joins when input data is pre-sorted or when subsequent operations benefit from ordered results, while favoring hash joins for purely in-memory, equi-join dominated workloads to minimize overall cost.
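For contrast with the merge-based approach, the build-and-probe structure of a hash join can be sketched in a few lines of Python; this is an illustrative toy (tuple layout and names are assumptions), not a database-grade implementation.

```python
from collections import defaultdict

def hash_join(build, probe):
    """Toy hash join: build a hash table on one input, probe with the other.
    No sorting required; output order follows the probe side."""
    table = defaultdict(list)
    for key, val in build:              # build phase
        table[key].append(val)
    return [(key, bval, pval)
            for key, pval in probe      # probe phase: O(1) expected lookup
            for bval in table.get(key, [])]

print(hash_join([(2, "x"), (3, "y")], [(1, "a"), (3, "b")]))
# → [(3, 'y', 'b')]
```

Note what is lost relative to sort-merge: the result order is dictated by the probe input rather than by the join key, and the equality-based lookup cannot express inequality predicates.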

Common Optimizations

One common optimization for the sort-merge join involves hybrid approaches that combine hashing for partitioning with subsequent sorting and merging, particularly in distributed environments. This technique, often referred to as a partitioned sort-merge join, uses hash functions to divide relations into smaller buckets across nodes, enabling parallel local sorting within each partition before performing the merge phase. Such partitioning reduces data movement and communication overhead, making it suitable for large-scale systems like MapReduce, where the standard repartition join strategy partitions data via hashing followed by a sort-merge. In parallel processing contexts, sort-merge joins can be distributed across multiple nodes or cores to enhance scalability. Techniques such as range partitioning co-locate matching keys through a distributed merge, addressing skew with divide-and-conquer strategies to achieve near-linear speedup on multiprocessor systems while minimizing inter-node data transfer. If sorted indexes exist on the join keys, the sort-merge join can exploit their inherent order to skip the explicit sorting phase entirely, reading tuples directly in sorted sequence via index scans. This optimization is particularly effective when the query planner selects index scans for both inputs, avoiding the I/O and CPU costs of sorting unsorted relations and leveraging the index structure for efficient access to join candidates. For joining more than two relations (k-way joins), the sort-merge algorithm extends to a multi-way merge using priority queues, such as min-heaps, to efficiently combine multiple sorted inputs. Each heap entry holds the current smallest tuple from one input stream along with its source; repeatedly extracting the global minimum advances the corresponding stream, reducing the cost of selecting the next tuple from O(k) per step under naive scanning to O(log k) with the heap, for O(n \log k) total over n tuples. This is commonly applied in external sorting extensions for joins involving multiple tables.
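The priority-queue extension can be sketched with an explicit min-heap; this illustrative Python version (names are assumptions) stores each heap entry as the current value plus the index of its source run, exactly as described above.

```python
import heapq

def kway_merge(runs):
    """Merge k sorted runs with a min-heap; each entry is
    (current value, source run index, position within that run)."""
    heap = [(run[0], idx, 0) for idx, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, run_idx, pos = heapq.heappop(heap)   # global minimum: O(log k)
        out.append(val)
        if pos + 1 < len(runs[run_idx]):          # advance the source stream
            heapq.heappush(heap, (runs[run_idx][pos + 1], run_idx, pos + 1))
    return out

print(kway_merge([[1, 4], [2, 5], [3]]))  # → [1, 2, 3, 4, 5]
```

The same structure serves both the final pass of external merge sort and multi-input merge joins, since each extracted minimum identifies which sorted stream to advance next.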
In practical database systems like PostgreSQL, the sort-merge join (implemented as merge join) handles duplicate join keys during the merge phase with a mark-and-restore mechanism: the position of the first matching inner tuple is marked, and the inner relation's pointer is restored to that mark only when the next outer tuple repeats the same key, avoiding a full rescan of the inner input. Cache-aware blocking further optimizes sort-merge joins on multi-core systems by processing data in chunks sized to fit CPU caches, partitioning runs to exploit locality during sorting and merging. This involves phase-separated sorting (e.g., in-register, in-cache, and out-of-cache phases) and blocking the merge to align with cache lines, significantly reducing cache misses and improving throughput compared to naive implementations; experiments show cache-conscious variants outperforming standard sort-merge by up to 2-3x on modern hardware. Recent advancements include GPU-based sort-merge joins that leverage massive parallelism on graphics hardware for out-of-core data, achieving high throughput in distributed systems.
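The inner-pointer reset for duplicate keys can be sketched as mark-and-restore over two sorted key lists. This is an illustrative in-memory sketch, not PostgreSQL's buffered implementation, and the function name is chosen for this example:

```python
def merge_join(outer, inner):
    """Merge-join two sorted lists of keys, returning matching pairs."""
    out, i, j = [], 0, 0
    while i < len(outer) and j < len(inner):
        if outer[i] < inner[j]:
            i += 1
        elif outer[i] > inner[j]:
            j += 1
        else:
            mark = j                      # mark the start of the inner run
            while j < len(inner) and inner[j] == outer[i]:
                out.append((outer[i], inner[j]))
                j += 1
            i += 1
            # Restore: rescan the marked inner run only if the next outer
            # tuple repeats the key; otherwise keep the advanced position.
            if i < len(outer) and outer[i] == inner[mark]:
                j = mark
    return out
```

Because the restore only rewinds to the start of the current duplicate run, a key appearing a times in the outer input and b times in the inner input costs a rescans of b tuples, which is why heavy duplication can push the merge phase toward quadratic behavior.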
