
Master theorem

The Master theorem is a fundamental tool in the analysis of algorithms for determining the asymptotic complexity of divide-and-conquer recurrences of the form T(n) = a T(n/b) + f(n), where a ≥ 1 represents the number of subproblems, b > 1 is the factor by which the problem size is divided, and f(n) is an asymptotically positive function describing the cost of the divide and combine steps outside the recursive calls. This theorem provides tight Θ bounds by comparing the growth rate of f(n) to n^{log_b a}, the total work contributed by the leaves of the recursion tree. The theorem divides into three cases based on the relationship between f(n) and n^{log_b a}. In Case 1, if f(n) = O(n^{log_b a - ε}) for some constant ε > 0, the recursive calls dominate, yielding T(n) = Θ(n^{log_b a}). Case 2 applies when f(n) = Θ(n^{log_b a} log^k n) for some constant k ≥ 0, resulting in T(n) = Θ(n^{log_b a} log^{k+1} n) due to balanced contributions from recursion and non-recursive work. For Case 3, if f(n) = Ω(n^{log_b a + ε}) for some ε > 0 and the regularity condition a f(n/b) ≤ c f(n) holds for some constant c < 1 and sufficiently large n, then T(n) = Θ(f(n)), as the non-recursive work dominates. Key assumptions include that the subproblem sizes are equal and the division is exact (ignoring floors and ceilings, which do not affect asymptotics), and that f(n) grows polynomially relative to n^{log_b a}; extensions are needed for more general forms. The theorem, popularized in standard algorithms texts, simplifies analysis for algorithms like merge sort (a=2, b=2, f(n)=Θ(n), Case 2 with k=0: T(n)=Θ(n log n)) and enables quick complexity evaluation without unfolding full recursion trees. For more general recurrences with unequal subproblems, extensions like the Akra–Bazzi method apply.

Background

Divide-and-Conquer Algorithms

The divide-and-conquer paradigm is a fundamental algorithm design strategy in computer science, where a problem is recursively decomposed into smaller, non-overlapping subproblems of the same form, each solved independently before their solutions are merged to resolve the original problem. This approach leverages recursion to manage complexity, transforming large instances into manageable ones until reaching a base case that can be solved directly. By breaking down the problem this way, divide-and-conquer algorithms often achieve efficiencies unattainable through exhaustive or linear methods, particularly for problems exhibiting substructure where optimal solutions to subproblems contribute to the overall optimum. The origins of divide-and-conquer trace back to the mid-20th century, with John von Neumann developing the merge sort algorithm in 1945 as one of the earliest practical implementations for electronic computers. This technique gained widespread adoption in the 1960s, coinciding with advances in algorithm analysis and the need for scalable solutions in emerging fields like sorting large datasets, searching sorted structures, and optimizing combinatorial problems. Its influence extended to seminal works demonstrating how the paradigm could yield logarithmic improvements over classical methods, establishing it as a cornerstone for efficient computational strategies. In a typical divide-and-conquer algorithm, the process unfolds in three distinct phases: the divide phase, which partitions the input problem of size n into a subproblems, each of size approximately n/b (where a ≥ 1 and b > 1); the conquer phase, involving recursive application of the algorithm to solve these subproblems; and the combine phase, which merges the subproblem solutions with additional work bounded by some f(n). This structured breakdown ensures that the overall computation remains balanced, with the recursion depth and merging costs determining the algorithm's efficiency. For instance, in sorting applications, the divide phase might split an array into halves, conquer by sorting each half recursively, and combine via a linear merge. Analyzing divide-and-conquer algorithms requires asymptotic notation to characterize their time and space complexities in the limit of large inputs. Big-O (O) notation provides an upper bound on growth rate, such as O(n) describing linear-time operations like scanning a list once; Big-Omega (Ω) offers a lower bound, indicating the minimum resources needed, as in Ω(n log n) for comparison-based sorting; and Big-Theta (Θ) denotes a tight bound where upper and lower limits match asymptotically, ensuring precise predictions of performance. These notations abstract away constants and lower-order terms, focusing on dominant behavior to compare algorithms scalably—prerequisites for evaluating the recurrences arising from recursive calls in divide-and-conquer designs.
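To make the three phases concrete, here is a minimal merge sort sketch in Python (an illustrative implementation, not drawn from any particular source) with the divide, conquer, and combine steps marked:

```python
def merge_sort(a):
    """Sort a list by divide and conquer: T(n) = 2T(n/2) + Theta(n)."""
    if len(a) <= 1:                    # base case: trivially sorted
        return a
    mid = len(a) // 2                  # divide: split into two halves
    left = merge_sort(a[:mid])         # conquer: sort each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0            # combine: linear-time merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```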

Recurrence Relations

In divide-and-conquer algorithms, the running time is commonly modeled using recurrence relations, where T(n) represents the total computational cost for solving a problem of input size n. For the base case, typically when n = 1, T(1) = \Theta(1), accounting for the constant time to handle the smallest problem instance. The general form of such a recurrence for divide-and-conquer paradigms is T(n) = a T\left(\frac{n}{b}\right) + f(n), where a \geq 1 is the number of subproblems created at each recursive level, b > 1 is the factor by which the problem size is reduced in each subproblem, and f(n) denotes the non-recursive cost associated with dividing the problem and combining the solutions from the subproblems. This formulation assumes, for analytical simplicity, that n is a power of b, ensuring exact division without remainders; however, the results extend to arbitrary n by incorporating floors and ceilings in the subproblem sizes, with the asymptotic behavior remaining unchanged. Solving these recurrences yields closed-form asymptotic bounds on T(n), such as \Theta(n^{\log_b a}) or similar, which are crucial for determining the time complexity of the corresponding algorithms.
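Such recurrences can also be unrolled numerically. A small Python sketch, assuming a base value T(1) = 1 and floor division for inexact sizes, shows the merge sort recurrence tracking its closed-form \Theta(n \log n) bound:

```python
import math
from functools import lru_cache

def solve_recurrence(a, b, f, n):
    """Numerically unroll T(n) = a*T(n//b) + f(n), with T(1) = 1 assumed."""
    @lru_cache(maxsize=None)
    def T(m):
        return 1 if m <= 1 else a * T(m // b) + f(m)
    return T(n)

# Merge sort recurrence T(n) = 2T(n/2) + n: the ratio to n*log2(n) approaches 1.
for k in (10, 14, 18):
    n = 2 ** k
    print(n, solve_recurrence(2, 2, lambda m: m, n) / (n * math.log2(n)))
```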

Theorem Statement

Generic Form

The Master theorem provides a method for determining the asymptotic complexity of divide-and-conquer algorithms whose running times satisfy a specific form of recurrence relation. In its generic form, the theorem applies to recurrences of the form T(n) = a T\left(\frac{n}{b}\right) + f(n), where n represents the size of the input problem, T(n) denotes the running time on an input of size n, and the parameters satisfy a \geq 1 and b > 1. The parameter a specifies the number of subproblems created by dividing the original problem, each of size n/b, while b indicates the factor by which the problem size is reduced in each subproblem. The function f(n) captures the cost of the work done outside the recursive calls, such as dividing the problem and combining the solutions from the subproblems. For the theorem to apply, f(n) must be an asymptotically positive function (i.e., f(n) > 0 for sufficiently large n). Under these conditions, the Master theorem establishes that the solution to the recurrence is T(n) = \Theta(g(n)) for some function g(n) determined by the relative growth rates of f(n) and n^{\log_b a}. The theorem was first introduced by Jon Louis Bentley, Dorothea Haken, and James B. Saxe in their 1980 paper, where it was presented as a unifying method for analyzing divide-and-conquer recurrences. It gained widespread adoption and refinement through its inclusion in the textbook Introduction to Algorithms by Cormen, Leiserson, and Rivest (1st ed., 1990).

Three Cases

The Master theorem divides the analysis of divide-and-conquer recurrences T(n) = a T(n/b) + f(n) into three mutually exclusive cases, each providing an asymptotically tight \Theta bound based on the growth rate of the non-recursive work f(n) relative to the critical exponent \log_b a. This exponent \log_b a serves as the pivotal value that compares the total work in the recursive subproblems (which scales as n^{\log_b a}) to the work done outside the recursion at each level; if f(n) grows slower than this, the recursion dominates, whereas faster growth shifts dominance to f(n). In Case 1, the condition f(n) = O(n^{\log_b a - \epsilon}) holds for some constant \epsilon > 0. Here, the non-recursive cost f(n) is asymptotically smaller than the recursive cost, so the total cost is dominated by the leaves of the recursion tree, yielding T(n) = \Theta(n^{\log_b a}). This case applies when the subproblem work overwhelms the combining step. Case 2 occurs when f(n) = \Theta(n^{\log_b a}). In this balanced scenario, the non-recursive cost matches the recursive cost per level, leading to a total cost that accumulates logarithmically over the depth of the recursion: T(n) = \Theta(n^{\log_b a} \log n). The extra logarithmic factor arises from summing identical costs across \Theta(\log n) levels. For Case 3, the condition requires f(n) = \Omega(n^{\log_b a + \epsilon}) for some \epsilon > 0, meaning f(n) grows faster than the recursive cost. Additionally, the regularity condition must hold: a f(n/b) \leq c f(n) for some constant c < 1 and sufficiently large n. This condition verifies that f(n) does not oscillate or decrease too rapidly when scaled by the subproblem size, ensuring the root-level non-recursive work dominates the entire sum without interference from deeper levels, resulting in T(n) = \Theta(f(n)). Without regularity, the bound may fail if f(n) exhibits pathological behavior that amplifies lower-level costs.
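For driving functions of the simple polynomial form f(n) = \Theta(n^d), the case analysis reduces to comparing d with \log_b a. The following Python sketch handles only this polynomial special case (an illustrative classifier, not a full implementation of the theorem):

```python
import math

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) into the three master theorem cases."""
    crit = math.log(a, b)                  # critical exponent log_b(a)
    if math.isclose(d, crit):              # Case 2 (basic form, k = 0)
        return f"Case 2: Theta(n^{d:g} * log n)"
    if d < crit:                           # Case 1: leaves dominate
        return f"Case 1: Theta(n^{crit:g})"
    # d > crit: Case 3; regularity holds automatically for polynomial f,
    # since a*f(n/b) = (a / b**d) * f(n) and a / b**d < 1 when d > log_b(a).
    return f"Case 3: Theta(n^{d:g})"

print(master_case(8, 2, 2))   # Case 1: Theta(n^3)
print(master_case(2, 2, 1))   # Case 2: Theta(n^1 * log n)
print(master_case(1, 2, 1))   # Case 3: Theta(n^1)
```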

Examples

Case 1 Illustrations

Case 1 of the Master theorem applies when the non-recursive work f(n) grows polynomially slower than the recursive work, specifically when f(n) = O(n^{\log_b a - \epsilon}) for some constant \epsilon > 0, leading to a total runtime of T(n) = \Theta(n^{\log_b a}). A concrete numerical example is the recurrence T(n) = 8T(n/2) + 1000n^2, assuming T(1) is constant. Here, a = 8, b = 2, and f(n) = 1000n^2. First, compute \log_b a = \log_2 8 = 3. Then, compare f(n) to n^{\log_b a - \epsilon}: since f(n) = O(n^2) and 2 = 3 - 1, choose \epsilon = 1 > 0, satisfying the condition for Case 1. Thus, T(n) = \Theta(n^3). Another illustrative example is T(n) = 2T(n/2) + 1, where the added work is constant. With a = 2, b = 2, \log_b a = \log_2 2 = 1, and f(n) = 1 = O(n^0). Here, 0 = 1 - 1, so \epsilon = 1 > 0, confirming Case 1 applicability, and T(n) = \Theta(n^1) = \Theta(n). This recurrence might arise in a divide-and-conquer algorithm that splits the problem into two equal subproblems and performs a constant amount of combining work per level. To apply Case 1 step-by-step to a general recurrence T(n) = a T(n/b) + f(n): (1) compute c = \log_b a; (2) determine whether there exists \epsilon > 0 such that f(n) = O(n^{c - \epsilon}), often by expressing f(n) as O(n^d) and checking d < c; (3) if the condition holds, conclude T(n) = \Theta(n^c). This process highlights how the recursive contributions accumulate to dominate the total cost. Intuitively, in Case 1, the recursion tree's total cost is dominated by the leaves, where the subproblem work n^{\log_b a} at the deepest level overwhelms the shallower levels' f(n) contributions, as the root's work grows slower than the branching factor amplifies the recursion. For instance, in the first example, the n^2 work at the root is dwarfed by the 8^{\log_2 n} = n^3 leaf-level costs totaling \Theta(n^3).
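A numerical check of the first example supports this conclusion. In the sketch below (assuming a base value T(1) = 1, which is not specified by the recurrence itself), the ratio T(n)/n^3 settles toward a constant, the signature of a \Theta(n^3) solution:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 8*T(n/2) + 1000*n^2 with T(1) = 1; Case 1 predicts Theta(n^3)."""
    return 1 if n <= 1 else 8 * T(n // 2) + 1000 * n * n

# The ratio T(n)/n^3 approaches a constant (about 1001), confirming Theta(n^3).
for k in (6, 10, 14, 18):
    n = 2 ** k
    print(f"n = 2^{k:>2}: T(n)/n^3 = {T(n) / n**3:.1f}")
```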

Case 2 Illustrations

In Case 2 of the Master theorem, when f(n) = \Theta(n^{\log_b a}), the solution is T(n) = \Theta(n^{\log_b a} \log n). A representative example is the recurrence T(n) = 4T(n/2) + n^2, where a=4, b=2, and thus \log_b a = 2. Here, f(n) = n^2 = \Theta(n^2), satisfying Case 2, so T(n) = \Theta(n^2 \log n). This arises in divide-and-conquer algorithms where the combine step costs as much as the subproblem solves combined. An extension of Case 2 covers when f(n) = \Theta(n^{\log_b a} \log^k n) for k \geq 0, yielding T(n) = \Theta(n^{\log_b a} \log^{k+1} n). For instance, consider T(n) = 2T(n/2) + n \log n, with a=2, b=2, and \log_b a = 1. In this case, f(n) = n \log n = \Theta(n^1 \log^1 n), fitting the extension with k=1 and yielding T(n) = \Theta(n \log^2 n). Intuitively, Case 2 reflects balanced work across the recursion tree's \Theta(\log n) levels, where each level contributes roughly the same cost (modulo logs), leading to accumulation of logarithmic factors in the total runtime.
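The extension can likewise be checked numerically. In the following sketch (assuming a base value T(1) = 1), the ratio T(n)/(n \log^2 n) for T(n) = 2T(n/2) + n \log_2 n tends toward 1/2, consistent with \Theta(n \log^2 n):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2*T(n/2) + n*log2(n) with T(1) = 1; extended Case 2 with k = 1."""
    return 1.0 if n <= 1 else 2 * T(n // 2) + n * math.log2(n)

# The ratio T(n)/(n * log2(n)^2) tends to 1/2, matching Theta(n log^2 n).
for k in (8, 12, 16, 20):
    n = 2 ** k
    print(f"n = 2^{k:>2}: ratio = {T(n) / (n * math.log2(n) ** 2):.3f}")
```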

Case 3 Illustrations

Case 3 of the Master theorem addresses scenarios where the non-recursive cost f(n) dominates the overall runtime, leading to a solution of T(n) = Θ(f(n)), provided f(n) grows polynomially faster than n^{log_b a} and satisfies the regularity condition. A simple illustration is the recurrence T(n) = T(n/2) + n for n > 1, assuming T(1) = Θ(1). Here, a = 1, b = 2, and f(n) = n. The critical exponent is log_b a = log_2 1 = 0. Since f(n) = n = Ω(n^{0 + ε}) for ε = 1 > 0, the growth condition holds. To verify regularity, check if a f(n/b) ≤ c f(n) for some c < 1 and large n: a f(n/2) = 1 \cdot (n/2) = n/2, and n/2 ≤ c n implies c ≥ 1/2; choosing c = 3/4 < 1 satisfies this for all n ≥ 1. Thus, the conditions for Case 3 are met, yielding T(n) = Θ(n). Another example is T(n) = 2 T(n/2) + n^2 for n > 1, with T(1) = Θ(1). In this case, a = 2, b = 2, and f(n) = n^2, with log_b a = log_2 2 = 1. The growth condition requires f(n) = Ω(n^{1 + ε}); taking ε = 1 gives n^2 = Ω(n^2), which holds. For regularity, compute a f(n/b) = 2 (n/2)^2 = 2 \cdot (n^2 / 4) = n^2 / 2. Then, n^2 / 2 ≤ c n^2 implies c ≥ 1/2; select c = 3/4 < 1, which works for all n ≥ 1. Therefore, T(n) = Θ(n^2). Verifying the regularity condition involves a step-by-step substitution and inequality check. Start with the general form: confirm a f(n/b) ≤ c f(n). Substitute f(n/b) directly into the inequality, simplify algebraically (e.g., for polynomial f(n) = n^k, this reduces to a / b^k ≤ c), and solve for a constant c < 1 that holds asymptotically. If f(n) is not purely polynomial, expand or bound it to ensure the inequality persists for sufficiently large n; failure here disqualifies Case 3, potentially shifting to another case or requiring exact solution methods. Intuitively, in Case 3, the recursion tree's total cost is dominated by the root level's f(n), as subsequent levels' contributions diminish geometrically due to regularity. The leaves contribute negligibly compared to the upper levels, making the sum approximate a geometric series bounded by O(f(n)), with the non-recursive work accumulating like Θ(f(n)) overall.
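The regularity check is mechanical enough to script. The small Python helper below (illustrative only; it samples finitely many sizes, so it can refute the condition but not prove it) applies the inequality to both examples above:

```python
def regularity_holds(a, b, f, c, sizes):
    """Check the Case 3 regularity condition a*f(n/b) <= c*f(n) on sample sizes."""
    return all(a * f(n / b) <= c * f(n) for n in sizes)

sizes = range(2, 10_000, 97)
# T(n) = T(n/2) + n:     1*f(n/2) = n/2   <= (3/4)*n    -> holds with c = 3/4
print(regularity_holds(1, 2, lambda n: n, 0.75, sizes))       # True
# T(n) = 2T(n/2) + n^2:  2*f(n/2) = n^2/2 <= (3/4)*n^2  -> holds with c = 3/4
print(regularity_holds(2, 2, lambda n: n * n, 0.75, sizes))   # True
```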

Applications

Sorting Algorithms

The Master theorem finds direct application in analyzing the time complexity of several classic sorting algorithms that follow divide-and-conquer paradigms. Merge sort divides the input array into two halves recursively and merges the sorted halves, leading to the recurrence T(n) = 2T(n/2) + Θ(n), where the linear term captures the merging cost. This recurrence satisfies case 2 of the Master theorem, with a=2, b=2, and f(n) = Θ(n) matching Θ(n^{\log_b a}) since \log_2 2 = 1, yielding T(n) = Θ(n \log n). In heap sort, the initial build-heap phase constructs a max-heap from an unsorted array by calling heapify on each non-leaf node. A naive approximation models this phase with the recurrence T(n) = 2T(n/2) + Θ(n), similar to merge sort, which by case 2 of the Master theorem also gives T(n) = Θ(n \log n). Although a tighter analysis using the varying subtree sizes shows the build phase runs in Θ(n), the Master theorem application provides an upper bound that aligns with the overall sorting complexity. For quicksort, the average-case performance assumes the pivot partitions the array into two subarrays of equal size on expectation, resulting in the recurrence T(n) = 2T(n/2) + Θ(n). Applying case 2 of the Master theorem confirms T(n) = Θ(n \log n) under this balanced partitioning. Notably, quicksort's worst-case time is Θ(n^2) due to unbalanced partitions, which falls outside the Master theorem's scope for this form. The Master theorem streamlines the analysis of these sorting recurrences compared to the recursion-tree method, which requires explicitly summing costs across logarithmic levels—each contributing Θ(n)—to derive the Θ(n \log n) bound manually. By matching the recurrence to the theorem's cases, analysts obtain the asymptotic solution directly, highlighting the divide-and-conquer efficiency in achieving optimal sorting bounds.
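For comparison, a minimal randomized quicksort sketch in Python (illustrative; it uses extra lists rather than in-place partitioning) shows the algorithm whose expected behavior the balanced recurrence T(n) = 2T(n/2) + Θ(n) models:

```python
import random

def quicksort(a):
    """Randomized quicksort; expected splits follow T(n) ~ 2T(n/2) + Theta(n)."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)              # random pivot balances splits in expectation
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```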

Searching Algorithms

Binary search is a fundamental divide-and-conquer algorithm for locating an element in a sorted array by halving the search space at each step. Its time complexity follows the recurrence T(n) = T\left(\frac{n}{2}\right) + \Theta(1), where n is the input size and the constant work involves a single comparison. With parameters a=1, b=2, and f(n) = \Theta(1), the critical exponent \log_b a = 0, so f(n) = \Theta(n^{\log_b a}), placing this in case 2 of the Master theorem and yielding T(n) = \Theta(\log n). This bound reflects the algorithm's efficiency, as the number of recursive calls is proportional to the height of the implicit decision tree, which is logarithmic in n. The logarithmic complexity of binary search extends to searching in balanced binary search trees (BSTs), where self-balancing mechanisms like rotations in AVL trees ensure the tree height remains O(\log n). In such structures, the search path traverses at most \log n levels, with each step performing constant-time node comparisons, mirroring the binary search recurrence and resulting in \Theta(\log n) time overall. This balance prevents degenerate cases where the tree could become linear, ensuring consistent performance for insertions, deletions, and lookups in dynamic datasets. The practical advantage is evident in applications such as database indexing, where O(\log n) searches enable rapid queries on millions of records without linear scans. These applications demonstrate the Master theorem's utility in revealing why logarithmic factors emerge in balanced searching structures, enabling scalable performance as dataset sizes grow exponentially. The \Theta(\log n) outcome for binary search and tree traversals highlights the theorem's role in guiding algorithm design for efficient exploration of search spaces.
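A minimal iterative binary search in Python makes the halving explicit; each loop iteration corresponds to one level of the recurrence T(n) = T(n/2) + \Theta(1):

```python
def binary_search(arr, target):
    """Iterative binary search over a sorted list; Theta(log n) comparisons."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # one comparison per halving step
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1                         # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```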

Limitations

Inadmissible Forms

The Master theorem applies only to recurrences of the form T(n) = a T(n/b) + f(n), where a \geq 1 and b > 1 are constants, subproblems are of equal size, and f(n) is asymptotically positive and polynomial-relative to n^{\log_b a}. Violations of these assumptions render the theorem inapplicable, necessitating alternative solution methods. One common inadmissible form involves variable coefficients, where a depends on n rather than being constant. For instance, in the recurrence T(n) = 2^n T(n/2) + n, the coefficient 2^n grows exponentially with n, violating the constant a requirement. This breaks the theorem's foundational assumption that the number of subproblems remains fixed relative to problem size, preventing the standard case analysis based on \log_b a. Another inadmissible case arises when b is non-constant, leading to uneven or non-fixed subproblem divisions. Consider T(n) = 2 T(\sqrt{n}) + n, where the subproblem size \sqrt{n} implies b = \sqrt{n}, which varies with n and is not a constant greater than 1. Such forms disrupt the balanced recursion tree assumed by the theorem, as the depth and branching no longer follow a uniform geometric pattern. Recurrences with exponential terms in f(n) or other non-polynomial behaviors relative to n^{\log_b a} also fall outside the theorem's scope. For example, T(n) = 2 T(n/2) + n / \log n has f(n) = n / \log n, which is not polynomially comparable to n^{\log_2 2} = n^1 because f(n) / n^{\log_b a} does not behave as \Theta(n^k) for any constant k. The theorem's cases rely on polynomial growth rates for f(n) to determine dominance, and exponential or logarithmic irregularities invalidate this comparison. These examples illustrate failures when core assumptions like constant a and b are breached, as the theorem's proof depends on a regular, balanced recursion that allows summation over logarithmic levels. In such inadmissible scenarios, alternatives like the substitution method—guessing and verifying a form—or recursion trees—unrolling the recurrence to sum costs level by level—are recommended to derive asymptotic bounds.
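The n / \log n example can be probed numerically. The sketch below (assuming a base value T(2) = 1) is consistent with the known \Theta(n \log \log n) solution obtained by recursion-tree or substitution methods, a bound the Master theorem itself cannot produce:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2*T(n/2) + n/log2(n) with T(2) = 1 -- in the master theorem's gap."""
    return 1.0 if n <= 2 else 2 * T(n // 2) + n / math.log2(n)

# Recursion-tree analysis gives Theta(n log log n); the ratio stays bounded.
for k in (8, 16, 32, 64):
    n = 2 ** k
    print(f"n = 2^{k:>2}: T(n)/(n*log2(log2(n))) = "
          f"{T(n) / (n * math.log2(math.log2(n))):.3f}")
```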

Common Pitfalls

One common pitfall when applying the Master theorem to valid recurrences is neglecting the impact of base cases and floor/ceiling functions on the analysis, despite their presence in practical implementations. Practitioners often assume the subproblem sizes must be exact powers of the division factor b, leading to concerns that floors or ceilings (e.g., in T(n) = a T(\lfloor n/b \rfloor) + f(n)) invalidate the theorem's asymptotic bounds. However, these adjustments do not alter the \Theta results, as the theorem holds asymptotically for such forms; for example, the mergesort recurrence T(n) = 2T(\lfloor n/2 \rfloor) + \Theta(n) remains \Theta(n \log n). Another frequent error involves miscomputing \log_b a, the critical exponent for case classification, often due to confusion over the base of the logarithm or incorrect use of natural logs without proper conversion. This mistake can result in selecting the wrong case among the three, yielding an inaccurate bound; for instance, treating \log_b a as \ln a instead of \frac{\ln a}{\ln b} might shift a recurrence from Case 1 to Case 2 erroneously. In Case 3, where f(n) = \Omega(n^{\log_b a + \epsilon}) for some \epsilon > 0, skipping verification of the regularity condition a f(n/b) \leq c f(n) for some c < 1 and sufficiently large n is a prevalent oversight. Without this condition, the bound T(n) = \Theta(f(n)) does not hold, even if the growth comparison suggests Case 3; a counterexample is T(n) = T(n/2) + n (2 - \cos n), where f(n) = \Omega(n^{\log_b a + \epsilon}) (with \epsilon = 1) but the condition fails for values of n near 2\pi k (odd k), preventing the \Theta(f(n)) conclusion. For Case 2 extensions, where f(n) = \Theta(n^{\log_b a} \log^k n) with k \geq 0, overlooking the increment in the logarithmic power leads to underestimating the solution as \Theta(n^{\log_b a} \log n) instead of the correct \Theta(n^{\log_b a} \log^{k+1} n). This error arises when analysts apply the basic Case 2 without recognizing higher-order log terms in f(n). To mitigate these issues, always verify the theorem's assumptions after application, such as through induction on base cases or substitution to confirm the derived bound holds despite floors, ceilings, or boundary conditions.
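The change-of-base pitfall is easy to avoid in practice; in Python, for example:

```python
import math

a, b = 8, 2
wrong = math.log(a)                 # natural log ln(8) ~= 2.079 -- NOT log_b(a)
right = math.log(a) / math.log(b)   # change of base: log_2(8) = 3
builtin = math.log(a, b)            # math.log also accepts an optional base
print(wrong, right, builtin)        # 2.0794... 3.0 3.0 (up to float rounding)
```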

Extensions and Proof

Advanced Variants

The Akra–Bazzi method extends the Master theorem to handle more general divide-and-conquer recurrences of the form T(x) = g(x) + \sum_{i=1}^k a_i T(b_i x + h_i(x)), where a_i > 0, 0 < b_i < 1, g(x) is a polynomial or similar smooth function, and h_i(x) = O(x / (\log x)^2) for sufficiently large x. This generalization accommodates non-constant subproblem sizes and uneven divisions, unlike the standard Master theorem's assumption of equal splits into a subproblems each of size n/b. The solution is given by T(x) = \Theta\left( x^p \left(1 + \int_1^x \frac{g(u)}{u^{p+1}} \, du \right) \right), where p solves \sum_{i=1}^k a_i b_i^p = 1. A corrected and simplified proof of this method appears in Leighton's notes, addressing minor inaccuracies in the original formulation while preserving its applicability. A key extension within the standard Master theorem framework is the generalized Case 2, which applies when f(n) = \Theta(n^{\log_b a} (\log n)^k) for some constant k \geq 0, yielding T(n) = \Theta(n^{\log_b a} (\log n)^{k+1}). This handles logarithmic factors in the non-recursive work, broadening coverage beyond the basic k=0 scenario where T(n) = \Theta(n^{\log_b a} \log n). Further refinements in 2011 extended the theorem to multiterm recurrences and driving functions with lower-order terms, such as d(n) = \Theta(n^\alpha \log^\delta n) where \delta > -1, resulting in solutions like T(n) = \Theta(d(n) \log n) for the dominant term. These variants are particularly useful for analyzing algorithms with uneven subproblem splits, such as the deterministic median-finding algorithm in the select routine, where subproblem sizes vary (e.g., one of size 7n/10 and others smaller), or in parallel computation models with non-uniform divisions. However, both the Akra–Bazzi method and these Master theorem extensions assume that f(n) (or g(x)) grows polynomially, limiting their use for exponential or highly irregular non-recursive costs.
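The exponent p rarely has a closed form, but since \sum_i a_i b_i^p is strictly decreasing in p, it can be found numerically by bisection. A hedged Python sketch (the bracket [-10, 10] is an assumption that suffices for typical parameters):

```python
def akra_bazzi_p(terms, lo=-10.0, hi=10.0, iters=200):
    """Find p with sum(a_i * b_i**p) = 1 by bisection (a_i > 0, 0 < b_i < 1).

    The left-hand side is strictly decreasing in p, so the root is unique
    once the endpoints straddle 1.
    """
    def g(p):
        return sum(a * b ** p for a, b in terms) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) > 0:    # still above 1: the root lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity check -- merge sort, 2*T(n/2): 2*(1/2)^p = 1 gives p = 1.
print(akra_bazzi_p([(2, 0.5)]))            # ~1.0
# Median selection, T(n/5) + T(7n/10): p ~ 0.84.
print(akra_bazzi_p([(1, 0.2), (1, 0.7)]))  # ~0.84
```

For the select recurrence, p ≈ 0.84 < 1 together with g(x) = \Theta(x) makes the Akra–Bazzi integral evaluate to \Theta(x^{1-p}), so T(x) = \Theta(x^p \cdot x^{1-p}) = \Theta(x), recovering the well-known linear bound for deterministic selection.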

Proof Overview

The proof of the Master theorem relies on analyzing the recurrence T(n) = a T(n/b) + f(n), where a \geq 1, b > 1 are constants, and f(n) is a positive function, assuming T(1) = \Theta(1). The intuition is provided by the recursion tree, which unfolds the recurrence into a tree of depth \log_b n, with a^i subproblems at level i (for i = 0 to \log_b n - 1), each contributing f(n/b^i) to the non-recursive cost, and the leaves contributing \Theta(n^{\log_b a}) in total. The total cost T(n) is thus the sum \sum_{i=0}^{\log_b n - 1} a^i f(n/b^i) + \Theta(n^{\log_b a}). This sum is evaluated using properties of geometric series, depending on the growth of f(n) relative to n^{\log_b a}. In Case 1, where f(n) = O(n^{\log_b a - \epsilon}) for some \epsilon > 0, the sum satisfies \sum_{i=0}^{\log_b n - 1} a^i f(n/b^i) \leq K n^{\log_b a - \epsilon} \sum_{i=0}^{\log_b n - 1} (b^\epsilon)^i for some constant K > 0, an increasing geometric series dominated by its last term, so the total non-recursive cost is O(n^{\log_b a - \epsilon} \cdot b^{\epsilon \log_b n}) = O(n^{\log_b a}). Thus, the recursive costs at the leaves dominate, yielding T(n) = \Theta(n^{\log_b a}). This is formalized by induction, typically by strengthening the hypothesis to T(m) \leq C m^{\log_b a} - D m^{\log_b a - \epsilon} for smaller subproblems, so that the f(n) term is absorbed in the inductive step. For Case 2, where f(n) = \Theta(n^{\log_b a}), the costs per level are balanced at \Theta(n^{\log_b a}). The sum becomes \sum_{i=0}^{\log_b n - 1} a^i \cdot \Theta(n^{\log_b a} / b^{i \log_b a}) = \Theta(n^{\log_b a}) \sum_{i=0}^{\log_b n - 1} 1 = \Theta(n^{\log_b a} \log_b n), plus the leaf cost \Theta(n^{\log_b a}), so overall T(n) = \Theta(n^{\log_b a} \log n). Induction confirms this: assuming T(m) = \Theta(m^{\log_b a} \log_b m) for m < n, then T(n) = a \Theta((n/b)^{\log_b a} \log_b(n/b)) + \Theta(n^{\log_b a}) = \Theta(n^{\log_b a} (\log_b n - 1)) + \Theta(n^{\log_b a}) = \Theta(n^{\log_b a} \log n). In Case 3, where f(n) = \Omega(n^{\log_b a + \epsilon}) for \epsilon > 0 and f satisfies the regularity condition a f(n/b) \leq c f(n) for some c < 1 and large n, the non-recursive costs decrease geometrically with depth, so the root level dominates. The sum \sum_{i=0}^{\log_b n - 1} a^i f(n/b^i) \leq f(n) \sum_{i=0}^{\infty} c^i = f(n) / (1 - c) = O(f(n)), while a lower bound similarly yields \Omega(f(n)), so T(n) = \Theta(f(n)). The regularity ensures the geometric bound applies. Induction establishes the bounds: for the upper bound, assuming T(m) \leq k f(m) for m < n and k \geq 1/(1-c), then T(n) \leq a k f(n/b) + f(n) \leq k c f(n) + f(n) \leq k f(n); the lower bound follows analogously.
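The per-level sums at the heart of the proof can be tabulated directly. A short Python sketch lists a^i \cdot f(n/b^i) for a representative recurrence from each case, showing the growing, flat, and geometrically decaying level costs described above:

```python
import math

def level_costs(a, b, f, n):
    """Non-recursive cost a^i * f(n/b^i) at each level i of the recursion tree."""
    depth = round(math.log(n, b))
    return [a ** i * f(n / b ** i) for i in range(depth)]

n = 2 ** 12
print(level_costs(8, 2, lambda m: m ** 2, n)[:4])  # Case 1: grows toward the leaves
print(level_costs(2, 2, lambda m: m, n)[:4])       # Case 2: the same at every level
print(level_costs(1, 2, lambda m: m, n)[:4])       # Case 3: shrinks geometrically
```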
