Ternary search
Ternary search is a divide-and-conquer optimization algorithm designed to locate the minimum or maximum of a unimodal function—a function that strictly increases to a single peak (or decreases to a single trough) and then strictly decreases (or increases) thereafter—over a continuous or discrete interval by repeatedly trisecting the search space and eliminating the subinterval that cannot contain the extremum.[1][2]
In the standard implementation for continuous domains, the algorithm selects two division points, typically at one-third and two-thirds of the current interval [l, r], evaluates the function at these points m_1 and m_2, and compares their values: if f(m_1) < f(m_2), the maximum lies in [m_1, r] (discarding [l, m_1]); otherwise, it lies in [l, m_2] (discarding [m_2, r]).[2] This process iterates until the interval shrinks below a specified precision threshold, such as 10^{-9}, ensuring convergence to the optimum.[2] For discrete cases, such as integer arguments, the search continues until the interval length is at most 2, after which the endpoints and midpoint are directly evaluated to identify the extremum.[2]
The algorithm's time complexity is O(\log_{3/2} ( (b-a)/\varepsilon )) iterations for a continuous interval [a, b] with error tolerance \varepsilon, assuming constant-time function evaluations, as each step reduces the search space to approximately 2/3 of its previous size.[1] This makes it asymptotically comparable to binary search but applicable to a broader class of functions, including those with cusps, provided unimodality holds.[1]
Ternary search finds frequent use in competitive programming and numerical optimization problems where the objective is unimodal but lacks a closed-form derivative, such as peak-finding in one-dimensional arrays or parameter tuning in algorithms.[2] It contrasts with the golden-section search, which uses irrational ratios for slightly fewer evaluations in the limit, though ternary search is simpler to implement.[2] A variant exists for sorted arrays, dividing the array into three parts to locate a target element, but this is generally less efficient than binary search due to the higher comparison cost relative to space reduction.[3]
Fundamentals
Unimodal Functions
A unimodal function is defined as a function that possesses exactly one local extremum (either a maximum or a minimum) within its domain, with the function being strictly monotonic—increasing or decreasing—on either side of this extremum.[4] Specifically, for a maximum, the function strictly increases up to the peak and strictly decreases thereafter; the case for a minimum is symmetric, with a strict decrease followed by an increase.[2] This single-mode behavior distinguishes unimodal functions from multimodal ones, which may have multiple local extrema.[5]
Key properties of unimodal functions include their behavior on closed intervals, where the extremum is unique and the function maintains strict monotonicity before and after it, often assuming continuity to ensure smooth transitions and reliable evaluation points.[4] Continuity is typically implied in optimization contexts to allow for precise interval reductions without discontinuities disrupting the monotonic segments.[2] A classic example is the quadratic function f(x) = -x^2 + 4x on the interval [0, 5], which forms an inverted parabola with a maximum at x=2, strictly increasing from x=0 to x=2 and strictly decreasing afterward.[6]
Unimodality is essential for search algorithms like ternary search because it guarantees that portions of the search interval can be discarded after evaluations, without the risk of overlooking the global extremum, as the single peak or valley ensures no other local optima exist.[5] Visually, a unimodal function graphs as a single prominent peak (for maxima) or valley (for minima), with the curve rising smoothly to the extremum and falling symmetrically or asymmetrically on the other side, forming a clear "hill" or "valley" shape over the domain.[4]
Search Space
In ternary search, the search space is defined as a closed interval [l, r] where l < r, and the unimodal function f is evaluated at points within this interval to locate its extremum.[2] The initial interval is chosen to encompass the domain expected to contain the extremum, ensuring the function's unimodal property holds throughout.[7]
The function is assumed to be defined and evaluable at every point in the interval, with no discontinuities that would invalidate the unimodal assumption of a single peak or trough.[8] For instance, to find the maximum of f(x) = \sin x, the search space can be initialized as [0, \pi], where l = 0 and r = \pi, leveraging the function's increase to \pi/2 followed by a decrease.[9]
Boundary conditions dictate the search's conclusion: if l = r, the extremum is precisely at that point in discrete or exact scenarios.[2] In continuous settings, the process terminates when the interval shrinks to a negligible size, such as r - l \leq \epsilon for a small tolerance \epsilon > 0, yielding an approximate solution within the desired precision.[8]
Algorithm
Recursive Implementation
The recursive implementation of ternary search applies to finding the maximum (or minimum, by negating the function) of a unimodal function over a continuous interval [left, right], leveraging the property that the function increases to a single peak and then decreases. This approach naturally translates the divide-and-conquer strategy into recursive calls, each narrowing the search space by discarding one-third of the interval based on evaluations at two interior points.
The core steps involve selecting division points m_1 = \text{left} + \frac{\text{right} - \text{left}}{3} and m_2 = \text{right} - \frac{\text{right} - \text{left}}{3} to split the interval into three roughly equal parts, then evaluating the function at these points. The decision logic proceeds as follows: if f(m_1) < f(m_2), the maximum lies in the right two-thirds, so recurse on [m_1, right]; otherwise (including f(m_1) \geq f(m_2)), the maximum lies in the left two-thirds, so recurse on [left, m_2].[2] This ensures safe discards under the unimodal assumption. For continuous domains with floating-point arithmetic, exact equality is rare, so no separate handling is typically needed.
The following pseudocode illustrates the recursive structure for finding the point of maximum (returning the x-value; modify to return f(x) if needed), assuming floating-point arithmetic:
function ternary_search(left, right, epsilon):
    if right - left < epsilon:
        return (left + right) / 2
    m1 = left + (right - left) / 3
    m2 = right - (right - left) / 3
    if f(m1) < f(m2):
        return ternary_search(m1, right, epsilon)
    else:
        return ternary_search(left, m2, epsilon)
Termination occurs when the interval width falls below a small tolerance epsilon (e.g., 1e-9), at which point the midpoint approximates the maximum location; for discrete domains, the base case checks if left >= right - 1 and evaluates directly.
Consider an example trace for the unimodal function f(x) = -x^2 + 4x (a downward parabola with maximum at x=2) over the initial interval [0, 6], using epsilon = 1e-9; only the first few reductions are shown:
- Start: [0, 6]; m1=2, m2=4; f(2)=4 > f(4)=0, so recurse on [0, 4].
- Next: [0, 4]; m1≈1.333, m2≈2.667; f(1.333)≈3.556 and f(2.667)≈3.556 are equal, so recurse on [0, 2.667] (following the else branch).
- Next: [0, 2.667]; m1≈0.889, m2≈1.778; f(0.889)≈2.765 < f(1.778)≈3.951, so recurse on [0.889, 2.667].
Subsequent recursions continue narrowing around x=2 until the interval is smaller than epsilon, converging to the true maximum.[2]
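The recursive pseudocode above translates directly into a short runnable Python sketch (the function name, epsilon default, and test function are illustrative, not prescribed by the pseudocode):

```python
def ternary_search(f, left, right, epsilon=1e-9):
    """Recursively approximate the maximizer of a unimodal f on [left, right]."""
    if right - left < epsilon:
        return (left + right) / 2  # interval is small enough; return its midpoint
    m1 = left + (right - left) / 3
    m2 = right - (right - left) / 3
    if f(m1) < f(m2):
        return ternary_search(f, m1, right, epsilon)  # maximum cannot lie in [left, m1)
    return ternary_search(f, left, m2, epsilon)       # maximum cannot lie in (m2, right]

# The traced example: f(x) = -x^2 + 4x on [0, 6], maximum at x = 2.
print(ternary_search(lambda x: -x**2 + 4*x, 0.0, 6.0))  # approximately 2.0
```

The recursion depth here is only about \log_{3/2}(6/10^{-9}) ≈ 56 calls, comfortably within Python's default recursion limit.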
Iterative Implementation
The iterative implementation of ternary search utilizes a loop-based approach to divide and narrow the search interval for a unimodal function until the desired precision is achieved. This method initializes the left and right bounds of the interval and repeatedly computes two division points within the current interval, evaluates the function at those points, and updates the bounds based on the comparison to discard the portion least likely to contain the extremum.[2]
The loop typically continues while the interval length (right - left) exceeds a small epsilon value, such as 10^{-9}, ensuring convergence to an approximate location of the minimum or maximum. The division points m_1 and m_2 are calculated as m_1 = \text{left} + \frac{\text{right} - \text{left}}{3} and m_2 = \text{right} - \frac{\text{right} - \text{left}}{3}, mirroring the points used in the recursive variant.[2]
The following pseudocode illustrates the iterative ternary search for finding the minimum of a unimodal function f:
function iterative_ternary_search(left, right, epsilon):
    while right - left > epsilon:
        m1 = left + (right - left) / 3
        m2 = right - (right - left) / 3
        if f(m1) < f(m2):
            right = m2
        else:
            left = m1
    return left  // approximate minimizer; f(left) for function value
In continuous domains, exact equality f(m_1) = f(m_2) is rare due to floating-point precision, so it is handled by the else branch, narrowing to [m_1, right]. This approach avoids the risk of stack overflow associated with deep recursion in the recursive implementation, making it preferable for searches over extensive intervals or when high precision requires many iterations.[2] It is particularly suitable for large search domains where recursion depth could exceed typical stack limits.[10]
To illustrate, consider finding the minimum of the unimodal function f(x) = (x - 2)^2 on the interval [0, 4], where the exact minimum is at x = 2 with f(2) = 0. Assume \epsilon = 0.1 for brevity in tracing iterations. Due to floating-point arithmetic, values are approximately equal, but we follow the code logic.
- Iteration 1: left = 0, right = 4; m_1 \approx 1.333, f(m_1) \approx 0.444; m_2 \approx 2.667, f(m_2) \approx 0.444 (approximately equal, so else: left = 1.333, right = 4). Now interval [1.333, 4], width ≈2.667.
- Iteration 2: m_1 \approx 2.222, f(m_1) \approx 0.049; m_2 \approx 3.111, f(m_2) \approx 1.234; 0.049 < 1.234, so right = 3.111. Now [1.333, 3.111], width ≈1.778.
- Iteration 3: m_1 \approx 1.926, f(m_1) \approx 0.006; m_2 \approx 2.519, f(m_2) \approx 0.269; 0.006 < 0.269, so right = 2.519. Now [1.333, 2.519], width ≈1.185.
- Iteration 4: m_1 \approx 1.728, f(m_1) \approx 0.074; m_2 \approx 2.123, f(m_2) \approx 0.015; 0.074 > 0.015, so else: left = 1.728. Now [1.728, 2.123], width ≈0.395.
- The loop continues narrowing until right - left ≤ 0.1, converging near x = 2.
This trace demonstrates the progressive reduction of the interval toward the minimizer through repeated divisions and updates.[2]
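A runnable Python version of the iterative pseudocode (minimization form; names are illustrative):

```python
def iterative_ternary_search(f, left, right, epsilon=1e-9):
    """Iteratively narrow [left, right] toward the minimizer of a unimodal f."""
    while right - left > epsilon:
        m1 = left + (right - left) / 3
        m2 = right - (right - left) / 3
        if f(m1) < f(m2):
            right = m2  # minimum cannot lie in (m2, right]
        else:
            left = m1   # minimum cannot lie in [left, m1)
    return (left + right) / 2  # approximate minimizer

# The traced example: f(x) = (x - 2)^2 on [0, 4], minimum at x = 2.
print(iterative_ternary_search(lambda x: (x - 2)**2, 0.0, 4.0))  # approximately 2.0
```

Returning the midpoint of the final interval rather than its left endpoint halves the worst-case error; either choice lies within epsilon of the true minimizer.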
Analysis
Time Complexity
Ternary search operates by repeatedly reducing the search interval to two-thirds of its previous size in each iteration, as the algorithm divides the current interval into three equal parts and discards the one-third portion that cannot contain the optimum based on function evaluations at the division points.[7][11]
This leads to the recurrence relation for the time complexity T(n) = T\left(\frac{2n}{3}\right) + O(1), where n represents the length of the initial search interval and the O(1) term accounts for the constant-time operations, including two function evaluations per iteration.[7][11]
Solving this recurrence using the Master Theorem yields a worst-case time complexity of O(\log n), with the number of iterations specifically given by \log_{3/2} n, or approximately 1.709 \log_2 n iterations, each requiring constant time dominated by the function evaluations.[7]
In the continuous case, where the search continues until the interval length |right - left| < \epsilon for a specified precision \epsilon, the number of steps required is \log_{3/2} \left( \frac{\text{initial length}}{\epsilon} \right).[7][2]
Compared to binary search, which requires \log_2 n iterations with one function evaluation each, ternary search performs approximately 1.709 times as many iterations and thus roughly 3.42 times as many function evaluations in total, making it less efficient in terms of evaluation count despite the shared O(\log n) asymptotic complexity.[7]
Empirically, the two function evaluations per iteration in ternary search contrast with the single evaluation in binary search, contributing to the higher overall evaluation overhead.[2]
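The evaluation counts above can be checked numerically; this sketch (function names are illustrative) counts the iterations each method needs to shrink a search space by a factor of n:

```python
import math

def ternary_iterations(n):
    # Each iteration keeps 2/3 of the interval and spends 2 function evaluations.
    return math.ceil(math.log(n) / math.log(3 / 2))

def binary_iterations(n):
    # Each iteration keeps 1/2 of the interval and spends 1 function evaluation.
    return math.ceil(math.log(n) / math.log(2))

n = 1_000_000
t, b = ternary_iterations(n), binary_iterations(n)
# Ternary performs ~1.7x the iterations and ~3.4x the evaluations of binary.
print(t, b, 2 * t / b)
```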
Space Complexity
The space complexity of ternary search varies based on the implementation approach, with the iterative form being more memory-efficient than the recursive one.
The iterative implementation achieves O(1) space complexity, relying solely on a constant number of local variables to maintain the search interval boundaries (left and right endpoints), the two division points (m1 and m2), and the corresponding function evaluations at those points. No auxiliary data structures are required beyond these bounds, and function evaluations incur O(1) space assuming black-box access to the unimodal function.[2]
In the recursive implementation, space complexity is O(\log n), arising from the call stack whose depth equals the logarithmic number of recursive invocations needed to reduce the search space.[12][13]
Tail-call optimization, where supported by the compiler or runtime, can reduce the recursive form to O(1) space by reusing the stack frame, though this optimization is not guaranteed across programming languages and environments.[12]
Variants and Applications
Integer Array Variant
The integer array variant of ternary search adapts the algorithm to discrete unimodal sequences, specifically to locate the peak (maximum value) in an array A of size n that strictly increases up to some index i and then strictly decreases thereafter.[2][14] This setup assumes all elements are distinct integers, ensuring a unique peak.[2]
To apply ternary search, the search interval [left, right] is divided into three roughly equal parts using integer division: m1 = left + \lfloor (right - left)/3 \rfloor and m2 = right - \lfloor (right - left)/3 \rfloor.[2][14] The values A[m1] and A[m2] are compared: if A[m1] < A[m2], the peak must lie in the right two-thirds ([m1, right]), so the search continues there; otherwise, it lies in the left two-thirds ([left, m2]).[2][14] This eliminates one-third of the array per iteration while preserving unimodality in the reduced subarray.[2]
The base case occurs when right - left \leq 2, where the maximum is found by directly evaluating the remaining elements.[14]
The following iterative pseudocode implements this variant, returning the index of the maximum:
function ternarySearch(A, left, right):
    while right - left > 2:
        m1 = left + (right - left) // 3
        m2 = right - (right - left) // 3
        if A[m1] < A[m2]:
            left = m1
        else:
            right = m2
    # Check the remaining small interval
    max_index = left
    for i in range(left + 1, right + 1):
        if A[i] > A[max_index]:
            max_index = i
    return max_index
Consider the example array A = [1, 3, 5, 4, 2], with initial interval [0, 4] and peak at index 2 (value 5).[14]
- First iteration: m1 = 1, m2 = 3; A[1] = 3 < A[3] = 4, so search [1, 4].
- Second iteration: m1 = 2, m2 = 3; A[2] = 5 > A[3] = 4, so search [1, 3].
- The interval [1,3] has length 2, so the loop exits and the remaining elements are evaluated to find the maximum at index 2.
This reduces the search space logarithmically until the peak is isolated.[14]
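The pseudocode above, applied to the traced example, can be sketched as runnable Python (the function name is illustrative):

```python
def ternary_search_peak(A):
    """Return the index of the maximum of a strictly unimodal array A."""
    left, right = 0, len(A) - 1
    while right - left > 2:
        m1 = left + (right - left) // 3
        m2 = right - (right - left) // 3
        if A[m1] < A[m2]:
            left = m1   # peak lies in [m1, right]
        else:
            right = m2  # peak lies in [left, m2]
    # At most three candidates remain; scan them directly.
    return max(range(left, right + 1), key=lambda i: A[i])

print(ternary_search_peak([1, 3, 5, 4, 2]))  # index of the peak: 2
```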
Optimization Uses
Ternary search serves a crucial role in unimodal optimization by efficiently locating the maximum or minimum of a function that strictly increases to a peak and then decreases, or vice versa, within a defined interval.[2] This makes it suitable for one-dimensional problems where the objective function exhibits a single extremum, allowing the algorithm to narrow the search space iteratively until convergence.[14] In practice, it evaluates the function at two interior points to decide which subinterval to discard, reducing the interval length by approximately two-thirds per iteration.[7]
Beyond standalone use, ternary search integrates with higher-dimensional optimization as a subroutine for line searches, where it optimizes along a single direction while other methods, like coordinate descent, handle multiple variables.[15]
Real-world applications span diverse fields assuming unimodality. In signal processing, dynamic ternary search enables peak detection in Brillouin optical time-domain analysis (BOTDA) systems, where it tracks Brillouin frequency shifts in fiber optic sensors by iteratively narrowing the frequency scan range, achieving faster convergence than linear scans.[16] In game theory and artificial intelligence, ternary search can assist in optimal decision-making when payoff functions are unimodal.[17]
A key limitation of ternary search in optimization is its reliance on unimodality; without verification, multi-modal functions may lead it to local rather than global extrema, necessitating preprocessing or hybrid approaches for robustness.[2]
Comparisons
Ternary search and binary search are both divide-and-conquer algorithms, but they differ fundamentally in how they partition the search space and the types of problems they address. Binary search divides the interval into two equal parts by evaluating a single midpoint, discarding approximately half the space in each iteration based on a comparison with the target value; it is specifically designed for searching exact elements in sorted arrays or monotonic functions.[14] In contrast, ternary search partitions the interval into three roughly equal segments by evaluating two division points, discarding about one-third of the space per iteration, and is tailored for locating the maximum or minimum in unimodal functions where the objective increases to a peak (or decreases to a valley) and then reverses.[14] This distinction arises because binary search relies on the ordered, non-reversing nature of monotonic sequences, while ternary search exploits the single extremum property of unimodal ones to guide the discard decision.[18]
Efficiency-wise, binary search achieves a time complexity of O(\log_2 n) with one comparison per step, leading to approximately 1.443 \ln n total comparisons for large n. The sorted-array variant of ternary search keeps only one-third of the space per step, so it needs O(\log_3 n) steps, but spends two comparisons on each, resulting in about 1.820 \ln n comparisons—approximately 26% more than binary search, making it slower overall for sorted array searches despite discarding a larger fraction of the space per step.[19] This overhead stems from the additional evaluation, which outweighs the benefit of the space reduction. For instance, in a sorted array like [1, 3, 5, 7] searching for 5, binary search locates it in 2 steps: first comparing at 3 (target > 3, discard left half), then at 5 (match). On a unimodal array like [1, 3, 5, 3, 1] seeking the maximum at 5, ternary search typically requires more steps due to the two evaluations and partitioning to converge on the peak.[20]
Binary search is the preferred choice for exact discrete searches in sorted arrays, where its single-evaluation efficiency ensures optimal performance on monotonic data. Ternary search shines in scenarios requiring approximate location of continuous extrema without derivative information, such as optimization problems with unimodal objective functions. Although ternary search can be adapted to approximate binary search on monotonic cases by treating them as degenerate unimodal functions, it remains less optimal due to the extra computational cost per iteration.[14]
With Other Divide-and-Conquer Methods
Ternary search shares fundamental similarities with other divide-and-conquer methods for optimizing unimodal functions, such as golden section search and Fibonacci search. All three algorithms iteratively narrow the search interval by evaluating the function at strategically chosen points and discarding subintervals that cannot contain the extremum, rooted in the divide-and-conquer paradigm of partitioning the problem space to reduce computational effort.[2]
The golden section search, introduced by Kiefer in 1953, operates similarly to ternary search but employs asymmetric divisions based on the golden ratio \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618. In this method, the two probe points are placed at distances proportional to 1/\phi \approx 0.618 of the interval length from each endpoint, enabling the reuse of one function evaluation across iterations. As a result, after the initial two evaluations, each subsequent step requires only one new evaluation while reducing the interval by a factor of 1/\phi \approx 0.618, making it more efficient for continuous unimodal optimization by avoiding the recomputation of points that ternary search incurs with its two new evaluations per step.[2]
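To make the evaluation reuse concrete, a minimal golden-section sketch in Python (minimization form; names and tolerance are illustrative), in which each loop iteration computes exactly one new function value:

```python
import math

def golden_section_search(f, a, b, tol=1e-9):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ≈ 0.618
    c = b - inv_phi * (b - a)                 # left probe point
    d = a + inv_phi * (b - a)                 # right probe point
    fc, fd = f(c), f(d)                       # the only double evaluation
    while b - a > tol:
        if fc < fd:                           # minimum lies in [a, d]
            b, d, fd = d, c, fc               # old left probe becomes right probe
            c = b - inv_phi * (b - a)
            fc = f(c)                         # one new evaluation
        else:                                 # minimum lies in [c, b]
            a, c, fc = c, d, fd               # old right probe becomes left probe
            d = a + inv_phi * (b - a)
            fd = f(d)                         # one new evaluation
    return (a + b) / 2

print(golden_section_search(lambda x: (x - 2)**2, 0.0, 4.0))  # approximately 2.0
```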
Fibonacci search functions as a discrete counterpart to golden section search, leveraging the Fibonacci sequence to select division points within integer intervals. It divides the search space using ratios derived from consecutive Fibonacci numbers, achieving a time complexity of O(\log_\phi n) akin to the golden section method, and is advantageous for discrete problems as it eschews floating-point arithmetic entirely. This approach ensures optimal interval reduction for a fixed number of evaluations in unimodal discrete settings, mirroring the efficiency of golden section search but tailored to avoid approximation errors in integer domains.
While ternary search's equal division into thirds offers conceptual simplicity, it is less optimal than golden section or Fibonacci searches, which converge faster per evaluation due to their ratio-based divisions. The symmetric thirds in ternary search give a reduction factor of 2/3 \approx 0.667 per iteration at the cost of two new evaluations, whereas the asymmetric strategies reuse one evaluation per iteration, requiring fewer total function calls for equivalent precision. For instance, in optimizing a unimodal function like f(x) = -x^2 + x over [0, 1] to a tolerance of 10^{-6}, golden section search needs about 30 evaluations (one per iteration at a reduction factor of 0.618), compared to roughly 70 for ternary search (two per iteration at a reduction factor of 2/3), highlighting the practical efficiency gain in continuous cases.[2]
Ternary search is often preferred over these alternatives when implementation simplicity is paramount, as it avoids the need for irrational constants like \phi in golden section search or precomputed Fibonacci tables, making it more straightforward for quick prototyping or educational purposes without sacrificing much performance in small-scale problems.[2]