Golden-section search
The golden-section search is a numerical optimization technique for finding the minimum or maximum of a unimodal function—defined as a continuous function with exactly one extremum within a bounded interval—by iteratively narrowing the search interval using ratios derived from the golden ratio.[1][2][3] This derivative-free method is particularly useful when the function's derivative is unavailable or expensive to compute, as it relies solely on function evaluations to bracket and refine the location of the extremum.[2][3]
The algorithm begins by specifying an initial interval [a, b] that contains the unimodal function's extremum.[1] Two interior points, x_1 and x_2, are then selected such that x_1 = a + (1 - c)(b - a) and x_2 = a + c(b - a), where c = \frac{\sqrt{5} - 1}{2} \approx 0.618 is the reciprocal of the golden ratio \phi = \frac{1 + \sqrt{5}}{2}.[2][3] The function is evaluated at these points, and depending on which value is smaller (for minimization) or larger (for maximization), the interval is reduced to either [a, x_2] or [x_1, b], discarding the subinterval that cannot contain the extremum due to unimodality.[1][3] In subsequent iterations, one of the existing points is reused as a probe in the new interval, requiring only a single additional function evaluation, which ensures linear convergence by reducing the interval length by a factor of approximately c each step.[2] The process terminates when the interval width falls below a predefined tolerance \epsilon.[1]
This approach draws from the golden ratio's self-similar properties, which optimize the worst-case interval reduction in sequential search procedures, as formalized in Jack Kiefer's 1953 work on minimax strategies for locating a function's maximum.[2] It is closely related to Fibonacci search methods and is widely applied in one-dimensional optimization problems, such as line searches in multidimensional algorithms, due to its simplicity and guaranteed convergence under unimodality assumptions.[2][3]
Introduction
Definition and Purpose
The golden-section search is a technique for finding the minimum or maximum of a unimodal function f(x) over an initial interval [a, b] without requiring derivatives, relying instead on successive function evaluations to progressively narrow the search interval containing the extremum.[2] This method assumes the function is continuous and unimodal, meaning it possesses a single interior extremum within the interval, with the function values decreasing toward the extremum and increasing afterward.[1]
The primary purpose of the golden-section search is to efficiently bracket and converge to the extremum by using the reciprocal of the golden ratio (approximately 0.618) to divide the interval in a way that minimizes the total number of function evaluations needed for a given precision.[2] It achieves this by reducing the interval length by a constant factor related to the golden ratio at each iteration, ensuring predictable and optimal convergence for one-dimensional optimization problems.[1]
At a high level, the process initiates by selecting and evaluating two interior probe points within the initial interval, which is assumed to bracket the extremum, after which each subsequent step eliminates a portion of the interval based on function value comparisons, shrinking the search space to approximately 61.8% of its previous length per iteration.[2] A key advantage is its derivative-free nature, making it particularly robust for optimizing noisy functions or those that are computationally expensive to evaluate, such as in engineering simulations or black-box models where gradient information is unavailable or unreliable.[4]
Historical Background
The golden-section search was independently developed by Jack Kiefer in 1953 as part of a broader class of sequential search methods designed for locating extrema of unimodal functions.[5] This technique emerged during a period of growing interest in efficient, derivative-free optimization strategies, particularly for problems where function evaluations were costly.[6]
Kiefer's foundational work formalized the approach in his paper "Sequential minimax search for a maximum," published in the Proceedings of the American Mathematical Society, where he demonstrated the optimality of using the golden ratio to place probe points within an interval, minimizing the worst-case interval of uncertainty after each evaluation.[5] The method was later refined by Mordecai Avriel and Douglass J. Wilde in their 1966 paper "Optimal Search for a Maximum with Sequences of Simultaneous Function Evaluations," which extended the framework to handle parallel or sequential evaluations while preserving minimax properties under recurrent costs.[7]
The evolution of the golden-section search drew from earlier ideas involving Fibonacci sequences in optimization, such as those implicit in Kiefer's own minimax framework, which approximated golden-ratio divisions through discrete Fibonacci numbers.[5] However, it is distinguished by its reliance on fixed-ratio interval divisions based directly on the golden ratio, allowing consistent efficiency regardless of the total number of iterations, unlike variable-ratio Fibonacci methods that depend on precomputed sequence lengths.[7]
Since the 1970s, the golden-section search has been widely adopted in numerical analysis and optimization literature, appearing as a core example in influential textbooks such as R. P. Brent's Algorithms for Minimization without Derivatives (1973), which highlights its reliability for one-dimensional problems.[8] The core algorithm has remained fundamentally unchanged since its inception, though variants and refinements have continued to appear, along with new applications in engineering and computational problems.[8][9][4]
Assumptions and Prerequisites
The golden-section search method relies on the function being unimodal within the search interval, meaning that for a function f(x) on [a, b], it strictly decreases from a to a single interior minimum point x^* (where a < x^* < b) and then strictly increases from x^* to b.[2][4] This property ensures a unique extremum without additional local minima or maxima that could mislead the search.[1]
Key prerequisites include the function f(x) being continuous and evaluable at any point within the initial finite interval [a, b], which must fully contain the extremum x^*.[2][4] Unlike derivative-based methods, no gradient information is required, making the algorithm suitable for non-differentiable but continuous functions, provided that function evaluations are computationally feasible or inexpensive.[1]
These assumptions limit applicability: the method fails for multimodal functions with multiple extrema, as it may converge to a suboptimal point, and it does not handle discontinuities, which violate the continuity requirement and prevent guaranteed convergence.[2][4] Convergence efficiency specifically depends on strict unimodality, as violations can lead to incorrect bracketing or stalled progress.[1]
A representative example of a unimodal function is the quadratic f(x) = x^2 on the interval [-1, 1], which decreases from x = -1 to the minimum at x = 0 and then increases to x = 1.[4]
Mathematical Foundation
The Golden Ratio
The golden ratio, denoted by \phi, is the positive real solution to the equation \phi = 1 + \frac{1}{\phi}, which rearranges to the quadratic equation \phi^2 = \phi + 1. Solving for \phi gives \phi = \frac{1 + \sqrt{5}}{2} \approx 1.6180339887.[10]
This number is irrational, meaning it cannot be expressed as a ratio of integers, and its continued fraction expansion is the infinite periodic form [1; \overline{1}] = [1; 1, 1, 1, \dots], representing the simplest such expansion among quadratic irrationals. The golden ratio also emerges as the limit of the ratios of consecutive terms in the Fibonacci sequence, defined by F_1 = 1, F_2 = 1, and F_n = F_{n-1} + F_{n-2} for n > 2, such that \lim_{n \to \infty} \frac{F_{n+1}}{F_n} = \phi. Geometrically, \phi appears in the proportions of a regular pentagon, where the ratio of the length of a diagonal to the length of a side equals \phi.[10]
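These properties are easy to verify numerically; the short Python sketch below (purely illustrative, not drawn from the cited sources) checks the defining identity and the Fibonacci-ratio limit.

import math

phi = (1 + math.sqrt(5)) / 2               # golden ratio, ~1.6180339887

# Defining identity phi^2 = phi + 1 holds up to floating-point rounding
assert abs(phi ** 2 - (phi + 1)) < 1e-12

# Ratios of consecutive Fibonacci numbers converge to phi
f_prev, f_curr = 1, 1
for _ in range(30):
    f_prev, f_curr = f_curr, f_prev + f_curr
print(f_curr / f_prev, phi)                 # both print ~1.618033988749895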
In the golden-section search method, the golden ratio's defining property enables an optimal asymmetric division of a search interval [a, b] into two parts via probe points, such that the ratio of the whole interval to the larger subinterval equals the ratio of the larger subinterval to the smaller one, both equaling \phi. This self-similar division allows one probe point from the previous iteration to be reused in the next, reducing the required number of function evaluations while maintaining efficiency.[11]
The method's efficiency stems from a contraction factor of \frac{1}{\phi} \approx 0.618 per iteration, meaning the uncertainty interval's length is reduced to approximately 61.8% of its previous size, resulting in exponential convergence toward the optimum. This reduction rate, which minimizes the worst-case number of evaluations for unimodal functions under minimax criteria, was formalized in the foundational development of sequential search procedures.[11]
Properties of Interval Division
In the golden-section search, the interval [x_1, x_3] of length L is divided by placing two interior probe points x_2 and x_4 according to the golden ratio \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618. Specifically, x_2 = x_1 + (\phi - 1)L and x_4 = x_3 - (\phi - 1)L, where \phi - 1 = \frac{1}{\phi} \approx 0.618. This positioning maintains the ratio (x_2 - x_1) : (x_3 - x_2) = \phi : 1, ensuring the larger subinterval to the smaller subinterval is always \phi.[1][2]
A key efficiency property arises after evaluating the function at one of the new probe points: the search discards the subinterval that does not contain the extremum, retaining the larger subinterval as the new search domain, which has length reduced by the factor 1/\phi \approx 0.618 relative to L. Due to the symmetric placement enabled by the golden ratio, the previously evaluated interior point can be reused in the subsequent iteration without recomputation, requiring only one new function evaluation per step.[1][2][12]
Geometrically, the division scheme produces self-similar subdivisions, where each reduced interval replicates the proportional structure of the original, a direct consequence of the golden ratio's defining property that \phi = 1 + 1/\phi. This self-similarity optimizes the process by minimizing the worst-case number of evaluations needed to shrink the interval, outperforming equal-interval divisions (like bisection) in the minimax sense for unimodal functions, as the retained interval consistently avoids smaller segments in adversarial scenarios.[12]
The method exhibits linear convergence, with the interval length after n iterations given by L / \phi^n. Equivalently, reducing the interval by a factor of e^{-n} requires approximately n / \ln \phi \approx 2.078n evaluations, quantifying its reliable but only linear rate of convergence.[2][12]
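As a quick numerical check of these formulas (an illustrative sketch, not from the cited sources), the following Python fragment counts how many divisions by \phi are needed to shrink a unit interval below ε = 10⁻⁶ and compares the count with the closed-form estimate.

import math

phi = (1 + math.sqrt(5)) / 2
length, eps, n = 1.0, 1e-6, 0

while length > eps:                         # one golden-section iteration
    length /= phi                           # interval shrinks by 1/phi
    n += 1

# Closed-form estimate log_phi(L0 / eps); both values are 29
print(n, math.ceil(math.log(1.0 / eps) / math.log(phi)))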
Procedure
Initial Bracketing
In the golden-section search algorithm, the initial bracketing phase establishes a starting interval [a, b] that contains the extremum of a unimodal function f(x), ensuring the search begins with a reliable enclosure of the minimum or maximum. This interval is selected to be sufficiently wide based on domain knowledge of the function's behavior or preliminary evaluations at scattered points, often starting from two distinct initial guesses x_1 and x_2 where f(x_2) \leq f(x_1), and expanding outward if necessary to capture the suspected optimum.[13][12]
The bracketing process involves evaluating the function at the endpoints a and b, along with an interior probe point c (typically chosen near the midpoint or via an initial step), to form a triplet of points. Adjustments are made by shifting or expanding the interval until the condition f(a) > f(c) < f(b) (for minimization) is satisfied, confirming that the function decreases from a to c and increases from c to b, thereby enclosing the extremum within [a, b]. This setup requires three points to verify the bracketing, as the middle point's lower value relative to the endpoints demonstrates the unimodal nature by indicating a local decrease followed by an increase.[13][14]
If the initial evaluations fail to bracket the extremum—for instance, if the function continues to decrease beyond b or the unimodality is not evident—the interval should be expanded by taking larger steps outward from the current points until a suitable triplet is found, or the unimodality assumption should be re-evaluated through additional function assessments. This phase is crucial for the algorithm's reliability, as it provides a secure enclosure before proceeding to narrower searches.[12][14]
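A minimal bracketing routine along these lines can expand an initial step geometrically until the three-point condition holds; the Python sketch below is one possible realization (the starting points, growth factor, and function names are illustrative choices, not prescribed by the cited sources).

def bracket_minimum(f, x0, step=1.0, grow=2.0, max_expansions=50):
    """Return a triple (a, c, b) with a < c < b that brackets a minimum,
    i.e. f(c) is no larger than f(a) and f(b), assuming f is unimodal."""
    a, c = x0, x0 + step
    if f(c) > f(a):                 # walk downhill: reverse direction if needed
        a, c = c, a
        step = -step
    b = c + grow * step
    for _ in range(max_expansions):
        if f(b) > f(c):             # middle point is lowest: bracket found
            return tuple(sorted((a, c, b)))
        a, c = c, b                 # keep stepping downhill with larger steps
        step *= grow
        b = c + step
    raise RuntimeError("failed to bracket a minimum")

# Example: bracket the minimum of (x - 3)^2 starting from x0 = 0
print(bracket_minimum(lambda x: (x - 3) ** 2, 0.0))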
Probe Point Selection
In the golden-section search algorithm, after establishing an initial bracket [x_1, x_3] containing the minimum of a unimodal function f with an existing probe point x_2 where x_1 < x_2 < x_3, the next probe point x_4 is selected to maintain the golden ratio proportions while enabling reuse of the prior function evaluation at x_2.[2] Specifically, x_4 is computed as x_4 = x_1 + (x_3 - x_2), placing it symmetrically to x_2 within the bracket so that x_4 - x_1 = x_3 - x_2; whether x_4 falls to the left or right of x_2 depends on which side of the interval's midpoint x_2 occupies.[15] The function f(x_4) is then evaluated.[2]
The positions of the probe points within the current interval [a, b] (where a = x_1 and b = x_3) are given by
x_2 = b - \frac{b - a}{\phi}, \quad x_4 = a + \frac{b - a}{\phi},
with \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618 denoting the golden ratio.[2] These formulas position x_4 such that the ratio of the larger subinterval to the smaller one remains \phi, preserving the self-similar division property from the initial bracketing.[15]
Following the evaluation at x_4, the decision logic determines the new bracketing interval by comparing f(x_4) and f(x_2), assuming minimization. If f(x_2) < f(x_4), the new interval is [x_1, x_4] with probe points x_1, x_4, and the reused x_2 (now serving as the interior point), discarding the region beyond x_4. Otherwise, if f(x_2) \geq f(x_4), the new interval is [x_2, x_3] with probe points x_4 (reused in the new position), x_2, and x_3, discarding the region before x_2.[2] This ensures the discarded point lies outside the new interval, maintaining unimodality.[15]
The key benefit of this probe point selection is its efficiency: each iteration requires only one new function evaluation, as the existing evaluation at the reused point eliminates the need for a second, while the golden ratio guarantees a consistent reduction factor of approximately 0.618 in the interval length.[2] This approach, originally formulated as a minimax strategy for sequential search, optimizes the worst-case convergence for unimodal functions.[15]
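The reuse property can be checked with a few lines of arithmetic; the Python sketch below (values chosen for illustration) confirms that on [0, 1] the formula x_4 = x_1 + (x_3 - x_2) reproduces the golden-ratio probe and that, after the interval shrinks, the retained point already sits at a golden position of the new interval.

phi = (1 + 5 ** 0.5) / 2
x1, x3 = 0.0, 1.0
x2 = x3 - (x3 - x1) / phi                 # lower probe, ~0.382
x4 = x1 + (x3 - x1) / phi                 # upper probe, ~0.618

# x4 agrees with the symmetric formula x1 + (x3 - x2)
assert abs(x4 - (x1 + (x3 - x2))) < 1e-12

# Suppose f(x2) < f(x4): the new interval is [x1, x4] = [0, ~0.618],
# and the retained x2 is exactly the upper golden probe of that interval.
a, b = x1, x4
assert abs(x2 - (a + (b - a) / phi)) < 1e-12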
Termination Criteria
The golden-section search algorithm terminates when the length of the current uncertainty interval, |x₃ - x₁|, is less than a user-defined tolerance ε, which may be specified as either an absolute or relative value. This condition ensures that the interval containing the extremum has been sufficiently narrowed for practical purposes.[16]
To promote numerical stability, especially in implementations sensitive to floating-point precision, a relative tolerance is frequently used: |x₃ - x₁| < τ (|x₁| + |x₃|), where τ ≈ √ε_mach and ε_mach denotes the machine epsilon (typically around 2.22 × 10^{-16} for double-precision arithmetic). This approach avoids premature termination due to rounding errors while adapting to the scale of the search interval.[12]
At termination, the approximate location of the extremum is estimated as the midpoint of the final interval, (x₁ + x₃)/2, providing a balanced point within the reduced uncertainty region; alternatively, the probe point with the lowest function value among the evaluated points may serve as the estimate to leverage available function information.[17][1]
The method guarantees that the error in the estimated extremum position is at most ε/2, half the final interval length. The required number of iterations n to achieve this precision is approximately n ≈ \log_\phi (L_0 / \epsilon), where \phi = (1 + \sqrt{5})/2 is the golden ratio and L_0 is the initial interval length, reflecting the interval's geometric reduction by a factor of 1/\phi per iteration.[17]
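The relative stopping test above can be written compactly; the sketch below (the function name and example values are ours) uses the square root of machine epsilon as the default tolerance, following the convention cited.

import math
import sys

def should_stop(x1, x3, tau=math.sqrt(sys.float_info.epsilon)):
    """Relative-width termination test: |x3 - x1| < tau * (|x1| + |x3|)."""
    return abs(x3 - x1) < tau * (abs(x1) + abs(x3))

print(should_stop(1.0, 1.0 + 1e-9))   # True: width is far below the ~3e-8 scale
print(should_stop(0.0, 1.0))          # False: the interval is still wide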
Implementations
Iterative Algorithm
The iterative algorithm for golden-section search implements the procedure using a loop to repeatedly narrow the search interval until a termination criterion is met, typically based on the interval width or function value difference. This approach initializes the bounding interval [a, b] and two interior probe points x₁ and x₂ using the reciprocal of the golden ratio, then updates the interval based on function evaluations at these points, reusing one point in each iteration to minimize computations.[2][1]
The key steps begin by setting resφ = (√5 - 1)/2 ≈ 0.618, the reciprocal of the golden ratio. The initial probe points are then computed as x₁ = a + (b - a) × (1 - resφ) ≈ a + 0.382(b - a) (closer to a) and x₂ = a + (b - a) × resφ ≈ a + 0.618(b - a) (closer to b), with function values f(x₁) and f(x₂) evaluated. In the loop, while the interval length |b - a| exceeds a specified tolerance ε (e.g., 10^{-6}), the two stored function values are compared. If f(x₁) < f(x₂), the minimum lies in [a, x₂], so b is set to x₂, the old x₁ is reused as the new x₂, and a single new x₁ = a + (b - a) × (1 - resφ) is computed and evaluated; otherwise, the symmetric update occurs: a is set to x₁, the old x₂ is reused as the new x₁, and a new x₂ = a + (b - a) × resφ is evaluated. Each pass thus requires only one new function evaluation and reduces the interval by a factor of resφ.[2][1]
The following pseudocode outlines the iterative implementation for minimizing a unimodal function f over [a, b]:
function golden_section_iterative(f, a, b, tol=1e-6, max_iter=100):
    resphi = (sqrt(5) - 1) / 2
    x1 = a + (b - a) * (1 - resphi)
    x2 = a + (b - a) * resphi
    f1 = f(x1)
    f2 = f(x2)
    iter = 0
    while (b - a) > tol and iter < max_iter:
        if f1 < f2:
            b = x2
            x2 = x1
            f2 = f1
            x1 = a + (b - a) * (1 - resphi)
            f1 = f(x1)
        else:
            a = x1
            x1 = x2
            f1 = f2
            x2 = a + (b - a) * resphi
            f2 = f(x2)
        iter += 1
    return (a + b) / 2  # Approximate minimum location
This structure ensures only one new function evaluation per iteration after the initial two.[2][1]
The iterative formulation is memory-efficient, as it avoids the call stack overhead of recursive calls, preventing stack overflow for problems requiring many iterations (e.g., high precision), and is straightforward to implement in most programming languages without reliance on recursion support.[2]
To illustrate, consider minimizing f(x) = x² over [0, 2], a unimodal quadratic with minimum at x = 0. Initial length is 2. In the first iteration, x₁ ≈ 0.764, x₂ ≈ 1.236, f(x₁) ≈ 0.583 < f(x₂) ≈ 1.528, so the interval shrinks to [0, 1.236] (length ≈ 1.236). In the second iteration, new x₁ ≈ 0.472, new x₂ ≈ 0.764, f(new x₁) ≈ 0.223 < f(new x₂) ≈ 0.583, shrinking to [0, 0.764] (length ≈ 0.764). The interval reduces by ≈0.618 each step, converging toward 0.[2][1]
| Iteration | a | b | x₁ | x₂ | f(x₁) | f(x₂) | Length |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 2 | 0.764 | 1.236 | 0.583 | 1.528 | 2.000 |
| 1 | 0 | 1.236 | 0.472 | 0.764 | 0.223 | 0.583 | 1.236 |
| 2 | 0 | 0.764 | 0.292 | 0.472 | 0.085 | 0.223 | 0.764 |
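The trace in the table can be reproduced by a direct Python transcription of the pseudocode above; variable names largely follow the pseudocode (the loop counter is renamed to avoid shadowing Python's built-in iter), and the driver line at the end is illustrative.

import math

def golden_section_iterative(f, a, b, tol=1e-6, max_iter=100):
    resphi = (math.sqrt(5) - 1) / 2        # reciprocal golden ratio, ~0.618
    x1 = a + (b - a) * (1 - resphi)
    x2 = a + (b - a) * resphi
    f1, f2 = f(x1), f(x2)
    it = 0
    while (b - a) > tol and it < max_iter:
        if f1 < f2:                        # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - a) * (1 - resphi)
            f1 = f(x1)
        else:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (b - a) * resphi
            f2 = f(x2)
        it += 1
    return (a + b) / 2                     # approximate minimum location

print(golden_section_iterative(lambda x: x * x, 0.0, 2.0))   # ~0, within tol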
Recursive Algorithm
The recursive formulation of the golden-section search algorithm expresses the interval reduction process as a self-calling function that narrows the search space based on function evaluations at interior probe points, embodying a divide-and-conquer approach suitable for unimodal functions.[2] This structure leverages the properties of the golden ratio to ensure efficient contraction of the interval while minimizing redundant evaluations through careful parameter passing.
The core recursive function, typically denoted as search(a, b, f, tol), takes the current interval endpoints a and b, the objective function f, and a tolerance tol as inputs. In the base case, if the interval width |b - a| falls below tol, the algorithm terminates and returns the midpoint (a + b)/2 as an approximation of the extremum. Otherwise, it computes the probe points x1 and x2 within [a, b] using the golden ratio proportions—specifically, x1 = a + (3 - \sqrt{5})/2 \cdot (b - a) and x2 = a + (\sqrt{5} - 1)/2 \cdot (b - a) (or vice versa, depending on convention)—evaluates f(x1) and f(x2), and recursively invokes itself on the reduced subinterval: if f(x1) < f(x2), it searches [a, x2]; else, it searches [x1, b].[2]
To optimize efficiency and reuse prior evaluations, the recursive calls can pass the surviving probe point and its function value as additional parameters, avoiding recomputation of the retained interior point from the previous step; for instance, the function signature might extend to search(a, b, x_old, f_old, f, tol), where x_old and f_old carry over the reusable data, and only one new evaluation occurs per recursion level.[2] This adaptation maintains the algorithm's hallmark of requiring just one additional function evaluation per iteration after the initial two.
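A Python sketch of such a recursive variant is given below; the parameter layout (passing both interior probes and their values down each call) is one possible way to realize the reuse described above, not a canonical form.

import math

def gss_recursive(f, a, b, tol=1e-6, c=None, d=None, fc=None, fd=None):
    """Recursive golden-section minimization of a unimodal f on [a, b].
    The interior probes c < d and their values fc, fd are passed down so
    that each recursion level performs only one new function evaluation."""
    invphi = (math.sqrt(5) - 1) / 2           # 1/phi, ~0.618
    if c is None:                             # first call: place both probes
        c = b - (b - a) * invphi
        d = a + (b - a) * invphi
        fc, fd = f(c), f(d)
    if b - a < tol:                           # base case: interval small enough
        return (a + b) / 2
    if fc < fd:                               # minimum in [a, d]; reuse c as new d
        b, d, fd = d, c, fc
        c = b - (b - a) * invphi
        fc = f(c)
    else:                                     # minimum in [c, b]; reuse d as new c
        a, c, fc = c, d, fd
        d = a + (b - a) * invphi
        fd = f(d)
    return gss_recursive(f, a, b, tol, c, d, fc, fd)

print(gss_recursive(lambda x: (x - 1) ** 2, 0.0, 3.0))   # ~1.0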
The recursive structure offers a modular and intuitive representation of the optimization process, making it easy to see how successive interval reductions (each by the factor \approx 0.618) converge to the optimum, though it carries the inherent risk of stack overflow when deep recursion is required; tolerances near machine precision in double-precision arithmetic call for several dozen recursion levels, which can exceed limits on systems with shallow stack sizes.[2]
Applications
In Unconstrained Optimization
The golden-section search serves as a primary method for one-dimensional unimodal minimization in derivative-free unconstrained optimization, where no gradient information is available or reliable. It is particularly suited for locating the minimum of a unimodal function over a bounded interval by iteratively reducing the search space using the golden ratio, making it an efficient choice for scenarios requiring only function evaluations. This approach is foundational in numerical optimization for problems where the objective function is expensive to evaluate or lacks analytical derivatives.
In broader unconstrained optimization frameworks, the golden-section search is commonly integrated as a subroutine for line searches, such as determining optimal step sizes along a search direction in methods like gradient descent with inexact line searches or the Nelder-Mead simplex algorithm. For instance, it approximates the minimizer of the one-dimensional function φ(α) = f(x_k + α p_k), where x_k is the current iterate and p_k is the search direction, often combined with Wolfe conditions to ensure sufficient decrease and curvature. It also plays a role in higher-dimensional derivative-free methods, including pattern search algorithms, where it performs local line minimizations to refine trial points within a pattern of directions.
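The following Python sketch illustrates this line-search role on a small quadratic; the helper golden_step, the test function, and the starting point are all illustrative assumptions rather than part of any cited method.

import math

def golden_step(phi_fn, lo=0.0, hi=1.0, tol=1e-6):
    """Minimize a one-dimensional function phi_fn on [lo, hi] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2
    c, d = hi - (hi - lo) * invphi, lo + (hi - lo) * invphi
    fc, fd = phi_fn(c), phi_fn(d)
    while hi - lo > tol:
        if fc < fd:
            hi, d, fd = d, c, fc
            c = hi - (hi - lo) * invphi
            fc = phi_fn(c)
        else:
            lo, c, fc = c, d, fd
            d = lo + (hi - lo) * invphi
            fd = phi_fn(d)
    return (lo + hi) / 2

# One steepest-descent step for f(x, y) = x^2 + 10 y^2 starting at (1, 1)
f = lambda p: p[0] ** 2 + 10 * p[1] ** 2
grad = lambda p: (2 * p[0], 20 * p[1])
x_k = (1.0, 1.0)
p_k = tuple(-g for g in grad(x_k))                      # descent direction
phi_alpha = lambda a: f((x_k[0] + a * p_k[0], x_k[1] + a * p_k[1]))
print(golden_step(phi_alpha))                           # optimal step size, ~0.0505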
A key advantage of the golden-section search in practice is its robustness to noisy function evaluations, as it relies solely on comparisons rather than fitting sensitive models like quadratics, which can be misled by outliers. Compared to exhaustive grid search, it requires far fewer evaluations—scaling as O(log(1/ε)) to achieve precision ε, versus linear scaling in the resolution for grid methods—thus conserving computational resources for expensive objectives.[18]
However, the method has notable limitations in unconstrained optimization contexts. It demands approximately 2.078 ln(L_0/ε) function evaluations to reduce an initial interval of length L_0 to precision ε, rendering it relatively slow for applications requiring very high accuracy, since the convergence, while logarithmic in the required precision, carries a sizable constant factor. Additionally, its reliance on the unimodal assumption restricts it to problems with a single extremum in the bracketed interval; on multimodal landscapes with multiple local minima it may converge to a suboptimal point, and it offers no mechanism for handling constraints beyond the interval bounds themselves.
Engineering and Computational Uses
In engineering, the golden-section search is employed for parameter tuning in control systems, such as optimizing proportional-integral-derivative (PID) gains to achieve a unimodal response in process control. For instance, it facilitates switchable PID controller tuning by reducing model complexity through golden-section rules, enabling efficient handling of balanced, lag-dominant, and delay-dominant processes.[19] Similarly, in antenna design, it optimizes feed positions in circular microstrip antennas to maximize performance, often compared favorably against particle swarm optimization in simulations.[20]
Computationally, golden-section search supports hyperparameter tuning in machine learning models, particularly for one-dimensional parameters like learning rates in extreme learning machines, where it reduces search time compared to exhaustive methods while maintaining accuracy.[21] It also approximates root-finding in expensive physics simulations, such as those in finite element analysis for material stress, by minimizing objective functions derived from partial differential equations without requiring derivatives.[22]
The method is integrated into standard software libraries for practical use; SciPy's optimize.golden function implements it for scalar minimization over intervals. MATLAB's fminbnd employs golden-section search combined with parabolic interpolation for bounded univariate optimization.[23] Additionally, it serves as a local search component in evolutionary algorithms to refine initial bounds and improve convergence in complex optimization landscapes.[24]
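A brief usage sketch with SciPy (assuming SciPy is installed; minimize_scalar with method='golden' is the generic entry point, and the bracket triple shown is an illustrative choice):

from scipy.optimize import minimize_scalar

# Minimize (x - 2)^2 using SciPy's golden-section method; the bracket
# (0, 1, 4) satisfies f(1) < f(0) and f(1) < f(4), as required.
res = minimize_scalar(lambda x: (x - 2) ** 2, bracket=(0, 1, 4), method='golden')
print(res.x)   # ~2.0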
A representative case study involves optimizing beam width in structural engineering to minimize weight under load constraints, where golden-section search iteratively evaluates finite element simulations to identify the unimodal minimum, achieving convergence with fewer function calls than alternative one-dimensional methods.[9]
Fibonacci Search
Fibonacci search is a derivative-free optimization technique for locating the minimum or maximum of a unimodal function over a bounded interval, serving as a discrete analog to the golden-section search by employing ratios derived from the Fibonacci sequence to divide the search space.[25] The method approximates the golden ratio divisions central to golden-section search, as the ratio of consecutive Fibonacci numbers F_{n+1}/F_n converges to the golden ratio \phi \approx 1.618 as n increases, enabling efficient interval reduction without directly computing irrational values.[26] Introduced by Jack Kiefer in 1953 as a sequential minimax procedure, it ensures optimal performance in terms of the largest possible reduction in uncertainty for a fixed number of function evaluations.
The procedure begins by selecting an initial interval [a, b] of length scaled to the n-th Fibonacci number F_n, where the Fibonacci sequence is defined with F_1 = 1, F_2 = 1, and F_k = F_{k-1} + F_{k-2} for k > 2. Two probe points are placed within the interval: one at a distance of F_{n-2} \cdot (b - a) / F_n from a, and the other at F_{n-1} \cdot (b - a) / F_n from a. The function values at these points are evaluated, and the interval is reduced to either the left or right subinterval of length corresponding to F_{n-1} or F_{n-2} units, respectively, based on which subinterval is more likely to contain the extremum (determined by the unimodality assumption). This process decrements the Fibonacci index and repeats, requiring the precomputation of the Fibonacci sequence up to F_n, until the desired precision is achieved after a finite number of steps.[27][26]
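A Python sketch following this procedure is shown below; the stopping point (one step before the two probes would coincide) and the final comparison are implementation choices made here for clarity, not part of the original formulation.

def fibonacci_search_min(f, a, b, n=20):
    """Fibonacci search for the minimum of a unimodal f on [a, b],
    using the first n Fibonacci numbers (roughly n - 2 evaluations)."""
    F = [0, 1, 1]                         # F[1] = F[2] = 1, F[k] = F[k-1] + F[k-2]
    while len(F) <= n:
        F.append(F[-1] + F[-2])

    k = n
    x1 = a + F[k - 2] / F[k] * (b - a)
    x2 = a + F[k - 1] / F[k] * (b - a)
    f1, f2 = f(x1), f(x2)
    while k > 4:                          # stop before the probes would coincide
        k -= 1
        if f1 < f2:                       # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[k - 2] / F[k] * (b - a)
            f1 = f(x1)
        else:                             # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[k - 1] / F[k] * (b - a)
            f2 = f(x2)
    if f1 < f2:                           # use the final pair of evaluations
        b = x2
    else:
        a = x1
    return (a + b) / 2

print(fibonacci_search_min(lambda x: (x - 2) ** 2, 0.0, 4.0))   # ~2.0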
A key distinction from golden-section search lies in its use of integer-based Fibonacci ratios, which allow for exact finite-step execution without involving irrational numbers like \phi, though it converges asymptotically to the same reduction rate of approximately 0.618 per iteration.[26] This discrete nature makes it particularly suitable for scenarios requiring integer precision or where computing the golden ratio is undesirable due to floating-point limitations, while still achieving equivalent efficiency in the limit of large n.[27]
Ternary Search
Ternary search is a divide-and-conquer algorithm designed to locate the minimum or maximum of a unimodal function within a bounded interval by repeatedly dividing the search space into three equal parts.[28] It selects two probe points that trisect the interval, evaluates the function at these points, and discards the outer third of the interval containing the less favorable values, narrowing the search to the remaining two-thirds.[29] This method assumes the function is unimodal, meaning it decreases to a minimum (or increases to a maximum) and then increases (or decreases) thereafter.
The procedure begins with an initial interval [a, b]. The probe points are calculated as m_1 = a + \frac{b - a}{3} and m_2 = a + \frac{2(b - a)}{3}. The function values f(m_1) and f(m_2) are then compared: if f(m_1) < f(m_2), the new interval becomes [a, m_2] since the minimum must lie in the left or middle third; otherwise, the new interval is [m_1, b].[28] This process repeats iteratively until the interval length falls below a specified tolerance \varepsilon, at which point the minimum is approximated by the midpoint or a final evaluation.[29] Each iteration requires two new function evaluations, making it straightforward for implementation in various programming contexts.
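A direct Python sketch of this procedure (names and the example function are illustrative):

def ternary_search_min(f, a, b, eps=1e-6):
    """Locate the minimum of a unimodal f on [a, b] by repeated trisection."""
    while b - a > eps:
        m1 = a + (b - a) / 3
        m2 = a + 2 * (b - a) / 3
        if f(m1) < f(m2):       # the minimum cannot lie in (m2, b]
            b = m2
        else:                   # the minimum cannot lie in [a, m1)
            a = m1
    return (a + b) / 2

print(ternary_search_min(lambda x: (x - 1.5) ** 2, 0.0, 4.0))   # ~1.5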
In terms of convergence, ternary search reduces the interval length by a factor of \frac{2}{3} per iteration, leading to an interval size of (b - a) \left( \frac{2}{3} \right)^k after k steps.[28] The number of iterations needed to achieve precision \varepsilon is approximately k = \frac{\log((b - a)/\varepsilon)}{\log(3/2)}, resulting in O(\log((b - a)/\varepsilon)) time complexity assuming constant-time function evaluations.[28] Compared to golden-section search, which achieves a tighter reduction factor of approximately 0.618 per step while reusing one evaluation after the initial pair, ternary search converges more slowly due to its larger retention factor and fixed two evaluations per iteration.[30] This trade-off favors ternary search for its simplicity—no irrational constants like the golden ratio are needed—but at the cost of approximately 2.4 times as many function evaluations asymptotically.[30]
Bisection Method
The bisection method is a bracketing root-finding technique used to locate roots of the equation f(x) = 0 for a continuous function f, given an initial interval [a, b] where f(a) and f(b) have opposite signs, ensuring at least one root exists within the interval by the intermediate value theorem.
The algorithm proceeds iteratively as follows: compute the midpoint m = \frac{a + b}{2}; if f(a) \cdot f(m) < 0, update the interval to [a, m] by setting b = m; otherwise, update to [m, b] by setting a = m; repeat until the interval width falls below a specified tolerance.
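A Python sketch of these steps (function and variable names are illustrative):

def bisect_root(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0.0:
            return m             # landed exactly on a root
        if fa * fm < 0:          # sign change in [a, m]: the root lies there
            b = m
        else:                    # otherwise the root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(bisect_root(lambda x: x ** 2 - 2, 0.0, 2.0))   # ~1.4142135 (sqrt(2))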
This halving of the interval at each step yields linear convergence, with the error bound reducing by a factor of 1/2 per iteration, guaranteeing convergence regardless of the function's shape as long as the initial bracketing condition holds.
Unlike the golden-section search, which is designed for derivative-free minimization or maximization of unimodal functions by comparing function values at interior points to reduce the search interval, the bisection method focuses on locating zeros through sign changes and does not directly apply to extrema-seeking without modification.[31]
The bisection method typically requires two initial function evaluations to confirm bracketing and one evaluation per iteration thereafter, achieving faster interval reduction (factor of 0.5) compared to the golden-section search's factor of approximately 0.618, though the latter avoids needing sign-change information and is tailored for optimization rather than root-finding.[32]