Golden-section search

The golden-section search is a numerical optimization technique for finding the minimum or maximum of a unimodal function (defined as a function with exactly one extremum within a bounded interval) by iteratively narrowing the search interval using ratios derived from the golden ratio. This derivative-free method is particularly useful when the function's derivative is unavailable or expensive to compute, as it relies solely on function evaluations to bracket and refine the location of the extremum. The algorithm begins by specifying an initial interval [a, b] that contains the function's extremum. Two interior points, x_1 and x_2, are then selected such that x_1 = a + (1 - c)(b - a) and x_2 = a + c(b - a), where c = \frac{\sqrt{5} - 1}{2} \approx 0.618 is the reciprocal of the golden ratio \phi = \frac{1 + \sqrt{5}}{2}. The function is evaluated at these points, and depending on which value is smaller (for minimization) or larger (for maximization), the interval is reduced to either [a, x_2] or [x_1, b], discarding the subinterval that cannot contain the extremum due to unimodality. In subsequent iterations, one of the existing points is reused as a probe in the new interval, requiring only a single additional evaluation, which ensures linear convergence by reducing the interval length by the constant factor c \approx 0.618 each step. The process terminates when the interval width falls below a predefined tolerance \epsilon. This approach draws from the golden ratio's self-similar properties, which optimize the worst-case interval reduction in sequential search procedures, as formalized in Jack Kiefer's 1953 work on sequential minimax strategies for locating a function's maximum. It is closely related to Fibonacci search methods and is widely applied in one-dimensional optimization problems, such as line searches in multidimensional algorithms, due to its simplicity and guaranteed convergence under unimodality assumptions.

Introduction

Definition and Purpose

The golden-section search is a technique for finding the minimum or maximum of a unimodal function f(x) over an initial interval [a, b] without requiring derivatives, relying instead on successive function evaluations to progressively narrow the search interval containing the extremum. This method assumes the function is continuous and unimodal, meaning it possesses a single interior extremum within the interval, with the function values decreasing toward the extremum and increasing afterward (for a minimum). The primary purpose of the golden-section search is to efficiently bracket and converge to the extremum by leveraging the reciprocal of the golden ratio (approximately 0.618) to divide the interval in a way that minimizes the total number of function evaluations needed for a given tolerance. It achieves this by reducing the interval length by a constant factor related to the golden ratio at each iteration, ensuring predictable and optimal convergence for one-dimensional optimization problems. At a high level, the process initiates by selecting and evaluating two interior probe points within the initial interval, which is assumed to bracket the extremum, after which each subsequent step eliminates a portion of the interval based on function value comparisons, shrinking the search space to approximately 61.8% of its previous length per iteration. A key advantage is its derivative-free nature, making it particularly robust for optimizing noisy functions or those that are computationally expensive to evaluate, such as in simulations or black-box models where derivative information is unavailable or unreliable.

Historical Background

The golden-section search was developed by Jack Kiefer in 1953 as part of a broader class of sequential search methods designed for locating extrema of unimodal functions. This technique emerged during a period of growing interest in efficient, derivative-free sequential search strategies, particularly for problems where function evaluations were costly. Kiefer's foundational work formalized the approach in his paper "Sequential minimax search for a maximum," published in the Proceedings of the American Mathematical Society, where he demonstrated the optimality of using the golden ratio to place probe points within an interval, minimizing the worst-case length of the interval of uncertainty after each evaluation. The method was later refined by Mordecai Avriel and Douglass J. Wilde in their 1966 paper "Optimal Search for a Maximum with Sequences of Simultaneous Function Evaluations," which extended the framework to handle parallel as well as sequential evaluations while preserving its minimax optimality properties. The evolution of the golden-section search drew from earlier ideas involving Fibonacci sequences in optimization, such as those implicit in Kiefer's own framework, which approximated golden-ratio divisions through discrete Fibonacci numbers. However, it is distinguished by its reliance on fixed-ratio interval divisions based directly on the golden ratio, allowing consistent efficiency regardless of the total number of iterations, unlike variable-ratio Fibonacci methods that depend on precomputed sequence lengths. Since the 1970s, the golden-section search has been widely adopted in numerical analysis and optimization literature, appearing as a core example in influential textbooks such as R. P. Brent's Algorithms for Minimization without Derivatives (1973), which highlights its reliability for one-dimensional problems. The core algorithm has remained fundamentally unchanged since its introduction, though variants and improvements have been developed since 2000, including applications in engineering and computational problems as of 2025.

Assumptions and Prerequisites

The golden-section search method relies on the function being unimodal within the search interval, meaning that for a minimization problem with f(x) on [a, b], the function strictly decreases from a to a single interior minimum point x^* (where a < x^* < b) and then strictly increases from x^* to b. This property ensures a unique extremum without additional local minima or maxima that could mislead the search. Key prerequisites include the function f(x) being continuous and evaluable at any point within the initial finite interval [a, b], which must fully contain the extremum x^*. Unlike derivative-based methods, no gradient information is required, making the algorithm suitable for non-differentiable but continuous functions, provided that function evaluations are computationally feasible or inexpensive. These assumptions limit applicability: the method fails for multimodal functions with multiple extrema, as it may converge to a suboptimal point, and it does not handle discontinuities, which violate the continuity requirement and prevent guaranteed convergence. Convergence efficiency specifically depends on strict unimodality, as violations can lead to incorrect bracketing or stalled progress. A representative example of a unimodal function is the quadratic f(x) = x^2 on the interval [-1, 1], which decreases from x = -1 to the minimum at x = 0 and then increases to x = 1.

Mathematical Foundation

The Golden Ratio

The golden ratio, denoted by \phi, is the positive real solution to the equation \phi = 1 + \frac{1}{\phi}, which rearranges to the quadratic equation \phi^2 = \phi + 1. Solving for \phi gives \phi = \frac{1 + \sqrt{5}}{2} \approx 1.6180339887. This number is irrational, meaning it cannot be expressed as a ratio of integers, and its continued fraction expansion is the infinite periodic form [1; \overline{1}] = [1; 1, 1, 1, \dots], representing the simplest such expansion among quadratic irrationals. The golden ratio also emerges as the limit of the ratios of consecutive terms in the Fibonacci sequence, defined by F_1 = 1, F_2 = 1, and F_n = F_{n-1} + F_{n-2} for n > 2, such that \lim_{n \to \infty} \frac{F_{n+1}}{F_n} = \phi. Geometrically, \phi appears in the proportions of a regular pentagon, where the ratio of the length of a diagonal to the length of a side equals \phi. In the golden-section search method, the golden ratio's defining property enables an optimal asymmetric division of a search interval [a, b] into two parts via probe points, such that the ratio of the whole interval to the larger subinterval equals the ratio of the larger subinterval to the smaller one, both equaling \phi. This self-similar division allows one probe point from the previous iteration to be reused in the next, reducing the required number of function evaluations while maintaining efficiency. The method's efficiency stems from a contraction factor of \frac{1}{\phi} \approx 0.618 per iteration, meaning the uncertainty interval's length is reduced to approximately 61.8% of its previous size, resulting in geometric shrinkage of the uncertainty interval and linear convergence toward the optimum. This reduction rate, which minimizes the worst-case number of evaluations for unimodal functions under minimax criteria, was formalized in the foundational development of sequential search procedures.
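The reuse property mentioned above pins this ratio down uniquely. Working on the normalized interval [0, 1] (a short clarifying derivation, not a quotation from the cited sources), place the probes at fractions 1 - c and c with c > 1/2; if the interval is cut back to [0, c], the surviving probe at 1 - c can be reused only if it sits at fraction c of the new interval, that is, at c \cdot c of the original:

1 - c = c^2 \quad\Longrightarrow\quad c^2 + c - 1 = 0 \quad\Longrightarrow\quad c = \frac{\sqrt{5} - 1}{2} = \frac{1}{\phi} \approx 0.618.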

Properties of Interval Division

In the golden-section search, the interval [x_1, x_3] of length L is divided by placing two interior probe points x_2 and x_4 according to the golden ratio \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618. Specifically, x_2 = x_1 + (\phi - 1)L and x_4 = x_3 - (\phi - 1)L, where \phi - 1 = \frac{1}{\phi} \approx 0.618. This positioning maintains the ratio (x_2 - x_1) : (x_3 - x_2) = \phi : 1, ensuring that the ratio of the larger subinterval to the smaller subinterval is always \phi. A key efficiency property arises after evaluating the function at one of the new probe points: the search discards the subinterval that cannot contain the extremum, retaining as the new search domain an interval whose length is reduced by the factor 1/\phi \approx 0.618 relative to L. Due to the symmetric placement enabled by the golden ratio, the previously evaluated interior point can be reused in the subsequent iteration without recomputation, requiring only one new function evaluation per step. Geometrically, the division scheme produces self-similar subdivisions, where each reduced interval replicates the proportional structure of the original, a direct consequence of the golden ratio's defining property that \phi = 1 + 1/\phi. This self-similarity optimizes the process by minimizing the worst-case number of evaluations needed to shrink the interval, outperforming equal-interval divisions (such as ternary search) in the minimax sense for unimodal functions, because the retained interval is as small as possible in adversarial scenarios. The method exhibits linear convergence, with the interval length after n iterations given by L / \phi^n. Equivalently, to reduce the interval by a factor of e^{-n}, approximately n / \ln \phi \approx 2.078n evaluations are required, reflecting its reliable but only linearly convergent performance.
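As a quick numerical illustration (a minimal sketch, not drawn from the cited sources), the following Python snippet verifies the reuse property on the unit interval: after one reduction step, the surviving interior point lies exactly at the golden proportion of the new, shorter interval.

from math import sqrt
phi = (1 + sqrt(5)) / 2              # golden ratio, ~1.618
inv_phi = 1 / phi                    # reduction factor, ~0.618
a, b = 0.0, 1.0                      # original interval of length L = 1
x_low = b - (b - a) * inv_phi        # interior point at ~0.382
x_high = a + (b - a) * inv_phi       # interior point at ~0.618
new_a, new_b = a, x_high             # suppose the extremum lies in [a, x_high]
# The surviving probe x_low sits at the 0.618 position of the new interval,
# so it can be reused without a fresh function evaluation.
print((x_low - new_a) / (new_b - new_a), inv_phi)   # both print ~0.6180339887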

Procedure

Initial Bracketing

In the golden-section search procedure, the initial bracketing establishes a starting interval [a, b] that contains the extremum of a unimodal function f(x), ensuring the search begins with a reliable bracketing of the minimum or maximum. This interval is selected to be sufficiently wide based on knowledge of the function's behavior or preliminary evaluations at scattered points, often starting from two distinct initial guesses x_1 and x_2 where f(x_2) \leq f(x_1), and expanding outward if necessary to capture the suspected optimum. The bracketing process involves evaluating the function at the endpoints a and b, along with an interior probe point c (typically chosen near the midpoint or via an initial step), to form a triplet of points. Adjustments are made by shifting or expanding the interval until the condition f(a) > f(c) < f(b) (for minimization) is satisfied, confirming that the function decreases from a to c and increases from c to b, thereby enclosing the extremum within [a, b]. This setup requires three points to verify the bracket, as the middle point's lower value relative to the endpoints demonstrates the unimodal nature by indicating a local decrease followed by an increase. If the initial evaluations fail to bracket the extremum, for instance if the function continues to decrease beyond b or the unimodality is not evident, the interval should be expanded by taking larger steps outward from the current points until a suitable triplet is found, or the unimodality assumption should be re-evaluated through additional function assessments. This phase is crucial for the algorithm's reliability, as it provides a secure enclosure before proceeding to narrower searches.
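A simple expansion-based bracketing routine can automate this phase; the sketch below (the helper name bracket_minimum, the initial step, and the growth factor are illustrative choices, not part of any standard library) steps downhill with growing strides until a valid triplet is found.

def bracket_minimum(f, x0, step=1.0, grow=2.0, max_expansions=50):
    """Expand outward from x0 until a triplet a < c < b with f(c) < f(a)
    and f(c) < f(b) is found (minimization). Returns (a, c, b)."""
    a, c = x0, x0 + step
    fa, fc = f(a), f(c)
    if fc > fa:
        a, c = c, a                  # step downhill: reverse direction
        fa, fc = fc, fa
        step = -step
    b = c + grow * step
    fb = f(b)
    for _ in range(max_expansions):
        if fc < fa and fc < fb:
            return (a, c, b) if a < b else (b, c, a)
        a, fa = c, fc                # keep stepping downhill with growing strides
        c, fc = b, fb
        step *= grow
        b = c + step
        fb = f(b)
    raise RuntimeError("failed to bracket a minimum; check unimodality")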

Probe Point Selection

In the golden-section search algorithm, after establishing an initial bracket [x_1, x_3] containing the minimum of a unimodal function f with an existing probe point x_2 where x_1 < x_2 < x_3, the next probe point x_4 is selected to maintain the golden ratio proportions while enabling reuse of the prior function evaluation at x_2. Specifically, x_4 is computed as x_4 = x_1 + (x_3 - x_2), ensuring that the distance from x_2 to x_3 equals the distance from x_1 to x_4, so the new point mirrors x_2 within the bracket. The function f(x_4) is then evaluated. The positions of the probe points within the current interval [a, b] (where a = x_1 and b = x_3) are given by x_2 = b - \frac{b - a}{\phi}, \quad x_4 = a + \frac{b - a}{\phi}, with \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618 denoting the golden ratio. These formulas position x_4 such that the ratio of the larger subinterval to the smaller one remains \phi, preserving the self-similar division property from the initial bracketing. Following the evaluation at x_4, the decision logic determines the new bracketing interval by comparing f(x_4) and f(x_2), assuming minimization. If f(x_2) < f(x_4), the new bracket is [x_1, x_4], with the reused x_2 serving as its interior point and the region beyond x_4 discarded. Otherwise, if f(x_2) \geq f(x_4), the new bracket is [x_2, x_3], with x_4 reused as its interior point and the region before x_2 discarded. This ensures that the discarded region lies outside the new interval, preserving the bracket on the extremum. The key benefit of this probe point selection is its efficiency: each iteration requires only one new function evaluation, as the existing evaluation at the reused point eliminates the need for a second, while the golden ratio guarantees a consistent reduction factor of approximately 0.618 in the interval length. This approach, originally formulated as a minimax strategy for sequential search, optimizes the worst-case convergence for unimodal functions.
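The single step described here can be written compactly; the fragment below (a schematic sketch with names chosen to match this section, performing exactly one new evaluation of f per call) returns the updated bracket together with the surviving interior point and its value.

def golden_step(f, x1, x2, f2, x3):
    """One golden-section step for minimization: x1 < x2 < x3 bracket the
    minimum and f2 = f(x2) is already known."""
    x4 = x1 + (x3 - x2)              # new probe, mirrored across the bracket
    f4 = f(x4)
    if x4 < x2:                      # keep the labels ordered so that x2 < x4
        x2, x4 = x4, x2
        f2, f4 = f4, f2
    if f2 < f4:
        return x1, x2, f2, x4        # minimum bracketed by [x1, x4]; x2 survives
    else:
        return x2, x4, f4, x3        # minimum bracketed by [x2, x3]; x4 survives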

Termination Criteria

The golden-section search algorithm terminates when the length of the current uncertainty interval, |x₃ - x₁|, is less than a user-defined tolerance ε, which may be specified as either an absolute or a relative value. This condition ensures that the interval containing the extremum has been sufficiently narrowed for practical purposes. To promote numerical stability, especially in implementations sensitive to floating-point precision, a relative tolerance is frequently used: |x₃ - x₁| < τ (|x₁| + |x₃|), where τ ≈ √ε_mach and ε_mach denotes the machine epsilon (typically around 2.22 × 10^{-16} for double-precision arithmetic). This approach avoids premature termination due to rounding errors while adapting to the scale of the search interval. At termination, the approximate location of the extremum is estimated as the midpoint of the final interval, (x₁ + x₃)/2, providing a balanced point within the reduced uncertainty region; alternatively, the probe point with the lowest function value among those evaluated may serve as the estimate to leverage available function information. With the midpoint estimate, the error in the extremum position is at most ε/2, half the final interval length. The required number of iterations n to achieve this precision is approximately n ≈ \log_\phi (L_0 / \epsilon), where \phi = (1 + \sqrt{5})/2 is the golden ratio and L_0 is the initial interval length, reflecting the interval's geometric reduction by a factor of 1/\phi per iteration.
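For example, the short calculation below (a sketch using Python's math module; the values L_0 = 2 and ε = 10^{-6} are illustrative) estimates the iteration count, which may differ by one or two from a given implementation depending on the exact stopping test:

from math import log, sqrt, ceil
phi = (1 + sqrt(5)) / 2
L0, eps = 2.0, 1e-6
# Interval length after n iterations is L0 / phi**n, so require L0 / phi**n <= eps.
n = ceil(log(L0 / eps) / log(phi))
print(n)  # 31 iterations for these values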

Implementations

Iterative Algorithm

The iterative algorithm for golden-section search implements the procedure using a loop to repeatedly narrow the search interval until a termination criterion is met, typically based on the interval width or the difference in function values. This approach initializes the bounding interval [a, b] and two interior probe points x₁ and x₂ using the reciprocal of the golden ratio, then updates the interval based on function evaluations at these points, reusing one point in each iteration to minimize computations. The key steps begin by setting resphi = (√5 - 1)/2 ≈ 0.618, the reciprocal of the golden ratio. The initial probe points are then computed as x₁ = a + (b - a) × (1 - resphi) ≈ a + 0.382(b - a) (closer to a) and x₂ = a + (b - a) × resphi ≈ a + 0.618(b - a) (closer to b), with function values f(x₁) and f(x₂) evaluated. In the loop, while the interval length |b - a| exceeds a specified tolerance ε (e.g., 10^{-6}), the comparison of f(x₁) and f(x₂) determines which side to discard, and a single new probe point is placed symmetrically within the retained interval and evaluated. If f(x₁) < f(x₂), the minimum lies in [a, x₂], so b is set to x₂, x₂ to x₁, and x₁ is recomputed as a + (b - a) × (1 - resphi); otherwise, the symmetric update occurs: a becomes x₁, x₁ becomes x₂, and x₂ is recomputed as a + (b - a) × resphi. This reduces the interval by a factor of resphi each step. The following Python implementation outlines the iterative procedure for minimizing a unimodal function f over [a, b]:
from math import sqrt
def golden_section_iterative(f, a, b, tol=1e-6, max_iter=100):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    resphi = (sqrt(5) - 1) / 2          # 1/phi, approximately 0.618
    x1 = a + (b - a) * (1 - resphi)     # interior point closer to a
    x2 = a + (b - a) * resphi           # interior point closer to b
    f1 = f(x1)
    f2 = f(x2)
    iterations = 0
    while (b - a) > tol and iterations < max_iter:
        if f1 < f2:
            # Minimum lies in [a, x2]: shrink from the right.
            b = x2
            x2, f2 = x1, f1             # reuse the surviving evaluation
            x1 = a + (b - a) * (1 - resphi)
            f1 = f(x1)
        else:
            # Minimum lies in [x1, b]: shrink from the left.
            a = x1
            x1, f1 = x2, f2             # reuse the surviving evaluation
            x2 = a + (b - a) * resphi
            f2 = f(x2)
        iterations += 1
    return (a + b) / 2                  # approximate minimum location
This structure ensures only one new function evaluation per iteration after the initial two. The iterative formulation is memory-efficient, as it avoids the call stack overhead of recursive calls, preventing stack overflow for problems requiring many iterations (e.g., high precision), and is straightforward to implement in most programming languages without reliance on recursion support. To illustrate, consider minimizing f(x) = x² over [0, 2], a unimodal quadratic with minimum at x = 0. Initial length is 2. In the first iteration, x₁ ≈ 0.764, x₂ ≈ 1.236, f(x₁) ≈ 0.583 < f(x₂) ≈ 1.528, so the interval shrinks to [0, 1.236] (length ≈ 1.236). In the second iteration, new x₁ ≈ 0.472, new x₂ ≈ 0.764, f(new x₁) ≈ 0.223 < f(new x₂) ≈ 0.583, shrinking to [0, 0.764] (length ≈ 0.764). The interval reduces by ≈0.618 each step, converging toward 0.
Iteration    a      b        x₁       x₂       f(x₁)    f(x₂)    Length
0            0      2        0.764    1.236    0.583    1.528    2.000
1            0      1.236    0.472    0.764    0.223    0.583    1.236
2            0      0.764    0.292    0.472    0.085    0.223    0.764
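Assuming the golden_section_iterative routine defined above, the worked example can be reproduced directly; the call below is a usage sketch rather than part of any cited implementation:

x_min = golden_section_iterative(lambda x: x**2, 0.0, 2.0, tol=1e-6)
print(x_min)  # prints a value very close to 0, within the 1e-6 interval tolerance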

Recursive Algorithm

The recursive formulation of the golden-section search algorithm expresses the interval reduction process as a self-calling function that narrows the search space based on function evaluations at interior probe points, embodying a divide-and-conquer approach suitable for unimodal functions. This structure leverages the properties of the golden ratio to ensure efficient contraction of the interval while minimizing redundant evaluations through careful parameter passing. The core recursive function, typically denoted search(a, b, f, tol), takes the current interval endpoints a and b, the objective function f, and a tolerance tol as inputs. In the base case, if the interval width |b - a| falls below tol, the algorithm terminates and returns the midpoint (a + b)/2 as an approximation of the extremum. Otherwise, it computes the probe points x1 and x2 within [a, b] using the golden-ratio proportions, specifically x1 = a + (3 - \sqrt{5})/2 \cdot (b - a) and x2 = a + (\sqrt{5} - 1)/2 \cdot (b - a) (or vice versa, depending on convention), evaluates f(x1) and f(x2), and recursively invokes itself on the reduced subinterval: if f(x1) < f(x2), it searches [a, x2]; else, it searches [x1, b]. To optimize efficiency and reuse prior evaluations, the recursive calls can pass the surviving probe point and its function value as additional parameters, avoiding recomputation of the retained interior point from the previous step; for instance, the function signature might extend to search(a, b, x_old, f_old, f, tol), where x_old and f_old carry over the reusable data, and only one new evaluation occurs per recursion level. This adaptation maintains the algorithm's hallmark of requiring just one additional function evaluation per iteration after the initial two. The recursive structure offers a modular and intuitive representation of the optimization process, making it easy to see how successive interval contractions (each by the factor \approx 0.618) converge to the optimum, though it carries the risk of deep call stacks in implementations requiring many levels of recursion, typically around 45 to 70 levels for tolerances approaching machine precision in double-precision arithmetic, a modest depth that can still matter in environments with tightly limited stack space.
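A sketch of such a recursive implementation in Python (the function name and exact parameter list are illustrative; the parameter passing mirrors the reuse idea described above) might look like the following.

from math import sqrt
INVPHI = (sqrt(5) - 1) / 2           # 1/phi, ~0.618
INVPHI2 = (3 - sqrt(5)) / 2          # 1/phi^2, ~0.382
def golden_section_recursive(f, a, b, tol=1e-6, x1=None, f1=None, x2=None, f2=None):
    """Recursively minimize a unimodal f on [a, b]; interior points and their
    values are passed down so each level needs only one new evaluation."""
    if (b - a) <= tol:
        return (a + b) / 2                       # base case: interval small enough
    if x1 is None:                               # first call: set up both probes
        x1 = a + INVPHI2 * (b - a)
        x2 = a + INVPHI * (b - a)
        f1, f2 = f(x1), f(x2)
    if f1 < f2:
        # Minimum in [a, x2]: the old x1 becomes the new right-hand probe.
        new_x1 = a + INVPHI2 * (x2 - a)
        return golden_section_recursive(f, a, x2, tol, new_x1, f(new_x1), x1, f1)
    else:
        # Minimum in [x1, b]: the old x2 becomes the new left-hand probe.
        new_x2 = x1 + INVPHI * (b - x1)
        return golden_section_recursive(f, x1, b, tol, x2, f2, new_x2, f(new_x2))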

Applications

In Unconstrained Optimization

The golden-section search serves as a primary method for one-dimensional unimodal minimization in derivative-free optimization, where no gradient information is available or reliable. It is particularly suited for locating the minimum of a unimodal function over a bounded interval by iteratively reducing the search space using the golden ratio, making it an efficient choice for scenarios requiring only function evaluations. This approach is foundational in numerical optimization for problems where the objective function is expensive to evaluate or lacks analytical derivatives. In broader unconstrained optimization frameworks, the golden-section search is commonly integrated as a subroutine for line searches, such as determining step sizes along a search direction in gradient descent with inexact line searches or in direction-set methods built from repeated one-dimensional minimizations. For instance, it approximates the minimizer of the one-dimensional function φ(α) = f(x_k + α p_k), where x_k is the current iterate and p_k is the search direction, often combined with Wolfe conditions to ensure sufficient decrease and curvature. It also plays a role in higher-dimensional derivative-free methods, including pattern search algorithms, where it performs local line minimizations to refine trial points within a pattern of directions. A key advantage of the golden-section search in practice is its robustness to noisy function evaluations, as it relies solely on comparisons rather than fitting sensitive models such as quadratics, which can be misled by outliers. Compared to exhaustive grid search, it requires far fewer evaluations, scaling as O(log(1/ε)) to achieve precision ε versus linear scaling in the resolution for grid methods, thus conserving computational resources for expensive objectives. However, the method has notable limitations in unconstrained optimization contexts. It demands approximately 2.078 ln(1/ε) function evaluations to reach precision ε on a unit-length interval, rendering it relatively slow for applications requiring very high accuracy, since the convergence, although logarithmic in the tolerance, carries a sizable constant factor. Additionally, its reliance on the unimodal assumption restricts it to single-extremum problems, making it unsuitable for multimodal landscapes where multiple local minima may exist, and it offers no mechanism for handling constraints.
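To make the line-search role concrete, the sketch below (assuming the golden_section_iterative routine defined earlier; the driver function, the bound alpha_max, and the test problem are illustrative assumptions) chooses each gradient-descent step length by minimizing φ(α) = f(x_k + α p_k) over a bounded range.

import numpy as np
def gradient_descent_with_golden_line_search(f, grad, x0, alpha_max=1.0, iters=50, tol=1e-6):
    """Gradient descent in which each step length is chosen by a golden-section
    search along the descent direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        p = -grad(x)                                 # descent direction
        if np.linalg.norm(p) < tol:
            break
        phi_alpha = lambda alpha: f(x + alpha * p)   # one-dimensional restriction of f
        alpha = golden_section_iterative(phi_alpha, 0.0, alpha_max, tol=1e-8)
        x = x + alpha * p
    return x
# Example: minimize f(x, y) = x^2 + 10*y^2 starting from (3, 1); converges near (0, 0).
f = lambda v: v[0]**2 + 10 * v[1]**2
grad = lambda v: np.array([2 * v[0], 20 * v[1]])
print(gradient_descent_with_golden_line_search(f, grad, [3.0, 1.0]))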

Engineering and Computational Uses

In engineering, the golden-section search is employed for parameter tuning in control systems, such as optimizing proportional-integral-derivative (PID) gains to achieve a unimodal response in process control. For instance, it facilitates switchable PID controller tuning by reducing model complexity through golden-section rules, enabling efficient handling of balanced, lag-dominant, and delay-dominant processes. Similarly, in antenna design, it optimizes feed positions in circular microstrip antennas to maximize performance, often compared against particle swarm optimization in simulations, with close agreement reported. Computationally, golden-section search supports hyperparameter tuning in machine learning models, particularly for one-dimensional parameters such as learning rates in extreme learning machines, where it reduces search time compared to exhaustive methods while maintaining accuracy. It is also used in expensive physics simulations, such as finite element analysis for material stress, by minimizing objective functions derived from partial differential equations without requiring derivatives. The method is integrated into standard software libraries for practical use; SciPy's optimize.golden function implements it for scalar minimization over intervals, and MATLAB's fminbnd employs golden-section search combined with parabolic interpolation for bounded univariate optimization. Additionally, it serves as a local search component in evolutionary algorithms to refine initial bounds and improve convergence in complex optimization landscapes. A representative case study involves optimizing beam width in structural engineering to minimize weight under load constraints, where golden-section search iteratively evaluates finite element simulations to identify the unimodal minimum, achieving convergence with fewer function calls than alternative one-dimensional methods.

Fibonacci Search

Fibonacci search is a derivative-free optimization technique for locating the minimum or maximum of a unimodal function over a bounded interval, serving as a discrete analog to the golden-section search by employing ratios derived from the Fibonacci sequence to divide the search space. The method approximates the golden-ratio divisions central to golden-section search, as the ratio of consecutive Fibonacci numbers F_{n+1}/F_n converges to the golden ratio \phi \approx 1.618 as n increases, enabling efficient interval reduction without directly computing irrational values. Introduced by Jack Kiefer in 1953 as a sequential minimax procedure, it ensures optimal performance in terms of the largest possible reduction in uncertainty for a fixed number of function evaluations. The procedure begins by selecting an initial interval [a, b] of length scaled to the n-th Fibonacci number F_n, where the sequence is defined by F_1 = 1, F_2 = 1, and F_k = F_{k-1} + F_{k-2} for k > 2. Two probe points are placed within the interval: one at a distance of F_{n-2} \cdot (b - a) / F_n from a, and the other at F_{n-1} \cdot (b - a) / F_n from a. The function values at these points are evaluated, and the interval is reduced to either the left or the right subinterval, each of length F_{n-1} \cdot (b - a) / F_n, depending on which subinterval must contain the extremum under the unimodality assumption. This process decrements the Fibonacci index and repeats, requiring the precomputation of the sequence up to F_n, until the desired precision is achieved after a finite number of steps.
A key distinction from golden-section search lies in its use of integer-based Fibonacci ratios, which allow exact finite-step execution without involving irrational numbers like \phi, though it converges asymptotically to the same reduction rate of approximately 0.618 per iteration. This discrete nature makes it particularly suitable for scenarios requiring integer precision or where computing the golden ratio is undesirable because of floating-point limitations, while still achieving equivalent asymptotic efficiency in the limit of large n.

Ternary Search

Ternary search is a derivative-free search algorithm designed to locate the minimum or maximum of a unimodal function within a bounded interval by repeatedly dividing the search space into three equal parts. It selects two probe points that trisect the interval, evaluates the function at these points, and discards the outer third adjacent to the less favorable value, narrowing the search to the remaining two-thirds. This method assumes the function is unimodal, meaning it decreases to a minimum (or increases to a maximum) and then increases (or decreases) thereafter. The procedure begins with an initial interval [a, b]. The probe points are calculated as m_1 = a + \frac{b - a}{3} and m_2 = a + \frac{2(b - a)}{3}. The function values f(m_1) and f(m_2) are then compared: if f(m_1) < f(m_2), the new interval becomes [a, m_2], since the minimum must lie in the left or middle third; otherwise, the new interval is [m_1, b]. This process repeats iteratively until the interval length falls below a specified tolerance \varepsilon, at which point the minimum is approximated by the midpoint or by the best evaluated probe point. Each iteration requires two new function evaluations, making it straightforward to implement in various programming contexts; a minimal sketch follows this discussion. In terms of convergence, ternary search reduces the interval length by a factor of \frac{2}{3} per iteration, leading to an interval size of (b - a) \left( \frac{2}{3} \right)^k after k steps. The number of iterations needed to achieve precision \varepsilon is approximately k = \frac{\log((b - a)/\varepsilon)}{\log(3/2)}, resulting in O(\log((b - a)/\varepsilon)) time complexity assuming constant-time function evaluations. Compared to golden-section search, which achieves a tighter reduction factor of approximately 0.618 per step while reusing one evaluation after the initial pair, ternary search converges more slowly because of its larger retention factor and its fixed two evaluations per iteration. This trade-off favors ternary search for its simplicity, since no irrational constants like the golden ratio are needed, but at the cost of approximately 2.4 times as many function evaluations asymptotically.
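A minimal ternary-search sketch (standard textbook form, using the same conventions as the Python code earlier in this article) illustrates the two evaluations per iteration:

def ternary_search_min(f, a, b, eps=1e-6):
    """Minimize a unimodal f on [a, b] by ternary search."""
    while (b - a) > eps:
        m1 = a + (b - a) / 3
        m2 = a + 2 * (b - a) / 3
        if f(m1) < f(m2):
            b = m2                   # minimum lies in [a, m2]
        else:
            a = m1                   # minimum lies in [m1, b]
    return (a + b) / 2
print(ternary_search_min(lambda x: x**2, 0.0, 2.0))   # converges toward 0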

Bisection Method

The bisection method is a root-finding technique used to locate solutions of f(x) = 0 for a continuous function f, given an initial interval [a, b] where f(a) and f(b) have opposite signs, ensuring at least one root exists within the interval by the intermediate value theorem. The algorithm proceeds iteratively as follows: compute the midpoint m = \frac{a + b}{2}; if f(a) \cdot f(m) < 0, update the interval to [a, m] by setting b = m; otherwise, update to [m, b] by setting a = m; repeat until the interval width falls below a specified tolerance. This halving of the interval at each step yields linear convergence, with the error bound reduced by a factor of 1/2 per iteration, guaranteeing convergence regardless of the function's shape as long as the initial sign condition holds. Unlike the golden-section search, which is designed for derivative-free minimization or maximization of unimodal functions by comparing values at interior points to reduce the search interval, the bisection method locates zeros through sign changes and does not directly apply to extremum-seeking without modification (for example, applying it to the derivative f' when that derivative is available). The bisection method typically requires two initial function evaluations to confirm the sign change and one evaluation per iteration thereafter, achieving faster interval reduction (a factor of 0.5) than the golden-section search's factor of approximately 0.618, though the latter avoids needing sign-change information and is tailored for optimization rather than root-finding.
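For contrast, a minimal bisection sketch for root-finding (a generic textbook form, not tied to any cited implementation):

def bisection_root(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm            # root lies in [a, m]
        else:
            a, fa = m, fm            # root lies in [m, b]
    return (a + b) / 2
print(bisection_root(lambda x: x*x - 2, 0.0, 2.0))    # ~1.41421356, the square root of 2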

References

  1. Chapter 09.01: Golden Section Search Method (PDF). Math for College.
  2. Golden Section Search Method (PDF).
  3. One-dimensional Optimization (PDF).
  4. Optimization of One-Dimensional Functions Using the Golden ... (PDF). September 30, 2025.
  5. Sequential Minimax Search for a Maximum. Semantic Scholar.
  6. The Publications and Writings of Jack Kiefer. JSTOR. Lists: Sequential minimax search for a maximum, Proc. Amer. Math. Soc. 4, 502-506, 1953.
  7. Mordecai Avriel and Douglass J. Wilde (1966). Optimal Search for a Maximum with Sequences of Simultaneous Function Evaluations. Management Science 12(9): 722.
  8. R. P. Brent (1973). Algorithms for Minimization without Derivatives. Internet Archive.
  9. Properties of the Golden Ratio and Fibonacci Numbers (PDF).
  10. Optimization (PDF). Princeton University, Department of Computer Science.
  11. 10.1 Golden Section Search in One Dimension (PDF).
  12. One-Dimensional Optimization (PDF). Rose-Hulman Institute of Technology.
  13. 10.1 Golden Section Search in One Dimension (PDF).
  15. Technical Note—A Stopping Criterion for the Golden-Ratio Search.
  16. Golden Section Search (PDF). EMPossible.
  17. An Algorithm for Online Optimization of Accelerators (PDF).
  18. Switchable PID Controller Tuning Based on Golden Section ...
  19. Optimization of Feed Position of Circular Microstrip Antenna Using PSO.
  20. Fast Tuning of Extreme Learning Machine Neural Networks Based ...
  21. An Optimization Technique Using the Finite Element Method ... (PDF). CORE.
  22. fminbnd - Find Local Minimum of Single-Variable Function on Fixed ... MATLAB documentation.
  23. A Novel Local Search Method for LSGO with Golden Ratio and ... August 28, 2020.
  24. New One-Dimensional Search Iteration Algorithm and Engineering ...
  25. Sequential Minimax Search for a Maximum. JSTOR.
  26. Numerical Optimization (PDF). UCI Mathematics.
  27. MATH3016: Optimization (PDF).
  28. CSE 417: Binary Search (pt 3) (PDF). University of Washington. January 14, 2018.
  29. CS 240E S25 (PDF).
  30. Ternary Search. Algorithms for Competitive Programming. March 24, 2025.
  31. Lecture 8: Optimization, Golden Section Search (PDF).