
Brent's method

Brent's method is a robust root-finding algorithm designed to locate a zero of a continuous univariate function f(x) within an initial bracketing interval [a, b] where f(a) and f(b) have opposite signs, a condition that ensures the existence of at least one root by the intermediate value theorem. It hybridizes the reliability of the bisection method with the efficiency of interpolation-based approaches, specifically the secant method for linear interpolation and inverse quadratic interpolation using the three most recent evaluations, to generate trial points while maintaining a bracket around the root. Developed by Richard P. Brent, the method guarantees convergence to a root under mild continuity assumptions, with the bracketing interval shrinking monotonically even in the presence of rounding errors, and requires only function evaluations, not derivatives. Originally published in 1971 as an algorithm with guaranteed convergence for finding function zeros, Brent's method builds on earlier work, notably Dekker's method, by incorporating safeguards against interpolation failures, such as reverting to bisection when a proposed step would violate the bracketing condition or make insufficient progress. Brent later expanded on related techniques in his 1973 book Algorithms for Minimization Without Derivatives, where the root-finding procedure serves as a foundation for derivative-free optimization.

The algorithm's design emphasizes practicality: it performs at most one function evaluation per iteration and dynamically selects between interpolation types based on the available points and tolerance criteria, typically achieving superlinear convergence faster than pure bisection while avoiding the instability of standalone secant or quadratic methods. In practice, Brent's method is widely implemented in numerical libraries, such as SciPy's brentq function and the GNU Scientific Library, because of its balance of speed and reliability for one-dimensional problems in scientific computing. It excels in scenarios where the function may be noisy or expensive to evaluate, as the bisection fallback ensures progress toward the root regardless of how well the interpolation performs. Related techniques, including Brent's separate derivative-free minimization method based on parabolic interpolation, extend its principles to optimization contexts.

Background and Prerequisites

Root-Finding Overview

The root-finding problem in numerical analysis involves identifying a point x^* such that f(x^*) = 0 for a given continuous univariate function f: \mathbb{R} \to \mathbb{R}. This task is fundamental to solving nonlinear equations arising in science and engineering, where the goal is to approximate the root to a desired accuracy. When f(a) and f(b) exhibit opposite signs for a < b, the intermediate value theorem ensures the existence of at least one root in the interval (a, b), providing a foundational guarantee that a zero lies within the bracketed domain. This bracketing assumption leverages the continuity of f to confirm a sign change, enabling reliable localization of roots without prior knowledge of the function's behavior elsewhere. Bracketing methods, which start from an initial interval with a sign change and iteratively shrink it while maintaining the bracket, guarantee convergence to a root, unlike open methods such as Newton-Raphson that can diverge if the initial approximation is poor. These approaches are especially suitable for black-box functions, where only function evaluations are available and derivatives cannot be computed or provided. Historically, root-finding techniques progressed from the dependable but linearly convergent bisection method, which serves as a baseline by repeatedly halving the interval, to hybrid methods that combine bracketing safeguards with superlinear convergence for improved efficiency. This evolution addresses the trade-off between reliability and speed in practical computation.
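
To make the bracketing baseline concrete, the following sketch applies plain bisection to a bracketed interval. It is a minimal illustration, not any library's implementation; the name bisect_root and the tolerance handling are assumptions.
python
def bisect_root(f, a, b, tol=1e-12, max_iter=200):
    """Plain bisection: assumes f is continuous and f(a), f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs: no guaranteed bracket")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or 0.5 * abs(b - a) < tol:
            return m
        # Keep the half-interval that still brackets the root.
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# The intermediate value theorem guarantees a root of x^3 - 2x - 5 in [2, 3].
print(bisect_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0))  # approximately 2.0945515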

Key Component Methods

The bisection method is a fundamental bracketing technique for root-finding that guarantees convergence under the intermediate value theorem, provided the initial interval [a, b] contains a root and f(a) and f(b) have opposite signs. It proceeds by computing the midpoint m = (a + b)/2 and evaluating f(m); the subinterval containing the root is then selected by retaining the endpoint where f has the opposite sign to f(m), halving the interval length at each step. This process yields linear convergence, with the error bound reduced by a factor of 1/2 per iteration, so the root is isolated to within \epsilon after approximately \log_2((b - a)/\epsilon) steps.

In contrast, the secant method employs linear interpolation to approximate the root without a bracketing guarantee, using two initial points x_{n-1} and x_n to form a secant line. The next iterate is given by x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}, which can be written as x_{n+1} = b - f(b) \frac{b - a}{f(b) - f(a)} when using points a and b. This approach achieves superlinear convergence of order approximately 1.618 (the golden ratio), making it faster than bisection for well-behaved functions, though it demands careful initial guesses and can diverge or cycle if the points are poorly chosen. Historically, related linear-interpolation ideas trace back to ancient false-position calculations, such as those preserved in the Rhind Mathematical Papyrus (circa 1650 BCE), evolving into an iterative form by the 17th century.

For higher-order approximation, inverse quadratic interpolation fits a quadratic polynomial inversely, treating x as a function of y = f(x), through three distinct points (a, f(a)), (b, f(b)), and (c, f(c)), then evaluates the interpolant at y = 0. In Lagrange form the resulting iterate is s = \frac{a\, f(b) f(c)}{(f(a) - f(b))(f(a) - f(c))} + \frac{b\, f(a) f(c)}{(f(b) - f(a))(f(b) - f(c))} + \frac{c\, f(a) f(b)}{(f(c) - f(a))(f(c) - f(b))}, which is well defined only when the three function values are pairwise distinct. This method offers superlinear convergence of order approximately 1.839 for smooth functions near the root, outperforming the secant method in efficiency when applicable, but it fails when two function values coincide and can produce trial points outside the bracket.

These component methods embody key trade-offs in root-finding: bisection prioritizes reliability through guaranteed monotonic interval reduction but sacrifices speed with its linear convergence, while the secant and inverse quadratic methods leverage interpolation for superlinear rates with one evaluation per step, yet compromise robustness by lacking inherent bracketing and potentially diverging without safeguards. Hybrid approaches address this by using bisection's stability to maintain a bracket while opportunistically applying faster interpolation when conditions permit, balancing computational efficiency with assured convergence in practical implementations.
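
To make the three update rules concrete, the following sketch computes one bisection, secant, and inverse quadratic interpolation step from the same points. The helper names and the sample function are illustrative assumptions, not taken from any particular library.
python
def bisection_step(a, b):
    """Midpoint of the current bracket."""
    return 0.5 * (a + b)

def secant_step(a, fa, b, fb):
    """Linear interpolation through (a, f(a)) and (b, f(b)); assumes fa != fb."""
    return b - fb * (b - a) / (fb - fa)

def inverse_quadratic_step(a, fa, b, fb, c, fc):
    """Lagrange form of inverse quadratic interpolation evaluated at y = 0;
    assumes fa, fb, fc are pairwise distinct."""
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

# Illustration on f(x) = x^2 - 2, whose positive root is sqrt(2) ~= 1.41421:
f = lambda x: x * x - 2.0
a, b, c = 1.0, 2.0, 1.5
print(bisection_step(a, b))                                # 1.5
print(secant_step(a, f(a), b, f(b)))                       # ~1.3333
print(inverse_quadratic_step(a, f(a), b, f(b), c, f(c)))   # ~1.4095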

Dekker's Method

Core Algorithm

Dekker's method is a hybrid root-finding algorithm designed to find a zero of a continuous function f within an initial interval [a_0, b_0] where f(a_0) \cdot f(b_0) < 0, blending the sure but slow bisection method with the faster but potentially unreliable secant method. The algorithm prefers the secant step when it promises useful progress without violating the bracket, and falls back to bisection otherwise, so the interval containing the root keeps shrinking while the sign-change property is preserved. Originally described by T. J. Dekker in 1969, the method laid the groundwork for subsequent improvements in reliable numerical root-finding.

The algorithm begins with initialization: select an initial bracket [a_0, b_0] such that f(a_0) \cdot f(b_0) < 0, and set the previous iterate b_{-1} = a_0. Here a_k serves as the "contrapoint", keeping the sign of f(a_k) opposite to that of f(b_k), while b_k is the current best approximation to the root. Secant approximations use consecutive b iterates rather than the full bracket endpoints, which helps avoid poor steps computed from distant points.

In each iteration k \geq 0, the algorithm computes two candidate points. The secant step is s = b_k - \frac{(b_k - b_{k-1}) f(b_k)}{f(b_k) - f(b_{k-1})}, a linear interpolation between b_{k-1} and b_k, while the bisection step is the midpoint m = \frac{a_k + b_k}{2}. The secant step leverages local information for potentially superlinear progress; the bisection step is the safe fallback.

The selection rule determines the next iterate b_{k+1}: the secant point s is chosen if it lies strictly between b_k and the midpoint m, so that it stays inside the bracket and moves toward the root; otherwise the bisection point m is used. The contrapoint is then updated so that f(a_{k+1}) \cdot f(b_{k+1}) < 0: if the previous best point b_k now has the opposite sign to f(b_{k+1}), set a_{k+1} = b_k, and otherwise keep a_{k+1} = a_k. If |f(a_{k+1})| < |f(b_{k+1})|, the two points are swapped so that b_{k+1} remains the best approximation, and the old b_k is retained as the previous iterate for the next secant computation.

Termination occurs when the interval width |b_k - a_k| falls below a specified tolerance \epsilon, or when |f(b_k)| \approx 0 within machine precision, ensuring the root is approximated to the desired accuracy. Because the secant step is only required to land between b_k and the midpoint, however, Dekker's rule does not enforce a minimum amount of interval reduction per step, so its worst-case behavior can be far slower than bisection, a weakness discussed below.
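
A compact sketch of this iteration follows. It is a simplified illustration under the rules just described (the function name, variable names, and the exact swap handling are assumptions, and presentations of Dekker's method differ in such details); it is not Dekker's original code.
python
def dekker(f, a, b, tol=1e-12, max_iter=100):
    """Simplified Dekker iteration: take the secant step when it falls between
    the current iterate b and the midpoint m, otherwise bisect."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    if abs(fa) < abs(fb):                  # keep b as the best approximation
        a, b, fa, fb = b, a, fb, fa
    b_prev, fb_prev = a, fa                # previous iterate for the secant step
    for _ in range(max_iter):
        if abs(b - a) < tol or fb == 0:
            return b
        m = 0.5 * (a + b)                  # bisection candidate
        if fb != fb_prev:
            s = b - fb * (b - b_prev) / (fb - fb_prev)   # secant candidate
        else:
            s = m
        # Accept the secant point only if it lies strictly between b and m.
        b_next = s if min(b, m) < s < max(b, m) else m
        f_next = f(b_next)
        b_prev, fb_prev = b, fb
        if fa * f_next > 0:                # old b becomes the new contrapoint
            a, fa = b, fb
        b, fb = b_next, f_next
        if abs(fa) < abs(fb):              # restore |f(b)| <= |f(a)|
            a, b, fa, fb = b, a, fb, fa
    return b

# Example: root of x^3 - 2x - 5 in [2, 3] (approximately 2.0945515).
print(dekker(lambda x: x**3 - 2*x - 5, 2.0, 3.0))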

Limitations and Challenges

One notable limitation of Dekker's method is its potential for slow convergence in certain scenarios, particularly when successive secant steps land only marginally closer to the endpoint with the smaller absolute function value, so that the bracketing interval barely shrinks and the algorithm degenerates into a long run of near-stagnant steps. In such cases the method fails to exploit the superlinear potential of the secant approximation and can require many times the number of function evaluations that pure bisection would need for the same reduction, with worst-case pathological functions driving the count far beyond the roughly \log_2((b - a)/\epsilon) evaluations of bisection. The algorithm also exhibits failure modes, including inefficiency when a secant step is poorly conditioned or the function is non-monotonic within the initial bracket, although safeguards such as the fallback to bisection mitigate outright loss of the bracket in standard implementations. Additionally, the method is sensitive to the initial bracket width; wide intervals on nearly flat functions can produce a long sequence of excessively small steps, requiring on the order of (b - a)/\delta evaluations, where \delta is the minimal step size, and prolonging convergence unacceptably.

Empirical observations from early implementations and tests confirm these issues, with the algorithm occasionally demanding dramatically more iterations in adverse cases, such as functions with multiple roots or vanishing derivatives, far exceeding the efficiency of pure bisection. For instance, on ill-conditioned problems it may take more than twice the evaluations of improved hybrids before terminating within tolerance. Furthermore, Dekker's method lacks built-in safeguards against inefficient paths, such as explicit checks ensuring that each step reduces the interval by at least a fraction of the previous reduction, allowing sequences of near-stagnant progress without intervention. This absence contributes to its vulnerability on badly behaved functions, where no mechanism enforces superlinear progress or bounds the deviation from bisection-like performance.

Brent's Method

Key Modifications

Brent's method, introduced by Richard P. Brent in 1971, builds upon Dekker's earlier algorithm by incorporating targeted enhancements that improve robustness and convergence speed while maintaining guaranteed progress of the bracketing interval. These modifications address the stagnation and unreliable steps possible in Dekker's method by prioritizing safer interpolation choices and stricter interval controls. A primary enhancement is the addition of inverse quadratic interpolation, which is employed when three points with distinct function values are available, allowing a quadratic fit through the inverse data for a more accurate approximation than the linear secant step used otherwise. Fitting the inverse function avoids the cost and branching of solving a direct quadratic (no square roots are required) and typically accelerates convergence without risking bracket violation. To prevent overshooting and ensure steady interval reduction, Brent introduced tolerance and bracketing safeguards: the proposed iterate s from interpolation is accepted only if it lies within the current bracket, if it makes sufficient progress relative to earlier iterations (in particular, the new step must be smaller than half the step taken two iterations earlier), and if it satisfies tolerance-based conditions on the step size; when any of these tests fails, the algorithm takes a bisection step instead, preserving reliable bracketing. These safeguards also bound the worst case: because persistently slow interpolation progress forces bisection, an initial interval that would require N bisection steps to meet the tolerance needs at most on the order of N^2 function evaluations, while typical problems need far fewer. For continuously differentiable functions, these changes yield superlinear convergence, combining the sureness of bisection with the rapidity of higher-order methods.
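
As an illustration of the kind of acceptance test involved, the sketch below checks whether a proposed interpolated point should be used or whether the iteration should bisect instead. It is a simplified, assumed formulation (names and exact inequalities are illustrative); Brent's actual criterion is phrased in terms of the quantities p, q, and e shown in the pseudocode later in the article.
python
def accept_interpolation(s, a, b, prev_prev_step, tol):
    """Simplified Brent-style acceptance test for a proposed iterate s.

    a, b           : current bracket endpoints (b is the best approximation)
    prev_prev_step : size of the step taken two iterations earlier
    tol            : absolute tolerance on the step size

    Returns True if the interpolated point may be used; otherwise the caller
    falls back to bisection.  Real implementations also replace negligible
    steps with a tolerance-sized step rather than bisecting.
    """
    inside = min(a, b) < s < max(a, b)                       # stay inside the bracket
    new_step = abs(s - b)
    enough_progress = new_step < 0.5 * abs(prev_prev_step)   # beat half the step before last
    return inside and enough_progress and new_step > tol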

Mathematical Foundations

Brent's method relies on inverse quadratic interpolation to approximate the root of a continuous function f(x) within a bracketed interval where f(a) and f(b) have opposite signs, ensuring f(a) f(b) \leq 0. When three distinct points a, b, and c are available with known function values, the method fits a quadratic polynomial to the inverse function x = p(y), where y = f(x), satisfying p(f(a)) = a, p(f(b)) = b, and p(f(c)) = c. The root estimate s is then p(0), given in Lagrange form by s = \frac{a f(b) f(c)}{(f(a) - f(b))(f(a) - f(c))} + \frac{b f(a) f(c)}{(f(b) - f(a))(f(b) - f(c))} + \frac{c f(a) f(b)}{(f(c) - f(a))(f(c) - f(b))}, provided the three function values f(a), f(b), and f(c) are pairwise distinct so that no denominator vanishes. If two of the function values coincide, the inverse quadratic step is undefined, and the method reverts to the secant method using two points, say a and b, to compute s = b - f(b) \frac{b - a}{f(b) - f(a)}. This secant approximation assumes f(a) \neq f(b); otherwise the method falls back to bisection, yielding the midpoint m = (a + b)/2. These update rules prioritize higher-order interpolation when it is reliable and degrade gracefully to linear or constant-step approximations. Bracketing is maintained throughout by swapping endpoints if necessary after each step, preserving f(a) f(b) \leq 0 and keeping the point with the smaller absolute function value as the current best approximation b. The method guarantees interval reduction: in the worst case, bisection halves the bracket length, while successful inverse quadratic or secant steps often reduce it more rapidly, achieving superlinear convergence with order approximately 1.839 under suitable conditions. This hybrid approach ensures the bracket shrinks monotonically, with each new interval contained within the previous one, providing reliable containment of the root.
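
The fallback chain described above can be expressed compactly. The helper below is an illustrative sketch (the name trial_point is an assumption) that selects between the three formulas and deliberately omits Brent's acceptance safeguards.
python
def trial_point(a, fa, b, fb, c, fc):
    """Choose a trial point: inverse quadratic interpolation when all three
    function values are distinct, secant when only fa != fb, midpoint otherwise."""
    if fa != fb and fb != fc and fa != fc:
        # Inverse quadratic interpolation (Lagrange form evaluated at y = 0).
        return (a * fb * fc / ((fa - fb) * (fa - fc))
                + b * fa * fc / ((fb - fa) * (fb - fc))
                + c * fa * fb / ((fc - fa) * (fc - fb)))
    if fa != fb:
        # Secant step through (a, fa) and (b, fb).
        return b - fb * (b - a) / (fb - fa)
    # Bisection midpoint as the last resort.
    return 0.5 * (a + b)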

Algorithm Description

Pseudocode

Brent's method requires an initial bracket [a, b] where f(a) and f(b) have opposite signs, ensuring a root lies within the interval. The algorithm proceeds by iteratively narrowing the bracket using a combination of bisection, linear (secant), and inverse quadratic interpolation, with safeguards to maintain bracketing and avoid division by zero. The following pseudocode implements the full algorithm, including bracket shrinking if the initial interval does not satisfy the sign condition. This shrinking is an extension for robustness, as the core method assumes a valid bracket.
pseudocode
function brent(f, a, b, tol, max_iter):
    fa = f(a)
    fb = f(b)
    // Initial swap to ensure |f(b)| <= |f(a)|
    if abs(fa) < abs(fb):
        swap a, b
        swap fa, fb
    if fa * fb > 0:
        // Shrink bracket until sign change occurs
        while fa * fb > 0 and abs(b - a) > tol:
            m = (a + b) / 2
            fm = f(m)
            if abs(fm) < tol:
                return m
            if fa * fm > 0:
                a = m
                fa = fm
            else:
                b = m
                fb = fm
        if fa * fb > 0:
            raise Error("No sign change: f(a) and f(b) have the same sign")
    c = a
    fc = fa
    d = b - a   // current step size
    e = d       // step size from the iteration before last (used in safeguards)
    eps = machine_epsilon()   // unit roundoff of the floating-point type
    for i = 1 to max_iter:
        // Re-establish the contrapoint c so that f(b) and f(c) have opposite signs
        if fb * fc > 0:
            c = a
            fc = fa
            d = b - a
            e = d
        // Keep b as the best approximation: ensure |f(b)| <= |f(c)|
        if abs(fc) < abs(fb):
            a = b
            b = c
            c = a   // the old b becomes the new c
            fa = fb
            fb = fc
            fc = fa
        tol1 = 2 * eps * abs(b) + tol / 2
        xm = (c - b) / 2
        if abs(xm) <= tol1 or fb == 0:
            return b
        if abs(e) >= tol1 and abs(fa) > abs(fb):
            // Attempt interpolation
            s = fb / fa
            if a == c:  // Secant method
                p = 2 * xm * s
                q = 1 - s
            else:  // Inverse quadratic interpolation
                q_temp = fa / fc
                r = fb / fc
                p = s * (2 * xm * q_temp * (q_temp - r) - (b - a) * (r - 1))
                q = (q_temp - 1) * (r - 1) * (s - 1)
            if p > 0:
                q = -q
            p = abs(p)
            min1 = 3 * xm * q - abs(tol1 * q)
            min2 = abs(e * q)
            if 2 * p < min(min1, min2):
                e = d
                d = p / q
            else:
                // Interpolation rejected: fall back to a bisection step toward c
                d = xm
                e = d
        else:
            // Interpolation not attempted: take a bisection step toward c
            d = xm
            e = d
        // Safeguards already incorporated in the min
        a = b
        fa = fb
        if abs(d) > tol1:
            b = b + d
        else:
            b = b + sign(xm) * tol1
        fb = f(b)
    raise Error("Maximum iterations exceeded")
This implementation ensures guaranteed convergence by always maintaining a bracketed root and falling back to bisection when interpolation steps are unsafe or invalid. The inverse quadratic step uses the formula for the inverse function interpolation to select the trial point, with parameters p and q derived from the function values at a, b, and c. The logic aligns with standard references, including initial and ongoing swaps to prioritize the point with smaller |f|.

Step-by-Step Execution

The execution of Brent's method commences with bracket validation to confirm that the initial interval [a, b] contains a root, verified by checking that f(a) and f(b) have opposite signs, ensuring f(a)f(b) < 0 as required by the intermediate value theorem. If this condition fails, the algorithm cannot proceed reliably without adjustment, such as swapping a and b or selecting new endpoints, though typical implementations presuppose a valid bracket. An initial swap ensures |f(b)| ≤ |f(a)|, establishing b as the better approximation.

Point management maintains three key values: a as the contrapoint where f(a) opposes the sign of f(b), b as the current best root approximation with |f(b)| ≤ |f(a)|, and c as the prior value of b, retained to enable interpolation using historical information. After evaluating a new trial point s, updates occur as follows: if f(s) = 0, return s; if f(a)f(s) < 0, the bracket shrinks to [a, s], so a is kept and the new b is s (c becomes the old b); otherwise f(s)f(b) < 0, and the bracket shrinks to [s, b], so the new a is s and b is kept (c becomes the old a). Then, if |f(b)| > |f(a)| after the update, a and b are swapped to restore |f(b)| ≤ |f(a)|. This preserves the sign change across the interval while keeping the point with the smaller function value as the current best guess.

Step selection favors inverse quadratic interpolation when three points a, b, and c with distinct function values are available, as this higher-order approximation accelerates convergence by exploiting curvature information beyond linear methods. If c coincides with a, or two function values are nearly equal so that the quadratic fit is unreliable, the secant method is applied using only a and b for a simpler linear step. Should either interpolation yield an invalid step (for example, one outside the bracket, or one that is too small or too large), the algorithm defaults to bisection at the midpoint m = (a + b)/2, which guarantees interval halving and bracket preservation. Interpolation is attempted only when |f(a)| > |f(b)| and the previous step was of substantial size.

Safeguards reject a proposed step s if it falls outside the protected subinterval [min(a, b) + ε, max(a, b) − ε], mitigating floating-point errors near the endpoints; if |s − b| exceeds half the current interval width |b − a|, preventing overshoots that could lose the bracket; or if |s − b| is disproportionately small relative to prior progress, avoiding negligible advances. In each of these cases bisection replaces the proposed step, ensuring measurable error reduction. These constraints maintain stability and prevent the algorithm from escaping the root-containing region.

Iteration control monitors the previous step size e (initially |b − a|) to detect stagnation; if the prospective step |d| falls below a tolerance relative to the bracket size, the method enforces a minimum-length step, or a bisection, to compel substantial contraction. The process continues until |b − a| ≤ tolerance or |f(b)| ≈ 0, with safeguards such as a maximum iteration cap to halt non-convergent cases. This tracking ensures consistent progress by intervening against repetitive or diminishing steps that fail to localize the root effectively.
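
The bracket-update rule in the preceding paragraphs can be written compactly as follows. This is a simplified sketch with assumed names, covering only the point management; the step-selection and safeguard logic are omitted.
python
def update_bracket(a, fa, b, fb, f, s):
    """Update (a, b, c) after evaluating a trial point s, as described above.
    Returns the new (a, fa, b, fb, c, fc)."""
    fs = f(s)
    if fs == 0:
        return s, fs, s, fs, s, fs          # exact root found
    if fa * fs < 0:
        # Root lies in [a, s]: keep a, s becomes the new best point.
        c, fc = b, fb
        b, fb = s, fs
    else:
        # Root lies in [s, b]: s becomes the new contrapoint.
        c, fc = a, fa
        a, fa = s, fs
    if abs(fb) > abs(fa):                   # restore |f(b)| <= |f(a)|
        a, b, fa, fb = b, a, fb, fa
    return a, fa, b, fb, c, fc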

Analysis and Properties

Convergence Behavior

Brent's method exhibits superlinear convergence for smooth functions, with an order between 1 and 2, approaching the speed of the interpolation-based methods (secant and inverse quadratic) near the root. This behavior arises from its hybrid use of bisection for reliability and higher-order techniques for acceleration once sufficient points are available. The safeguards force a bisection whenever interpolation fails to make adequate progress, so the bracketing interval keeps shrinking and the iteration terminates after finitely many steps. Specifically, reducing the bracket by a factor of 2^{-N} requires at most O(N^2) function evaluations, a quadratic bound relative to the number N of bisection steps that the same tolerance would require. In the worst case the method behaves like bisection with modest overhead, yielding linear convergence, but it typically achieves faster progress through interpolation. Convergence relies on the function f being continuous on the initial interval [a, b] with f(a) and f(b) of opposite signs, which guarantees a root by the intermediate value theorem. The method may converge slowly, or locate a sign-change point that is not a true root, if the function has discontinuities within the interval, or if multiple roots prevent effective interpolation after the initial steps. Asymptotically, during phases dominated by quadratic interpolation, the error satisfies |e_{k+1}| \approx C |e_k|^{\phi}, where \phi \approx 1.839 is the convergence order of inverse quadratic interpolation.

Comparative Performance

Brent's method offers a significant improvement over pure bisection by incorporating interpolation techniques, typically requiring far fewer function evaluations to achieve the same precision while preserving guaranteed convergence within the initial bracketing interval. In contrast to bisection's linear convergence of order 1, Brent's superlinear order of approximately 1.839 in its inverse quadratic phases lets it shrink the error much more rapidly on average, making it substantially faster for most practical functions without sacrificing reliability. Compared to unsafeguarded interpolation schemes such as the secant and false-position methods, Brent's method maintains strict bracketing to prevent divergence or slow convergence in cases where interpolation stagnates, such as functions with flat regions or near-zero derivatives. While the secant method achieves a similar superlinear rate of about 1.618, it lacks bracketing guarantees and can fail or require many iterations on ill-behaved functions; Brent's hybrid approach delivers comparable speed with greater robustness by falling back to bisection when interpolation risks instability. As an evolution of Dekker's method, Brent's algorithm addresses its key pathologies, such as long runs of tiny steps, by using inverse quadratic interpolation and tighter safeguards, often requiring fewer iterations overall. Among modern bracketed variants, Brent's method is comparable in efficiency to Ridders' method, which employs an exponential fit to obtain roughly quadratic behavior per pair of evaluations, but Brent's implementation is more widely adopted owing to its balanced handling of pathological cases. It can outperform Chandrupatla's 1997 hybrid in certain ill-conditioned scenarios, such as those with sharp curvature changes, where Chandrupatla's selective use of interpolation may underperform, though both exhibit strong average performance across test functions. Empirical benchmarks highlight Brent's effectiveness for functions with nearly flat regions or abrupt changes in slope, where it dynamically switches to bisection to avoid the slowdowns seen in pure interpolation methods, often converging in under 15 evaluations for intervals of moderate width. However, for analytic functions where derivatives are readily available, Brent's method is generally slower than derivative-based approaches such as Newton's method, which achieves quadratic convergence (order 2) and typically requires fewer iterations near the root.
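
One simple way to observe the evaluation-count gap is to run SciPy's brentq and bisect on the same bracket and compare their reported statistics. The sample function and the comparison itself are illustrative, not a formal benchmark; brentq and bisect are real SciPy functions.
python
from scipy import optimize

f = lambda x: x**3 - 2*x - 5        # sample function with a root near 2.0945515

for solver in (optimize.brentq, optimize.bisect):
    root, info = solver(f, 2.0, 3.0, xtol=1e-12, full_output=True)
    print(solver.__name__, root,
          "iterations:", info.iterations,
          "function calls:", info.function_calls)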

Practical Aspects

Numerical Example

To illustrate the operation of Brent's method, consider the function f(x) = (x - 1)[1 + (x - 1)^2], which has a single real root at x = 1. The initial bracketing interval is [a, b] = [0, 3], where f(0) = -2 < 0 and f(3) = 10 > 0. The method proceeds by combining bisection, secant, and inverse quadratic interpolation steps, with a tolerance of 10^{-5}. The table below shows the key iterates, where the column b is the current best approximation (here the left endpoint of the bracket, with f(b) < 0), f(b) is the function value there, and c is the auxiliary point used for interpolation (initially the right endpoint 3.0). The process converges to the root after 6 iterations, with the final approximation b \approx 1 and |f(b)| < 10^{-7}. A seventh, illustrative iteration shows c updating near the root and the bracket collapsing.
Iteration n          b           f(b)            c
0                    0.0         -2.00 × 10⁰     3.0
1                    0.5         -6.25 × 10⁻¹    3.0
2                    0.7139038   -3.10 × 10⁻¹    3.0
3                    0.9154507   -8.52 × 10⁻²    3.0
4                    0.9901779   -9.82 × 10⁻³    3.0
5                    0.9998567   -1.43 × 10⁻⁴    3.0
6                    0.9999999   -1.19 × 10⁻⁷    3.0
7 (illustrative)     0.9999999   -1.19 × 10⁻⁷    1.000010
In this example, Brent's method predominantly employs interpolation steps, as evidenced by the auxiliary point c remaining fixed at the initial right endpoint 3 for the first six iterations, allowing efficient reduction via secant or inverse quadratic approximations. Bisection is used sparingly, only when the interpolation conditions are not met, reflecting the hybrid nature of the algorithm. The bracketing interval shrinks gradually from an initial length of 3 to about 2 over the first six iterations, and then collapses to roughly 10^{-5} in the final illustrative step when c updates near the root, demonstrating the rapid endgame of the method. A plot of f(x) over [0, 3] would show a cubic that starts negative at x = 0, crosses zero at x = 1, and rises positively thereafter; the iterates approach the root from below, clustering near x = 1 as convergence occurs.
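
The final result of this example can be reproduced with SciPy's brentq; the call below is illustrative (the library does not expose the intermediate iterates, so only the converged root and the iteration count are reported).
python
from scipy.optimize import brentq

f = lambda x: (x - 1.0) * (1.0 + (x - 1.0)**2)   # root at x = 1

root, info = brentq(f, 0.0, 3.0, xtol=1e-5, full_output=True)
print(root, f(root), info.iterations, info.function_calls)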

Software Implementations

Brent's method is implemented in several widely used numerical libraries across programming languages, providing robust tools for one-dimensional root-finding in scientific and engineering applications.

In Python, the SciPy library offers the scipy.optimize.brentq function, which implements Brent's method for finding roots of scalar functions within a bracketing interval where the function changes sign. This function accepts a callable objective f along with initial bounds a and b, and by default uses an absolute tolerance of xtol=2e-12 and a relative tolerance of approximately 1e-15, ensuring high precision for most applications. The maximum number of iterations is set to 100 by default, which can be raised for functions requiring more steps.

The GNU Scientific Library (GSL) for C and C++ includes gsl_root_fsolver_brent, a solver that applies Brent's method to bracketed root-finding problems. Documented in the GSL 2.8 manual, this implementation supports user-defined functions and provides iteration controls for convergence monitoring. It is particularly valued in C and C++ codebases for its efficiency and integration with other GSL numerical routines.

MATLAB's fzero function employs Brent's method as its core algorithm when a bracketing interval is provided, combining bisection, the secant method, and inverse quadratic interpolation for reliable convergence. Available since early releases and refined in version R2016b for improved performance, fzero handles scalar functions efficiently, making it suitable for optimization workflows in large-scale simulations.

Other languages feature dedicated implementations as well. In Julia, the Roots.jl package provides the Brent() solver via find_zero(f, (a, b), Brent()), with options for tolerances and iteration limits. For Rust, the roots crate includes find_root_brent, a pure-Rust implementation that requires an initial bracket and is designed for numerical reliability without external dependencies. Brent's method is also commonly integrated into embedded systems libraries, such as those in microcontroller firmware, because its derivative-free nature and guaranteed convergence suit resource-constrained settings.

When using these implementations, best practices include raising the maximum iteration count above 100 for potentially ill-conditioned functions to avoid premature termination, and incorporating a preliminary search, such as a coarse grid scan over the domain, to establish an interval with a sign change when the initial bounds do not guarantee one. These steps enhance robustness across diverse applications, from physical simulation to control systems.
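
A short usage sketch for the SciPy interface follows; the function g and the specific tolerance values are illustrative choices, while xtol, rtol, maxiter, and full_output are documented brentq parameters.
python
from scipy.optimize import brentq

def g(x):
    return x * x - 2.0            # root at sqrt(2)

# Tighten tolerances and raise the iteration cap explicitly; full_output=True
# returns a RootResults object with convergence details.
root, res = brentq(g, 0.0, 2.0, xtol=1e-14, rtol=1e-12, maxiter=200, full_output=True)
print(root, res.converged, res.iterations, res.function_calls)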
