
Secant method

The secant method is an iterative numerical algorithm for approximating the roots of an equation f(x) = 0, where f is a continuous real-valued function, by successively refining estimates using secant lines, which are chords connecting pairs of points on the function's graph to extrapolate the root's location. Unlike the Newton-Raphson method, it does not require the computation of derivatives, relying instead solely on function evaluations, making it particularly advantageous when derivative information is unavailable or expensive to obtain. The algorithm initiates with two distinct initial guesses, x_0 and x_1, and proceeds iteratively according to the recurrence
x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})},
where x_{n+1} is the x-intercept of the secant line passing through the points (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n)). This geometric interpretation leverages similar triangles to approximate the function's behavior linearly between points. Iterations continue until the change in successive approximations falls below a predefined tolerance, such as the approximate percent relative error \epsilon_a = 100 \left| \frac{x_{n+1} - x_n}{x_{n+1}} \right| dropping below the desired precision. The method is classified as an open method, meaning the initial guesses need not bracket the root, though convergence is not guaranteed without appropriate starting values.
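A minimal sketch of this recurrence in Python; the sample function, starting guesses, and the 0.5 percent stopping threshold are illustrative choices, not part of the method's definition:

```python
def secant_sketch(f, x0, x1, tol_percent=0.5, max_iter=50):
    """Secant iteration with the approximate percent relative-error stop."""
    for _ in range(max_iter):
        # x-intercept of the chord through (x0, f(x0)) and (x1, f(x1))
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if 100 * abs((x2 - x1) / x2) < tol_percent:  # epsilon_a below tolerance
            return x2
        x0, x1 = x1, x2
    return x1

print(secant_sketch(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421
```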
Convergence of the secant method is superlinear, achieving an asymptotic order of \phi \approx 1.618, the golden ratio, which arises from the characteristic equation of its error recurrence and is rigorously established for functions that are twice continuously differentiable near a simple root with initial approximations sufficiently close. This rate surpasses the linear convergence of methods like the bisection method but is inferior to the quadratic order of Newton's method; however, the secant method often proves more efficient in practice due to requiring only one new function evaluation per iteration compared to two for Newton's method (the function and its derivative). Limitations include potential divergence or slow convergence if initial guesses are poorly chosen or if the function exhibits inflection points near the root. Historically, the secant method originates from the ancient Egyptian "Rule of Double False Position" documented in the Rhind Papyrus circa 1650 B.C., initially applied to linear equations, and evolved through Chinese (Nine Chapters on the Mathematical Art, ca. 200 B.C.), Arab (Al-Khwarizmi, 9th century A.D.), and European developments, including iterative extensions by Cardano in 1545 that enabled its use for nonlinear problems. The modern terminology "secant method" and formal analysis of its convergence order were established in the mid-20th century by T.A. Jeeves in 1958. Today, it remains a foundational tool in numerical analysis, with extensions like the Regula Falsi variant ensuring bracketing and modern quasi-Newton generalizations for systems of equations.

Introduction and Background

Overview of the Method

The secant method is an iterative quasi-Newton algorithm designed to find roots of univariate functions f(x) = 0. It approximates the root by successively generating better estimates through linear interpolation between pairs of points on the function graph, making it particularly useful for numerical root-finding problems where explicit derivative information is unavailable or costly to compute. At its core, the method relies on secant lines—straight-line approximations connecting two points on the curve—to predict the next root estimate, refining the approximation iteratively as the points converge toward the actual root. Unlike methods that require only a single initial guess, the secant method demands two starting points, x_0 and x_1, which are typically chosen close to the expected root location to promote efficient progress. This dual-initial approach allows the algorithm to proceed without evaluating derivatives, relying solely on function values, which enhances its applicability to smooth, continuous functions where the behavior near the root is sufficiently well-behaved. The secant method demonstrates efficiency in practice for such functions, often achieving superlinear convergence rates that outperform purely linear methods while avoiding the computational overhead of derivative calculations seen in Newton's method. It is closely related to the method of false position (also known as regula falsi), which shares the secant line concept but incorporates bracketing to guarantee containment of the root within an interval, whereas the standard secant method operates more openly without this constraint.

Historical Development

The secant method traces its roots to ancient numerical techniques for solving equations, particularly the method of false position, which approximates roots by linearly interpolating between two points where the function changes sign. This approach is documented in Babylonian clay tablets dating to approximately 2000 BCE, where it was applied to linear equations in practical contexts such as land measurement and commerce. Similarly, the Egyptian Rhind Papyrus, composed around 1650 BCE (though based on earlier material from 2000–1800 BCE), describes the Rule of Double False Position for solving linear problems, marking an early systematic use of secant-like interpolation. During the medieval and Renaissance periods, the method spread and evolved through cultural exchanges. In ancient China around 200 BCE, it appeared as "ying bu zu shu" in the text Nine Chapters on the Mathematical Art, used for solving linear systems. By the 9th century CE in the Arab world, mathematicians like Abu Jafar Mohammad ibn-Musa al-Khwarizmi and Abu Kamil Shuja employed it under the name "Hisab al Khata’ayn" for algebraic equations. In Europe, Fibonacci introduced it in 1202 in Liber Abaci as "elchataym," derived from Arabic sources. The 15th and 16th centuries saw further refinements: Nicolas Chuquet named it the "Rule of Two False Positions" in 1484, Petrus Apianus termed it "Regula Falsi" in 1527, and Girolamo Cardano made it explicitly iterative in his 1545 work Ars Magna, extending its application to nonlinear equations. In the 17th century, Isaac Newton refined a similar interpolation technique in his unpublished 1665 manuscript The Waste Book, using finite differences to approximate roots without derivatives, though he did not name it the secant method. The 19th century brought formalization within emerging numerical analysis texts, where secant iteration gained prominence as a practical alternative to derivative-based methods, appearing in works by European mathematicians building on Cardano's iterative framework. By the 20th century, the method received its modern nomenclature in 1958 when T.A. Jeeves explicitly called it the "secant method" and analyzed its convergence rate, approximately the golden ratio (1.618). In the 1960s, it was recognized as a quasi-Newton approach, with C.G. Broyden extending it to multivariable systems in his 1965 paper, influencing optimization algorithms. The secant method's derivative-free property has sustained its relevance in contemporary computing, where evaluating derivatives is costly or infeasible, making it a staple in software libraries for root-finding despite the rise of more advanced techniques.

Algorithm and Derivation

Step-by-Step Procedure

The secant method is an iterative root-finding algorithm for solving f(x) = 0, where f is a continuous function, requiring two initial approximations x_0 and x_1 close to the root. A tolerance \epsilon > 0 defines the desired precision, and a maximum iteration count N guards against non-convergence. The core iteration, for n \geq 2, updates the approximation using the secant line through the points (x_{n-2}, f(x_{n-2})) and (x_{n-1}, f(x_{n-1})): x_n = x_{n-1} - f(x_{n-1}) \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})} This formula approximates the derivative f'(x_{n-1}) via the difference quotient \frac{f(x_{n-1}) - f(x_{n-2})}{x_{n-1} - x_{n-2}}. Convergence is checked after each iteration using criteria such as |x_n - x_{n-1}| < \epsilon (change in approximation) or |f(x_n)| < \epsilon (residual), whichever is appropriate for the problem; if neither holds and n < N, the loop continues, otherwise the process stops with x_n as the approximate root. The following pseudocode outlines the procedure:
INPUT: function f, initial x0, x1, tolerance ε, maximum iterations N
OUTPUT: approximate root or message

Set i = 1
Set xprev = x0, xcurr = x1

WHILE i < N DO
    IF |f(xcurr) - f(xprev)| < δ (small threshold, e.g., machine epsilon) THEN
        OUTPUT "Division by near-zero denominator; perturb initial guesses and restart"
        STOP
    END IF

    Compute xnext = xcurr - f(xcurr) * (xcurr - xprev) / (f(xcurr) - f(xprev))

    IF |xnext - xcurr| < ε OR |f(xnext)| < ε THEN
        OUTPUT xnext
        STOP
    END IF

    Set i = i + 1
    Set xprev = xcurr
    Set xcurr = xnext
END WHILE

OUTPUT "Maximum iterations reached; approximate root xcurr"
This implementation keeps only the two most recent approximations, with the threshold \delta used to detect near-zero denominators. An algebraically equivalent form of the update formula is: x_n = \frac{x_{n-2} f(x_{n-1}) - x_{n-1} f(x_{n-2})}{f(x_{n-1}) - f(x_{n-2})} Although compact, this form is prone to catastrophic cancellation in the numerator when x_{n-1} \approx x_{n-2} and the function values are close, so the correction form above is generally preferred in floating-point arithmetic. If f(x_{n-1}) = f(x_{n-2}), the denominator is zero, causing failure; a practical safeguard is to slightly perturb one approximation (e.g., add a small multiple of machine epsilon to x_{n-1}) and resume, or fall back to a bracketing method like bisection if the two function values have opposite signs. A sketch of such a safeguarded step appears below.
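A minimal Python sketch of one safeguarded secant step following these rules; the helper name, the \delta value, and the fallback policy are illustrative assumptions rather than a standard interface:

```python
import math

def safeguarded_secant_step(f, x_prev, x_curr, delta=1e-15):
    """One secant update with a near-zero-denominator safeguard."""
    f_prev, f_curr = f(x_prev), f(x_curr)
    if abs(f_curr - f_prev) < delta:
        if f_prev * f_curr < 0:
            # A sign change exists: fall back to one bisection step.
            return 0.5 * (x_prev + x_curr)
        # Otherwise perturb one approximation slightly and retry once.
        x_curr += max(abs(x_curr), 1.0) * math.ulp(1.0)
        f_curr = f(x_curr)
        if abs(f_curr - f_prev) < delta:
            raise ArithmeticError("secant step failed: flat function values")
    return x_curr - f_curr * (x_curr - x_prev) / (f_curr - f_prev)
```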

Mathematical Derivation

The secant method derives from approximating the function f(x) near a root using a straight line, known as the secant line, that passes through two points on the graph of f: (x_{n-2}, f(x_{n-2})) and (x_{n-1}, f(x_{n-1})). This geometric approach assumes the function is sufficiently smooth near the root, allowing the linear interpolation to provide a reasonable estimate of the root's location where the line intersects the x-axis. The equation of the secant line is given by f(x) \approx \frac{f(x_{n-1}) - f(x_{n-2})}{x_{n-1} - x_{n-2}} (x - x_{n-2}) + f(x_{n-2}), which represents the linear function interpolating the two points. To find the next approximation x_n, set this expression equal to zero and solve for x: 0 = \frac{f(x_{n-1}) - f(x_{n-2})}{x_{n-1} - x_{n-2}} (x_n - x_{n-2}) + f(x_{n-2}). Rearranging yields the iterative formula x_n = x_{n-1} - \frac{f(x_{n-1}) (x_{n-1} - x_{n-2})}{f(x_{n-1}) - f(x_{n-2})}. This form arises directly from the geometry of the secant line and assumes f is differentiable with f'(r) \neq 0 at the root r, ensuring the slope is nonzero and the approximation is valid. An equivalent algebraic rearrangement is the fractional form: x_n = \frac{x_{n-2} f(x_{n-1}) - x_{n-1} f(x_{n-2})}{f(x_{n-1}) - f(x_{n-2})}. The two forms are identical in exact arithmetic, though the correction form is usually preferred in floating point because the fractional form subtracts nearly equal products as the iterates converge. The secant slope \frac{f(x_{n-1}) - f(x_{n-2})}{x_{n-1} - x_{n-2}} serves as a finite-difference approximation to the derivative f'(x_{n-1}), linking the method to Newton's method with the derivative replaced by this quotient. This approximation holds under the differentiability assumption, enabling the secant method to mimic derivative-based updates using only function evaluations.
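The algebra can be verified symbolically. This sketch, assuming the SymPy library, solves the secant-line equation for its x-intercept and checks that both rearrangements agree (here x1, f1 stand for x_{n-1}, f(x_{n-1}) and x2, f2 for x_{n-2}, f(x_{n-2})):

```python
import sympy as sp

x, x1, x2, f1, f2 = sp.symbols('x x1 x2 f1 f2')

# Secant line through (x2, f2) and (x1, f1), set to zero and solved for x.
line = (f1 - f2) / (x1 - x2) * (x - x2) + f2
intercept = sp.solve(sp.Eq(line, 0), x)[0]

correction_form = x1 - f1 * (x1 - x2) / (f1 - f2)
fractional_form = (x2 * f1 - x1 * f2) / (f1 - f2)

assert sp.simplify(intercept - correction_form) == 0
assert sp.simplify(intercept - fractional_form) == 0
print("both forms equal the secant-line x-intercept")
```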

Theoretical Properties

Convergence Behavior

The secant method exhibits local convergence to a simple root \alpha of a function f under the assumptions that f is continuous, f'(\alpha) \neq 0, and the initial guesses x_0 and x_1 are sufficiently close to \alpha. Specifically, if f is twice continuously differentiable in a neighborhood of \alpha with |f'(x)| \geq C_1 > 0 and |f''(x)| \leq C_2 near \alpha, the sequence \{x_n\} converges to \alpha. The method achieves superlinear convergence with asymptotic order \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618, known as the golden ratio, which arises from the characteristic equation p^2 - p - 1 = 0 of the error recurrence. This order is derived from the relation e_{n+1} \approx C e_n e_{n-1}, where e_k = x_k - \alpha and C depends on f'(\alpha) and f''(\alpha): postulating |e_{n+1}| \approx A |e_n|^p and substituting gives p = 1 + 1/p, whose unique positive solution is p = \phi, so that asymptotically |e_{n+1}| \approx |C|^{\phi - 1} |e_n|^\phi. There is no global convergence guarantee for the secant method, as its behavior strongly depends on the choice of initial points; poor selections can lead to divergence, particularly if the function is flat or oscillatory near the root. In terms of iteration counts, it typically requires fewer steps than the bisection method (order 1) but more than Newton's method (order 2) for well-behaved functions, while using only function evaluations. The convergence speed is influenced by the function's properties, accelerating for smooth functions where secant lines provide good local approximations, but slowing near inflection points or flat regions where the linear model may falter. The order can be estimated empirically, as in the sketch below.
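A short experiment, assuming the mpmath library for extended precision, estimates the observed order p_n = \log(e_{n+1}/e_n) / \log(e_n/e_{n-1}) for the illustrative choice f(x) = x^2 - 2; the printed values drift toward \phi \approx 1.618:

```python
from mpmath import mp, mpf, sqrt, log

mp.dps = 60  # enough digits to watch several superlinear steps
f = lambda x: x * x - 2
alpha = sqrt(mpf(2))

xs = [mpf(1), mpf(2)]
for _ in range(8):
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

errs = [abs(x - alpha) for x in xs]
for n in range(2, len(errs) - 1):
    p = log(errs[n + 1] / errs[n]) / log(errs[n] / errs[n - 1])
    print(n, p)  # approaches the golden ratio 1.618...
```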

Error Analysis

The error analysis of the secant method relies on Taylor expansions to derive the propagation of errors in the iterates. Let e_n = x_n - \alpha, where \alpha is a simple root of f(x) = 0 with f'(\alpha) \neq 0 and f twice continuously differentiable near \alpha. Expanding f(x_n) and f(x_{n-1}) around \alpha, and substituting into the secant update formula x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}, yields the error recurrence e_{n+1} = \frac{f''(\xi_n)}{2 f'(\alpha)} e_n e_{n-1} + O(e_n^2 e_{n-1} + e_n e_{n-1}^2) for some \xi_n between the iterates, neglecting higher-order terms. From this recurrence, the asymptotic error constant for a simple root emerges as \lim_{n \to \infty} \frac{e_{n+1}}{e_n e_{n-1}} = \frac{f''(\alpha)}{2 f'(\alpha)}, so that asymptotically |e_{n+1}| \approx \left| \frac{f''(\alpha)}{2 f'(\alpha)} \right|^{\phi - 1} |e_n|^\phi, which links the error growth to the second derivative via the Taylor expansion and characterizes the superlinear convergence rate of order \phi = (1 + \sqrt{5})/2 \approx 1.618. In the worst case, the error bound satisfies |e_{n+1}| \leq K |e_n| |e_{n-1}|, where K = \frac{1}{2} \max |f''(x)| / \min |f'(x)| over a neighborhood of \alpha. In the bracketing variant known as the false position method, which maintains an interval containing the root, error bounds are tighter due to the guaranteed monotonic reduction in interval length, though convergence is typically linear rather than superlinear. The method exhibits sensitivity to initial conditions, particularly when |f(x_0)/f(x_1)| is extreme, as this amplifies the step size in early iterations and can lead to overshoot or slow convergence if the secant slope poorly approximates f'(\alpha). Computational errors arise primarily from round-off in the slope calculation [f(x_n) - f(x_{n-1})] / (x_n - x_{n-1}), introducing perturbations of order O(\epsilon_{\text{machine}}) that propagate through the recurrence and degrade accuracy in finite-precision arithmetic. To monitor convergence reliably, practitioners should track |f(x_n)| as the primary indicator of proximity to the root, supplemented by relative changes in x_n, rather than absolute differences between successive iterates, which may mislead due to the method's irregular step sizes. The limiting ratio above can be observed numerically, as in the sketch below.
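For the illustrative choice f(x) = x^2 - 2, where f''(\alpha)/(2 f'(\alpha)) = 1/(2\sqrt{2}) \approx 0.3536, the following sketch prints the ratios e_{n+1}/(e_n e_{n-1}) in double precision:

```python
import math

f = lambda x: x * x - 2
alpha = math.sqrt(2)

xs = [1.0, 2.0]
for _ in range(6):
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

errs = [x - alpha for x in xs]
for n in range(1, len(errs) - 1):
    if abs(errs[n] * errs[n - 1]) > 1e-12:  # skip round-off-dominated steps
        print(n, errs[n + 1] / (errs[n] * errs[n - 1]))  # tends to ~0.3536
```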

Comparisons and Extensions

Comparison to Other Root-Finding Algorithms

The secant method offers a derivative-free alternative to Newton's method, which relies on computing the function's derivative at each iteration, often costly for complex or implicitly defined functions. While Newton's method achieves quadratic convergence (order 2), enabling it to roughly double the number of correct digits per iteration, the secant method converges superlinearly with order \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618, requiring more iterations to reach the same precision but avoiding derivative evaluations altogether. This trade-off makes the secant method preferable when derivatives are expensive to obtain or unavailable, though Newton's superior local efficiency often leads to fewer total function evaluations in practice if derivatives are readily computable. In contrast to the bisection method, which guarantees monotonic convergence by repeatedly halving an initial interval containing the root, the secant method lacks this bracketing guarantee and can diverge if the initial guesses are poorly placed. Bisection's linear convergence is reliable but slow, requiring about n iterations to reduce the error interval by a factor of 2^{-n}, whereas the secant method's superlinear rate allows faster progress without enforced bracketing. Thus, bisection suits scenarios demanding assured convergence, such as when the root's location is uncertain, while the secant method prioritizes speed at the risk of non-convergence. The secant method and the false position method (regula falsi) both approximate the root using secant lines between two points, but the false position method maintains a bracketing interval by always retaining two points whose function values have opposite signs, ensuring reliable convergence similar to bisection. This one-sided update often results in slower, linear convergence for the false position method, especially near roots where one endpoint stagnates, compared to the secant method's superlinear order of approximately 1.618. The secant method's freedom to update both points enables faster progress but without the safety net of bracketing, making false position more robust for functions with sign changes. Brent's method hybridizes the secant method's interpolation with bisection's bracketing and inverse quadratic interpolation for enhanced robustness, guaranteeing convergence within a bounded interval while achieving superlinear rates comparable to or better than the secant method alone. Although more complex to implement, Brent's approach mitigates the secant method's vulnerability to divergence and overflow issues, combining the speed of secant updates with bisection's safety when needed. The secant method remains simpler and sufficient for well-behaved functions where initial guesses are reasonable. In terms of efficiency, each secant iteration costs one new function evaluation (after the initial two), while each Newton iteration costs one function and one derivative evaluation; when the derivative is as expensive as the function, the secant method's efficiency index of \phi \approx 1.618 per evaluation exceeds Newton's 2^{1/2} \approx 1.414, so the secant method typically needs fewer total evaluations despite taking more steps, a comparison that reverses when the derivative is cheap to compute. The secant method is ideal for problems where derivative evaluation is prohibitively expensive, such as in large-scale simulations or when the function is given by a black-box model, but it performs poorly for non-monotonic functions or those with multiple roots, where divergence risks increase without safeguards; a small illustrative benchmark follows.
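A toy benchmark along these lines, with f(x) = \cos x - x, a residual tolerance of 10^{-12}, and the convention of counting a derivative as one evaluation, all chosen for illustration:

```python
import math

f = lambda x: math.cos(x) - x
df = lambda x: -math.sin(x) - 1  # used by Newton only
TOL = 1e-12

def bisection(a, b):
    fa, calls = f(a), 1
    x = a
    for n in range(200):
        x = 0.5 * (a + b)
        fx = f(x); calls += 1
        if abs(fx) < TOL:
            break
        if fa * fx < 0:
            b = x
        else:
            a, fa = x, fx
    return x, n, calls

def secant(x0, x1):
    f0, f1, calls = f(x0), f(x1), 2
    for n in range(100):
        if abs(f1) < TOL:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1); calls += 1
    return x1, n, calls

def newton(x):
    calls = 0
    for n in range(100):
        fx = f(x); calls += 1
        if abs(fx) < TOL:
            break
        x -= fx / df(x); calls += 1  # derivative counted as one call
    return x, n, calls

for name, res in [("bisection", bisection(0.0, 1.0)),
                  ("secant", secant(0.0, 1.0)),
                  ("newton", newton(1.0))]:
    print(f"{name:9s} root={res[0]:.12f} iters={res[1]} evals={res[2]}")
```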

Generalizations and Variants

The secant method extends to multi-dimensional root-finding problems for solving systems of nonlinear equations F(\mathbf{x}) = \mathbf{0}, where \mathbf{x} \in \mathbb{R}^n. A key generalization is Broyden's method, a quasi-Newton approach that maintains an approximation J_k to the Jacobian while satisfying the multi-dimensional secant condition J_{k+1} (\mathbf{x}_{k+1} - \mathbf{x}_k) = F(\mathbf{x}_{k+1}) - F(\mathbf{x}_k). This update uses a rank-one modification to the previous Jacobian estimate, ensuring superlinear convergence under suitable conditions without requiring explicit Jacobian evaluations at each step; a sketch appears below. The method was introduced as part of a class of updates for nonlinear systems, balancing computational efficiency and accuracy. Another variant, the modified secant method, addresses cases where the standard secant requires careful initial point selection or when the derivative is poorly behaved. It incorporates a small fixed perturbation h to approximate the derivative via finite differences, yielding the iteration x_{n+1} = x_n - \frac{f(x_n) h}{f(x_n + h) - f(x_n)}. This formulation requires only one initial guess and the perturbation parameter, making it robust for functions where derivative computation is unstable or unavailable, though the choice of h (typically small, like 10^{-8}) influences accuracy and speed. The approach can reduce iteration counts compared to the basic secant method in some scenarios by improving the reliability of the slope approximation. Steffensen's method serves as an acceleration of secant-type iteration, achieving quadratic convergence without derivatives by applying Aitken's \Delta^2 extrapolation to the iterates. Starting from an initial approximation x_0, it computes auxiliary points to estimate the error and refine the next iterate, effectively mimicking Newton's quadratic convergence while using only function evaluations, with each step requiring two function evaluations. Originally developed for accelerating fixed-point iteration, it has been adapted for root-finding to overcome the secant method's sub-quadratic rate. In optimization contexts, the inverse secant method applies the secant approximation to the inverse function, facilitating root-finding for inverse problems such as determining parameters where a function attains a target value. This is particularly useful in optimization algorithms or parameter estimation, where solving x = g(y) (with g as the inverse) avoids direct inversion and leverages secant updates for efficiency. The method maintains the derivative-free nature of the secant method while adapting to optimization subproblems. Post-2020 developments include hybrid variants combining the secant method with other techniques for improved robustness, such as blending with trisection or false position methods to handle poor initial guesses and ensure convergence. These hybrids, implemented in libraries like SciPy's optimize module, incorporate adaptive strategies for bracketing or step control to mitigate divergence in challenging cases. For instance, blended algorithms outperform standalone secant iteration in reliability for transcendental equations. A notable limitation of these generalizations, particularly in multi-dimensional settings like Broyden's method, is the storage requirement for maintaining the full Jacobian approximation, which scales as O(n^2) for n variables, potentially becoming prohibitive for large-scale systems without sparse adaptations. This contrasts with the O(1) storage of the univariate secant method.
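A compact sketch of Broyden's method for a 2-by-2 system, assuming NumPy; the test system, the identity initial Jacobian, and the tolerances are illustrative choices:

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Broyden's method: rank-one updates enforcing J_new @ s = y."""
    x = np.asarray(x0, dtype=float)
    J = np.eye(len(x))            # crude initial Jacobian estimate
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(J, -Fx)   # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        if np.linalg.norm(F_new) < tol:
            return x_new
        y = F_new - Fx
        # Rank-one update satisfying the secant condition J @ s = y.
        J += np.outer(y - J @ s, s) / (s @ s)
        x, Fx = x_new, F_new
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the hyperbola x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
print(broyden(F, [2.0, 0.5]))  # ~[1.9319, 0.5176]
```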

Applications and Implementation

Numerical Example

To illustrate the secant method, consider finding the positive root of the equation f(x) = x^2 - 2 = 0, which is \sqrt{2} \approx 1.414213562. Select initial approximations x_0 = 1 and x_1 = 2. The first iterate is computed using the secant formula: x_2 = x_1 - f(x_1) \frac{x_1 - x_0}{f(x_1) - f(x_0)} = 2 - 2 \cdot \frac{2 - 1}{2 - (-1)} = 2 - \frac{2}{3} = \frac{4}{3} \approx 1.3333. Subsequent iterates follow the same formula, replacing the prior two points. The table below shows the first five iterations, including the function values and absolute differences between consecutive approximations for monitoring convergence.

| n | x_n | f(x_n) | |x_n - x_{n-1}| |
|----|-------------|--------------|---------------------|
| 0 | 1.0000 | -1.0000 | - |
| 1 | 2.0000 | 2.0000 | 1.0000 |
| 2 | 1.3333 | -0.2222 | 0.6667 |
| 3 | 1.4000 | -0.0400 | 0.0667 |
| 4 | 1.4146 | 0.0012 | 0.0146 |
| 5 | 1.4142 | -0.0000 | 0.0004 |

The approximations converge to the root \sqrt{2}, reaching an error of about 2 \times 10^{-6} (well below 10^{-4}) after four iterations. Geometrically, each step draws a secant line between (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n)), with the x-intercept becoming the next approximation; the successive secants hug the curve ever more tightly near x \approx 1.4142. As an alternative example, apply the method to f(x) = \cos x - x = 0, whose root is the Dottie number D \approx 0.7390851332. Using initial guesses x_0 = 0 and x_1 = 1, the first iterate is x_2 = 1 - f(1) \frac{1 - 0}{f(1) - f(0)} \approx 1 - (-0.4597) \cdot \frac{1}{-0.4597 - 1} \approx 0.6851. Continuing, the sequence is x_3 \approx 0.7363, x_4 \approx 0.7391, x_5 \approx 0.739085, converging to D within 10^{-4} by x_4 and demonstrating the method's effectiveness for transcendental equations where the root lies between initial guesses of opposite sign. The first table can be reproduced with the short script below.
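A few lines of Python regenerate the table; the formatting and iteration count are arbitrary choices:

```python
f = lambda x: x * x - 2

x_prev, x_curr = 1.0, 2.0
print(f"{0:2d} {x_prev:9.4f} {f(x_prev):9.4f}         -")
print(f"{1:2d} {x_curr:9.4f} {f(x_curr):9.4f} {abs(x_curr - x_prev):9.4f}")
for n in range(2, 6):
    x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
    print(f"{n:2d} {x_next:9.4f} {f(x_next):9.4f} {abs(x_next - x_curr):9.4f}")
    x_prev, x_curr = x_curr, x_next
```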

Practical Implementation Guidelines

When implementing the secant method in programming environments, use double-precision floating-point arithmetic to minimize rounding errors, as single precision may lead to insufficient accuracy for functions with steep gradients or near-zero values. Additionally, prefer the iterative formula x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} over the algebraically equivalent form x_{n+1} = \frac{x_{n-1} f(x_n) - x_n f(x_{n-1})}{f(x_n) - f(x_{n-1})}, as the former behaves better near convergence by damping updates when f(x_n) is small, even if the difference quotient is imprecise. To handle potential division by zero or near-zero denominators, implement a safeguard that checks whether |f(x_n) - f(x_{n-1})| < \epsilon (e.g., \epsilon = 10^{-15}), in which case the iteration should terminate with a warning or failure message to avoid instability. If exact equality occurs due to floating-point limitations, a small perturbation can be applied to one iterate, such as shifting x_n by 10^{-6} |x_n|, before recomputing f(x_n), though this is typically combined with a maximum iteration limit to prevent infinite loops. For efficiency, especially when solving for multiple roots or evaluating over arrays, vectorize the function evaluations using libraries like NumPy to reduce overhead from repeated scalar calls. Integration with established numerical libraries is also an option, such as SciPy's root_scalar with method='secant', which handles the initial iterates and tolerances internally for robust performance. A simple Python implementation can be structured as follows, using a loop to update iterates until convergence:
```python
def secant(f, x0, x1, tol=1e-6, maxiter=100):
    """
    Secant method for root finding.
    :param f: function to solve f(x) = 0
    :param x0, x1: initial guesses
    :param tol: tolerance for convergence
    :param maxiter: maximum iterations
    :return: approximate root or None if failed
    """
    fx0 = f(x0)
    fx1 = f(x1)
    for i in range(maxiter):
        denom = fx1 - fx0
        if abs(denom) < 1e-15:
            # Safeguard for small denominator
            x1 += 1e-6 * abs(x1)  # Perturb
            fx1 = f(x1)
            denom = fx1 - fx0
            if abs(denom) < 1e-15:
                return None  # Fail
        x2 = x1 - fx1 * (x1 - x0) / denom
        fx2 = f(x2)
        if abs(fx2) < tol:
            return x2
        x0, fx0 = x1, fx1
        x1, fx1 = x2, fx2
    return None  # Max iterations reached
```

This code incorporates the preferred update formula and a basic safeguard, with convergence checked on |f(x)|. Common pitfalls include overflow or underflow in function evaluations over large domains, which can be mitigated by scaling the problem or using logarithmic representations where appropriate. Slow or erratic convergence often occurs if the initial guesses x_0 and x_1 are far from the root; in such cases, employ a hybrid approach combining the secant method with the bisection method for guaranteed bracketing, as in Brent's algorithm, which switches to bisection when secant steps leave the current interval. For testing, apply the implementation to the examples from the Numerical Example section, monitoring the number of function evaluations as a benchmark: typically 5-10 for well-conditioned problems with good initial guesses, compared to 20+ for bisection alone. In modern environments as of 2025, the method is also available off the shelf: Julia's Roots.jl package includes a secant solver for scalar functions, and MATLAB's fzero employs a hybrid strategy combining secant, bisection, and inverse quadratic interpolation for robust scalar root finding.
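For comparison, a minimal call to SciPy's root_scalar driver mentioned above, assuming SciPy is installed:

```python
from scipy.optimize import root_scalar

sol = root_scalar(lambda x: x**2 - 2, method='secant', x0=1.0, x1=2.0)
print(sol.root, sol.iterations, sol.function_calls)  # ~1.414213562...
```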
