
Householder's method

Householder's methods are a class of higher-order root-finding algorithms for solving nonlinear equations of the form f(x) = 0, where f is a function of one real variable with continuous derivatives up to order d + 1. These methods generate iterative sequences that converge to a root with order d + 1, offering faster convergence than Newton's method (order 2) for sufficiently smooth functions, at the cost of evaluating higher derivatives or equivalent computations. Introduced by Alston S. Householder in his 1970 book The Numerical Treatment of a Single Nonlinear Equation, the methods are derived using Padé approximants to the reciprocal function 1/f, extending classical techniques such as Newton's method (order 2) and Halley's method (order 3). The general iteration for the method of order d is given by x_{n+1} = x_n + d \frac{ (1/f)^{(d-1)}(x_n) }{ (1/f)^{(d)}(x_n) }, where the superscripts denote derivatives of the reciprocal function, though practical implementations often use multipoint variants to avoid direct higher derivatives. For d = 1, it reduces to Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n). Higher orders provide cubic (d = 2), quartic (d = 3), and faster convergence, making them useful for problems requiring rapid convergence near a root, such as in scientific computing and optimization, though they are less common in practice due to the computational overhead of higher derivatives.

Introduction

Definition and Purpose

Householder's method constitutes a family of iterative algorithms for locating roots of a nonlinear equation f(x) = 0, where f is a sufficiently smooth function defined on the real line. The method generates a sequence of approximations \{x_n\} via the update rule x_{n+1} = x_n + T_d(f, x_n), in which T_d denotes a rational correction term constructed to yield convergence of order d + 1 near a simple root of f. The primary purpose of Householder's method is to solve such scalar nonlinear equations with greater efficiency than traditional quadratically convergent techniques, such as Newton's method, which represents the second-order (d = 1) special case. By leveraging higher-order approximations, it minimizes the number of function and derivative evaluations required to attain a specified tolerance, particularly beneficial when high accuracy is demanded in computational applications. While primarily formulated for single-variable equations, extensions to systems of nonlinear equations have been developed in the literature using adapted multivariate approaches. Central to the method's construction is its reliance on Padé approximants or continued fraction expansions of the reciprocal function 1/f to derive the order-(d + 1) correction T_d, ensuring asymptotic error reduction aligned with the desired rate.

Historical Background

Householder's method emerged from the rich tradition of iterative techniques for solving nonlinear equations, building on foundational work such as the Newton–Raphson method, developed by Isaac Newton around 1669 and independently formalized by Joseph Raphson in 1690. This second-order method provided a benchmark for iterative root-finding, but its limitations in convergence speed prompted further innovations. Following World War II, rapid advancements in electronic computing, including the development of machines like ENIAC in 1945, catalyzed a surge in research aimed at exploiting higher-order methods for greater efficiency in solving complex equations. Alston S. Householder, an American mathematician specializing in numerical analysis, laid the groundwork for higher-order iterative methods during the 1950s while working at Oak Ridge National Laboratory (ORNL), where he joined the Mathematics Division in 1946. His early contributions included explorations of polynomial iterations for algebraic roots, as detailed in his 1951 paper "Polynomial iterations to roots of algebraic equations," which examined iterative schemes capable of accelerating convergence beyond quadratic rates. This research evolved into the comprehensive framework presented in his 1970 book, The Numerical Treatment of a Single Nonlinear Equation, which systematically introduced Householder's methods as a class of higher-order iterations derived from Padé approximants to the reciprocal function 1/f(x). Householder's tenure at ORNL, culminating in his role as director of the Mathematics Division from 1953 until his retirement in 1969, positioned him at the forefront of numerical analysis during the early computer era, where his work supported applications in physics and scientific simulations. By the 1970s, Householder's methods gained prominence in the numerical analysis community, appearing in influential texts such as J. M. Ortega and W. C. Rheinboldt's Iterative Solution of Nonlinear Equations in Several Variables (1970), which extended and analyzed related iterations for both scalar and systems contexts. Initially emphasizing practical implementations for orders 3 and 4 to balance computational cost and convergence speed, the methods were soon generalized to arbitrary orders, enabling tailored applications in high-precision computing.

Motivation

Limitations of Lower-Order Methods

Lower-order iterative methods impose practical limits on how quickly a root can be resolved to high accuracy. The bisection method converges only linearly, halving a bracketing interval at each step, while the secant method attains a superlinear order of roughly 1.618 without requiring derivatives. Newton's method converges quadratically near a simple root, so the number of correct digits roughly doubles per iteration, but each step requires one evaluation of f and one of f', and convergence degrades or fails when the initial guess is poor, when f'(x_n) is close to zero, or when the root is not simple. For applications demanding very high precision, such as extended-precision arithmetic or repeated root solves inside larger computations, the iteration count of these second-order and lower schemes can dominate the cost, since every additional iteration requires fresh function evaluations. This motivates methods of convergence order three and higher, which trade additional derivative information per step for a substantially smaller number of iterations to reach a given tolerance.

Approaches to Higher-Order Iteration

One conceptual path to higher-order iterations is direct inversion of the truncated Taylor series: expand f(x_n + h) about the current iterate, set it to zero, and solve for the correction h to successively higher order, generalizing the tangent-line linearization that underlies Newton's method. A second path works with the reciprocal function 1/f, which has a simple pole at each simple root of f; approximating 1/f by a rational function, via a Padé approximant or a truncated continued fraction, captures this pole structure far better than a polynomial can, and the location of the approximant's pole supplies the next iterate. Both approaches yield one-point iterations whose convergence order grows with the number of derivatives used per step, and both reduce to Newton's method at the lowest order. In his 1970 treatment, Householder organized these ideas into a single family indexed by the order d, expressing the correction term through derivatives of 1/f and emphasizing the rational (Padé) viewpoint for its accuracy near the root.

Formulation

General Householder Transformation

The general Householder transformation provides a framework for constructing higher-order iterative methods to solve the nonlinear equation f(x) = 0, where f is a sufficiently smooth function. The iteration takes the form x_{n+1} = x_n + T_d(f, x_n), where T_d(f, x) denotes the d-th order correction applied at point x. This transformation is designed to approximate the step size that drives f(x + T_d) to zero with an error of order d + 2 in the distance to the root, thereby achieving a convergence order of d + 1 for the overall method. Each step requires d + 1 evaluations of f and its derivatives up to order d, though practical implementations can avoid explicit derivative computations. Newton's method corresponds to the special case d = 1, where T_1(f, x) = -f(x)/f'(x). The explicit formula for the transformation is T_d(f, x) = d \frac{ \left( \frac{1}{f} \right)^{(d-1)}(x) }{ \left( \frac{1}{f} \right)^{(d)}(x) }, where \left( \frac{1}{f} \right)^{(k)} denotes the k-th derivative of the reciprocal function 1/f. This expression arises from considering the Taylor expansion of the inverse mapping near the root and selecting the step to match higher-order terms. Near a simple root \xi, T_d(f, x) \approx -f(x)/f'(x) to first order, but the higher derivatives in the formula cancel additional error terms up to order d + 1, enhancing the local accuracy. To compute T_d without higher derivatives, the transformation can be expressed using divided differences of f. Specifically, this form allows for flexibility in implementation, as divided differences approximate derivatives recursively from function values at distinct points. The divided-difference form uses exactly d + 1 function evaluations per step.
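As an illustration of this formula, the following Python sketch forms T_d symbolically by differentiating 1/f with SymPy and then iterates x <- x + T_d(f, x); the helper names householder_step and householder_solve are illustrative rather than part of any standard library.

import sympy as sp

def householder_step(f_expr, x_sym, d):
    # Householder correction T_d(f, x) = d * (1/f)^(d-1) / (1/f)^(d), as a symbolic expression.
    recip = 1 / f_expr
    num = sp.diff(recip, x_sym, d - 1)
    den = sp.diff(recip, x_sym, d)
    return sp.simplify(d * num / den)

def householder_solve(f_expr, x_sym, x0, d, tol=1e-12, max_iter=50):
    # Iterate x <- x + T_d(f, x) from the starting guess x0 until |f(x)| < tol.
    T = sp.lambdify(x_sym, householder_step(f_expr, x_sym, d), "math")
    f = sp.lambdify(x_sym, f_expr, "math")
    x = x0
    for _ in range(max_iter):
        x = x + T(x)
        if abs(f(x)) < tol:
            break
    return x

# Example: real root of x^3 - 2 with the third-order (d = 2) member of the family.
x = sp.symbols("x")
print(householder_solve(x**3 - 2, x, 1.0, d=2))  # approx 1.2599210498948732

Setting d = 1 in this sketch reproduces the Newton step, since 1/f divided by its first derivative simplifies to -f/f'.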

Specific Methods by Order

Householder's methods of specific orders provide concrete iterations for root-finding, building on the general transformation to achieve higher convergence rates with increasing derivative requirements. The second-order variant (order 2 convergence) is equivalent to Newton's method, defined by the iteration x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, which requires two evaluations per step: one for f(x_n) and one for the first derivative f'(x_n). The third-order method (order 3 convergence), often referred to as Halley's method in this context, takes the form x_{n+1} = x_n - \frac{f(x_n)/f'(x_n)}{1 - \frac{1}{2} \frac{f(x_n) f''(x_n)}{[f'(x_n)]^2}}, necessitating three evaluations, including the second derivative f''(x_n), either computed analytically or approximated via finite differences. This formula arises from applying the general transformation with d = 2, enhancing quadratic convergence to cubic while maintaining efficiency for smooth functions. For the fourth-order method (order 4 convergence, d = 3), the iteration is given by the explicit formula x_{n+1} = x_n - \frac{6 f f'^2 - 3 f^2 f''}{6 f'^3 - 6 f f' f'' + f^2 f'''}, where f = f(x_n), f' = f'(x_n), f'' = f''(x_n), and f''' = f'''(x_n). This requires four evaluations in total, involving the third derivative, and can be implemented using divided differences for the higher-order terms; practical use often employs symbolic or numerical differentiation. To implement these methods without explicit derivatives, finite differences can approximate the required derivatives; for an order-(d + 1) variant, this typically demands d + 1 function evaluations per iteration to estimate the d-th divided difference accurately. For instance, the third-order method uses three points to approximate f'', while the fourth-order method requires four points for the third divided difference. In practice, Householder methods of order greater than 4 are often unstable due to amplification of rounding errors in higher-derivative approximations or large convergence constants, limiting common usage to orders 3 and 4 despite the theoretically higher efficiency.
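The three closed-form updates above translate directly into code. The following Python sketch assumes the needed derivatives are supplied as callables; the function names are illustrative, and no step-size safeguards or stopping tests are included.

def newton_step(f, df, x):  # order 2 (d = 1)
    return x - f(x) / df(x)

def halley_step(f, df, d2f, x):  # order 3 (d = 2)
    fx, d1, d2 = f(x), df(x), d2f(x)
    return x - (fx / d1) / (1 - 0.5 * fx * d2 / d1**2)

def householder4_step(f, df, d2f, d3f, x):  # order 4 (d = 3)
    fx, d1, d2, d3 = f(x), df(x), d2f(x), d3f(x)
    return x - (6*fx*d1**2 - 3*fx**2*d2) / (6*d1**3 - 6*fx*d1*d2 + fx**2*d3)

# Example: f(x) = cos(x) - x, whose root lies near 0.739085.
import math
f, df = lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1
d2f, d3f = lambda x: -math.cos(x), lambda x: math.sin(x)
x = 1.0
for _ in range(4):
    x = householder4_step(f, df, d2f, d3f, x)
print(x)  # approx 0.7390851332151607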

Derivation

Taylor Series Foundation

Householder's method for root finding builds upon the Taylor series expansion of a nonlinear function f around a root \alpha, where f(\alpha) = 0. Assuming f is sufficiently differentiable, the expansion is f(x) = f'(\alpha)(x - \alpha) + \frac{1}{2!} f''(\alpha)(x - \alpha)^2 + \cdots + \frac{1}{d!} f^{(d)}(\alpha)(x - \alpha)^d + O((x - \alpha)^{d+1}), with f'(\alpha) \neq 0 for a simple root. This series captures the local behavior near \alpha, where the error term reflects higher-order contributions that diminish as x approaches \alpha. The expansion highlights how deviations from the root scale with powers of the error e = x - \alpha, providing the foundation for analyzing convergence rates in iterative methods. In inverse iteration schemes, the objective is to determine a correction step h(x) such that f(x + h(x)) = 0, with the correction accurate to order d + 1 based on the Taylor series truncated at degree d. Substituting the expansion of f(x + h) around x and setting it to zero yields a polynomial equation in h whose coefficients involve f(x), f'(x), and higher derivatives up to the highest retained order. Solving this equation ensures the next iterate x + h(x) aligns with the root up to the desired order, reducing the error by factoring out lower-order terms in the series. This approach underpins higher-order methods by enforcing asymptotic error reduction proportional to e^{d+1}. Consider the iteration function g(x) = x + h(x), which has \alpha as a fixed point since g(\alpha) = \alpha. For convergence of order d + 1, the derivatives must satisfy g'(\alpha) = g''(\alpha) = \cdots = g^{(d)}(\alpha) = 0, while g^{(d+1)}(\alpha) \neq 0. These conditions, derived by differentiating the fixed-point equation and evaluating at \alpha using the Taylor expansion of f, ensure that the error e_{n+1} satisfies e_{n+1} = C e_n^{d+1} + O(e_n^{d+2}) for some constant C \neq 0, accelerating convergence beyond quadratic rates. Polynomial-based inversion of the truncated series for h succeeds at the lowest order, yielding Newton's method at order 2, but fails beyond that: solving a quadratic or higher-degree equation in h does not produce a closed-form expression solely in terms of f and its lower derivatives without residual lower-order errors. This breakdown necessitates rational function forms to achieve higher orders while maintaining computational feasibility. Padé approximants offer a refinement by providing rational approximations superior to truncated Taylor polynomials for such inversions.
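To make the lowest nontrivial case concrete, one way to proceed is to truncate f(x + h) = 0 after the quadratic term and substitute the Newton estimate h \approx -f(x)/f'(x) into that term, which recovers Halley's third-order correction:

f(x) + f'(x) h + \tfrac{1}{2} f''(x) h^2 = 0 \implies h = -\frac{f(x)}{f'(x) + \tfrac{1}{2} f''(x) h} \approx -\frac{f(x) f'(x)}{f'(x)^2 - \tfrac{1}{2} f(x) f''(x)} = -\frac{2 f(x) f'(x)}{2 f'(x)^2 - f(x) f''(x)}.

This is the same step produced by the general transformation with d = 2 in the Formulation section.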

Connection to Padé Approximants

Padé approximants are rational functions of the form [m/n], where the numerator is a polynomial of degree at most m and the denominator is a polynomial of degree at most n, constructed to match the Taylor expansion of a given function up to order m + n. These approximants often provide superior accuracy compared to polynomial approximations, particularly for functions like 1/f near the roots of f, where poles or singularities limit the usefulness of truncated power series. In the context of root-finding, Padé approximants excel because they can capture the behavior of the reciprocal function more effectively than truncated Taylor polynomials, avoiding issues like divergence outside the radius of convergence. Householder's method leverages Padé approximants by deriving higher-order iterations from rational approximations of the reciprocal function 1/f, specifically approximants matched through order d whose denominator is linear in the step. This approach transforms the root-finding problem into approximating the step toward the root via a rational function that matches the Taylor series of 1/f at the current iterate x. The derivation starts from the Taylor expansion of 1/f around x_n in the step h, equated to a rational form \frac{b_0 + b_1 h + \cdots + b_{d-1} h^{d-1}}{a_0 + h}; because 1/f has a simple pole at a simple root of f, the next iterate is placed at the pole h = -a_0 of this approximant, leading to the iteration step h = d \frac{ \left( \frac{1}{f} \right)^{(d-1)}(x_n) }{ \left( \frac{1}{f} \right)^{(d)}(x_n) }, or x_{n+1} = x_n + d \frac{ \left( \frac{1}{f} \right)^{(d-1)}(x_n) }{ \left( \frac{1}{f} \right)^{(d)}(x_n) }. This rational approximation yields an exact convergence order of d + 1 for the resulting Householder method of order d, provided that the derivatives f^{(k)}(\alpha) \neq 0 for k = 1, \dots, d at the simple root \alpha. Unlike purely polynomial methods, which are limited by their inability to model the pole of 1/f, the Padé-based structure allows for higher-order accuracy without requiring excessive extra computation in practice. This connection underscores the method's efficiency for nonlinear equations where a single higher-order step is preferred over several lower-order steps.
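For instance, for d = 2 the required derivatives of the reciprocal are \left( \frac{1}{f} \right)' = -\frac{f'}{f^2} and \left( \frac{1}{f} \right)'' = \frac{2 f'^2 - f f''}{f^3}, so the general step becomes

h = 2 \frac{(1/f)'(x_n)}{(1/f)''(x_n)} = -\frac{2 f(x_n) f'(x_n)}{2 f'(x_n)^2 - f(x_n) f''(x_n)},

which is exactly the third-order (Halley) iteration listed under Specific Methods by Order.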

Applications and Analysis

Numerical Example

To illustrate Householder's method for root-finding, consider the equation f(x) = x^3 - 2 = 0, whose real root is 2^{1/3} \approx 1.2599210499. With the third-order variant (Halley's method, d = 2), the iteration is x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2 f'(x_n)^2 - f(x_n) f''(x_n)}, using f'(x) = 3x^2 and f''(x) = 6x. Starting from x_0 = 1, the first step uses f(1) = -1, f'(1) = 3, and f''(1) = 6, giving a correction of 6/24 = 0.25 and hence x_1 = 1.25. The second step yields x_2 \approx 1.2599206, already correct to about six significant digits, and x_3 agrees with 2^{1/3} to machine precision. The number of correct digits roughly triples from one iterate to the next, as expected for cubic convergence. For comparison, Newton's method from the same starting point gives x_1 \approx 1.3333 and needs roughly two additional iterations to reach comparable accuracy, illustrating the trade-off between extra derivative evaluations per step and a smaller total iteration count.
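A minimal script reproducing these iterates, with the error printed at each step to show the roughly cubic error decay (the Halley step is written out literally rather than calling any library routine):

root = 2 ** (1.0 / 3.0)
x = 1.0
for n in range(1, 5):
    f, df, d2f = x**3 - 2, 3 * x**2, 6 * x
    x -= 2 * f * df / (2 * df**2 - f * d2f)
    print(n, x, abs(x - root))  # error ~1e-2, then ~4e-7, then near machine precision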

Convergence Properties

Householder's method of order d converges locally with order d + 1 to a simple root \alpha, provided f is at least d + 1 times continuously differentiable in a neighborhood of \alpha and f'(\alpha) \neq 0. Near such a root the error obeys e_{n+1} = C_d e_n^{d+1} + O(e_n^{d+2}), where the asymptotic constant C_d depends on the derivatives of f at \alpha; for d = 1 (Newton's method) it reduces to the familiar C_1 = f''(\alpha)/(2 f'(\alpha)). Convergence is only local: like Newton's method, the iteration can diverge, cycle, or jump to a different root when the starting point lies outside the basin of attraction, so bracketing, damping, or hybrid strategies are often used to globalize it. At a multiple root the assumptions behind the derivation break down and the convergence order typically degrades to linear, as it does for Newton's method. In finite-precision arithmetic, the higher-derivative terms and the subtractions in the rational correction can amplify rounding errors once the iterates are very close to the root, which, together with the growing cost of derivative evaluations, is the main reason orders beyond three or four are rarely used in practice.
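The error relation follows from the fixed-point viewpoint of the derivation section: writing g(x) = x + T_d(f, x), with g'(\alpha) = \cdots = g^{(d)}(\alpha) = 0, a Taylor expansion of g about \alpha gives

e_{n+1} = g(x_n) - \alpha = \frac{g^{(d+1)}(\alpha)}{(d+1)!} e_n^{d+1} + O(e_n^{d+2}), \qquad C_d = \frac{g^{(d+1)}(\alpha)}{(d+1)!}.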

Comparisons

Relation to Newton's Method

Householder's method serves as a generalization of Newton's method for finding roots of nonlinear scalar equations, with Newton's method emerging as the specific case of order d = 1. In this instance, the Householder transformation simplifies to T_1(f, x) = -\frac{f(x)}{f'(x)}, which precisely matches the Newton step, yielding the iteration x_{n+1} = x_n + T_1(f, x_n). This generalization embeds the Newton-like correction into a unified higher-order transformation, enabling convergence of order d + 1 while building directly on the quadratic foundation of Newton's approach. Householder's 1970 formulation explicitly extends Newton's method for nonlinear scalar equations, deriving higher-order variants from asymptotic expansions of the inverse function. Both methods share the structure of fixed-point iterations of the form g(x) = x - \frac{f(x)}{\phi(x)}, where \phi(x) approximates f'(x); for Newton's method, \phi(x) = f'(x) exactly, whereas a higher-order Householder method replaces \phi(x) by a rational correction built from Padé-like forms derived from derivatives of 1/f. Regarding cost, Newton's method requires two evaluations per step (one of f and one of f') for quadratic convergence, while a Householder method of order d demands d + 1 evaluations (of f and its first d derivatives) for convergence of order d + 1, providing superior asymptotic convergence for d > 1 when the higher derivatives are accessible.
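The reduction to Newton's method can be checked directly from the general formula with d = 1, using (1/f)' = -f'/f^2:

T_1(f, x) = \frac{(1/f)(x)}{(1/f)'(x)} = \frac{1/f(x)}{-f'(x)/f(x)^2} = -\frac{f(x)}{f'(x)}.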

Differences from Secant and Other Methods

Householder's methods achieve higher orders of convergence compared to the secant method, which has an order of approximately 1.618 and requires only one new function evaluation per iteration after the initial two points. In contrast, a Householder method of order d + 1 typically demands evaluations of the function and its derivatives up to order d, or equivalent approximations, resulting in more computational effort per step but faster local convergence for smooth functions where such information is accessible. This trade-off favors Householder's methods when high precision is needed near the root, as the increased order can reduce the total number of iterations despite the higher cost per step. Compared to Halley's method, which is the specific third-order (d = 2) instance of the Householder family requiring explicit evaluation of the second derivative, the general Householder approach offers broader applicability through its parameterized form, which accommodates higher orders without always needing explicit higher derivatives, relying instead on recursive or divided-difference approximations for generality. Halley's method coincides exactly with Householder's method for d = 2, but the family's extension to higher d provides superior asymptotic performance for problems benefiting from orders beyond three, though at the expense of increased complexity in derivative handling. Householder's methods are notable for their evaluation efficiency among single-point, derivative-based techniques for local convergence, compared with alternatives such as the Chebyshev method, which also achieves third-order convergence but through a different correction term that may exhibit larger basins of attraction in certain nonlinear settings. For instance, while the Chebyshev method modifies Newton's iteration with a specific correction term involving the second derivative, Householder's parameterized structure allows tuning to higher orders, potentially yielding better efficiency indices (order raised to the power of one over the number of evaluations) in smooth, differentiable settings. A key distinction lies in Householder's reliance on derivatives (or their approximations via finite differences), unlike the purely derivative-free secant method, which avoids derivative computation altogether and thus remains more robust in scenarios where derivative information is unavailable or unreliable. Multipoint extensions, such as Ostrowski's fourth-order method, build on related principles by distributing function evaluations across multiple points to achieve higher orders without second or higher derivatives, enhancing efficiency when those derivatives are costly. In the presence of noisy data, derivative-free variants of Householder's methods constructed via finite differences are less commonly employed than the secant method, as noise amplification in the derivative approximations can degrade accuracy, whereas the secant method's direct use of function values is less sensitive to such noise.
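As a point of reference, and assuming each function or derivative evaluation carries the same unit cost, the efficiency index p^{1/\theta} mentioned above (with p the convergence order and \theta the number of evaluations per step) works out to approximately 1.618^{1/1} \approx 1.618 for the secant method, 2^{1/2} \approx 1.414 for Newton's method, 3^{1/3} \approx 1.442 for Halley's method, and 4^{1/4} \approx 1.414 for the fourth-order Householder method. The per-evaluation gain of the higher-order members is therefore modest, and their practical advantage lies chiefly in the smaller number of iterations needed when high accuracy is required and derivatives are inexpensive to obtain.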
