
Richardson extrapolation

Richardson extrapolation is a technique in numerical analysis used to enhance the accuracy of approximate solutions to mathematical problems by systematically eliminating leading-order error terms through the combination of results computed at multiple step sizes. Developed by the mathematician and physicist Lewis Fry Richardson in his 1911 paper on solving differential equations via finite differences, the method assumes that an approximation T(h) to a true value T(0) can be expressed as T(h) = T(0) + c_p h^p + c_{p+1} h^{p+1} + \cdots, where p is the order of the leading error term and c_p is a constant. The core idea involves computing approximations at step sizes h and kh (often k=2), then forming a linear combination that cancels the h^p term, resulting in an improved estimate of order p+1 or higher. For instance, the extrapolated value is given by T^*(h) = \frac{k^p T(h) - T(kh)}{k^p - 1}, which removes the dominant error term assuming the expansion holds. This process can be iterated recursively to generate even higher-order approximations, forming a triangular tableau of values where each entry refines the previous ones. The technique is particularly effective when the underlying method has a known asymptotic error expansion, and it requires no additional assumptions about the problem beyond the existence of such an expansion. Richardson extrapolation finds wide application in various numerical methods, including numerical differentiation, where it upgrades central difference formulas from second-order to fourth-order accuracy; numerical integration, serving as the foundation for Romberg integration, which achieves exponential convergence for smooth integrands; and the solution of ordinary and partial differential equations, improving finite difference schemes. Its efficiency stems from reusing prior computations in the recursive scheme, though it is sensitive to round-off errors in floating-point arithmetic, necessitating careful choice of step sizes to balance truncation and round-off effects. Despite these limitations, the method remains a cornerstone of numerical analysis due to its simplicity and power in accelerating convergence without modifying the base method.
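The single-step formula above translates directly into a few lines of code. The following Python sketch is illustrative only; the helper name richardson_step and the forward-difference demonstration are assumptions made for this example, not part of any standard library:

import math

def richardson_step(T_h, T_kh, k, p):
    # Combine approximations at step sizes h and k*h to cancel the h**p error term.
    return (k ** p * T_h - T_kh) / (k ** p - 1)

# Demo: forward difference for the derivative of exp at 0 (true value 1); error O(h), so p = 1.
A = lambda h: (math.exp(h) - 1.0) / h
print(A(0.05))                                      # ~1.02542 (error ~2.5e-2)
print(richardson_step(A(0.05), A(0.1), k=2, p=1))   # ~0.99913 (error ~8.7e-4)

One extrapolation step reduces the error by roughly a factor of thirty here, consistent with upgrading a first-order method to second order.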

Fundamentals

Definition and Purpose

Richardson extrapolation is a sequence acceleration method that combines numerical approximations obtained at different step sizes to eliminate leading-order error terms and attain higher-order accuracy in estimating a value. It is named after Lewis Fry Richardson, who introduced the technique in 1911 while developing methods for the approximate arithmetical solution of physical problems involving differential equations, with an application to the stresses in a masonry dam. The primary purpose of Richardson extrapolation is to enhance the precision of numerical approximations in methods like finite differences, where errors decrease asymptotically with smaller step sizes, by effectively extrapolating toward the exact solution as the step size approaches zero. This approach achieves improved accuracy without necessitating additional evaluations of the underlying function beyond the initial approximations at selected step sizes, making it computationally efficient for refining results in numerical analysis. At its core, Richardson extrapolation relies on the assumption that the approximation error admits an asymptotic expansion in powers of the step size h. For instance, consider an approximation A(h) to a true value L, expressed as A(h) = L + c h^k + O(h^{k+1}), where c and k > 0 are constants, and higher-order terms are of order O(h^{k+1}). By computing approximations at step sizes h and h/2 (or multiples thereof) and linearly combining them, the leading h^k error term can be canceled, yielding an improved estimate with error dominated by the next higher-order term. This process systematically boosts the order of convergence, provided the asymptotic regime is reached where higher-order terms are negligible.
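Written out for the common choice of halving the step size, the cancellation proceeds as A(h) = L + c h^k + O(h^{k+1}) and A(h/2) = L + c \frac{h^k}{2^k} + O(h^{k+1}), so that \frac{2^k A(h/2) - A(h)}{2^k - 1} = L + O(h^{k+1}): the weighted combination reproduces L exactly up to the next-order term.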

Historical Background

Richardson extrapolation originated in the early twentieth century through the work of the British mathematician and physicist Lewis Fry Richardson, who developed the technique to improve the accuracy of finite difference approximations in solving partial differential equations. In 1911, Richardson introduced the method in a seminal paper addressing physical computations, particularly the numerical solution of the differential equations determining the stresses in a masonry dam using irregular grids. This approach allowed him to refine coarse approximations by extrapolating toward the limit as the grid size approached zero, effectively eliminating leading-order truncation errors without requiring finer meshes. During the 1920s, Richardson extended these ideas to practical applications in numerical weather prediction and structural stress analysis. In his 1922 book Weather Prediction by Numerical Process, he employed finite difference methods to compute atmospheric tendencies, aiming to forecast weather patterns through direct numerical integration of hydrodynamic equations. This work represented an early attempt at systematic numerical simulation of complex geophysical phenomena, carried out with severely limited computational resources. Concurrently, Richardson applied similar refinement techniques to stress analysis in structures, building on his 1911 foundations to handle irregular boundaries and non-uniform grids. The method gained formal recognition in 1927 when Richardson, collaborating with J. Arthur Gaunt, published "The Deferred Approach to the Limit," which generalized the extrapolation process for sequences derived from lattice-based discretizations of differential equations. This paper articulated the deferred correction strategy, emphasizing its utility in iteratively approaching exact solutions by postponing higher-order terms. Following a period of limited adoption, Richardson extrapolation experienced a revival and formalization in the mid-20th century as numerical analysis matured alongside electronic computing. In their book Methods of Numerical Integration, Philip J. Davis and Philip Rabinowitz popularized the technique by integrating it into broader frameworks for sequence acceleration and error estimation in quadrature rules. They linked it explicitly to the theory of asymptotic expansions, making it accessible for general problems and highlighting its role in improving convergence rates. A key milestone came in 1955 with Werner Romberg's development of an iterative scheme for numerical integration based on repeated Richardson extrapolation applied to the trapezoidal rule, which achieved higher-order accuracy efficiently. This connection, further disseminated in the following decades through computational implementations and textbooks, embedded the method within standard numerical toolkits, influencing adaptive quadrature algorithms for solving integral equations. Throughout its evolution, Richardson extrapolation has played a pivotal role in accelerating the convergence of iterative methods in numerical analysis, predating and inspiring modern adaptive techniques by providing a systematic way to combine multiple approximations for enhanced precision without proportional increases in computational cost.

Theoretical Framework

Notation

In Richardson extrapolation, the approximation obtained from a numerical method with step size h is denoted by A(h), which converges to the true value L as h approaches zero. The sequence of step sizes is typically defined as h_m = h / b^m for integers m \geq 0, where b > 1 is the extrapolation base, commonly taken as b = 2 to halve the step size iteratively. The error in the approximation admits an asymptotic expansion of the form A(h) = L + \sum_{k=1}^\infty c_k h^{p k}, where p > 0 is the order of the underlying method and the coefficients c_k are constants independent of h. This form assumes the error terms appear at orders that are integer multiples of p, as is typical in certain numerical methods such as the trapezoidal rule for integration. The following table summarizes the principal symbols employed:
Symbol | Description
h | Step size
T_{m,k} | Extrapolated value at refinement level m and extrapolation order k
b | Extrapolation base (typically b = 2)
These symbols facilitate the description of the method's iterative refinement. A key convention in presenting Richardson extrapolation is the use of a triangular array, or tableau, to organize computations. In this structure, rows correspond to decreasing step sizes h_m, while columns represent increasing orders of extrapolation, with entries T_{m,k} computed recursively to cancel lower-order error terms.
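For four step sizes, for example, the tableau has the lower triangular form

T_{0,0}
T_{1,0}  T_{1,1}
T_{2,0}  T_{2,1}  T_{2,2}
T_{3,0}  T_{3,1}  T_{3,2}  T_{3,3}

where the first column holds the base approximations T_{m,0} = A(h_m) and each later entry is obtained from its left neighbor T_{m,k-1} and the entry T_{m-1,k-1} diagonally above it.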

General Formula

The general formula for Richardson extrapolation arises from the asymptotic error expansion of a numerical approximation A(h) to a limiting value L = \lim_{h \to 0} A(h), typically expressed as A(h) = L + c h^p + O(h^{2p}), where p > 0 is the known order of the leading error term and c is a constant. To eliminate the h^p term, consider approximations at step sizes h and h/b (with b > 1), yielding A(h) = L + c h^p + O(h^{2p}) and A(h/b) = L + c (h/b)^p + O(h^{2p}). Multiplying the second equation by b^p gives b^p A(h/b) = b^p L + c h^p + O(h^{2p}). Subtracting the first equation from this scaled version cancels the c h^p terms: b^p A(h/b) - A(h) = (b^p - 1) L + O(h^{2p}). Solving for L produces the extrapolated approximation T_0 = \frac{b^p A(h/b) - A(h)}{b^p - 1} = L + O(h^{2p}), which achieves an accuracy of order 2p. This formula, introduced by Richardson in his 1911 paper on finite difference solutions to differential equations, assumes prior knowledge of the method's order p and the step ratio b, often chosen as an integer like 2 for computational convenience. For higher-order extrapolations, the process iterates on successively refined approximations. Define T_{m,k} as the k-th extrapolation at level m. The recursive relation is T_{m,k} = \frac{b^{p k} T_{m,k-1} - T_{m-1,k-1}}{b^{p k} - 1}, with base cases T_{m,0} = A(h / b^m) for m = 0, 1, \dots. This yields T_{m,m} = L + O(h^{p(m+1)}), progressively eliminating higher-order terms through repeated application, provided the error expansion holds up to the desired order. The assumption of known p and fixed b ensures the coefficients align correctly for cancellation, though variations exist for unknown or variable orders in advanced extensions.
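For the common case b = 2 and p = 2, which covers both the central difference formula and the composite trapezoidal rule, the one-step formula specializes to T_0 = \frac{4 A(h/2) - A(h)}{3} = L + O(h^4), the combination used throughout the examples below.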

Recurrence Relation

The recurrence relation for Richardson extrapolation provides a systematic way to construct higher-order approximations by iteratively combining prior estimates in a tableau. In standard notation, the entries T_{m,k} of the tableau satisfy the relation T_{m,k} = T_{m,k-1} + \frac{T_{m,k-1} - T_{m-1,k-1}}{b^{p k} - 1}, where m indexes the row (corresponding to the step size level), k indexes the column (extrapolation order), b > 1 is the refinement factor (often b = 2), and p is the order of the base method. This relation is derived by differencing consecutive levels to cancel the leading error term, assuming the error expansion A(h) = L + c_1 h^p + c_2 h^{2p} + \cdots. The tableau is constructed starting with the first column, where each entry is the base approximation at successively refined step sizes: T_{m,0} = A(h / b^m) for m = 0, 1, \dots, n-1. Subsequent columns are then filled recursively from left to right and top to bottom using the recurrence relation, with each T_{m,k} depending only on the immediately preceding entry in the same row and the entry diagonally above it. This process builds a lower triangular array, whose diagonal entries T_{m,m} provide the highest-order extrapolations at each level. For n distinct step sizes, the recurrence enables computation of the full tableau in O(n^2) operations, yielding approximations of order up to pn along the diagonal. Regarding error propagation, the recurrence eliminates the h^p term at the first extrapolation level (k=1), the h^{2p} term at the second level (k=2), and so on, with the k-th column entry T_{m,k} having error O(h^{p(k+1)}) under suitable smoothness assumptions on the underlying function. This stepwise cancellation leverages the known powers in the asymptotic expansion to progressively refine accuracy without additional base evaluations beyond the initial column.
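This incremental form is algebraically identical to the weighted form of the previous section, since T_{m,k-1} + \frac{T_{m,k-1} - T_{m-1,k-1}}{b^{pk} - 1} = \frac{b^{pk} T_{m,k-1} - T_{m-1,k-1}}{b^{pk} - 1}; the incremental version has the practical advantage that the correction term \frac{T_{m,k-1} - T_{m-1,k-1}}{b^{pk} - 1} doubles as a computable error estimate for T_{m,k-1}.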

Properties

Richardson extrapolation converges to the true limit L under the assumption that the approximation possesses a complete asymptotic expansion in powers h^p, h^{2p}, h^{3p}, \dots of the step size h, as the step sizes are successively refined toward zero; the order of accuracy increases along the diagonals of the extrapolation tableau, achieving superlinear rates under suitable conditions on the expansion coefficients. The recurrence relation facilitates this diagonal order enhancement by systematically eliminating leading error terms. The method exhibits bounded error propagation for the sequences in its tableau columns and diagonals when the underlying approximations satisfy the asymptotic assumptions, ensuring that perturbations do not amplify uncontrollably in the limit processes. However, practical stability is compromised at higher extrapolation levels, where sensitivity to rounding errors intensifies due to the ill-conditioned nature of the computations, arising from subtractions of closely valued terms that magnify floating-point inaccuracies. Richardson extrapolation is particularly effective and optimal when the leading order p is known a priori, allowing precise cancellation of the dominant term; it underperforms or fails entirely if the error lacks the required asymptotic expansion, as occurs with non-smooth functions where higher-order terms vanish or do not conform to the power series form. Key limitations stem from the necessity of step sizes forming a geometric sequence, typically h_i = h_0 b^{-i} with b > 1, to align with the powers in the expansion; deviations disrupt the cancellation process. Computationally, building the full tableau incurs a cost quadratic in the number of refinement levels, and each new level additionally demands a finer, typically more expensive, base approximation. Furthermore, the magnification factor for perturbations at level k can grow roughly like b^{p k} in unfavorable cases, resulting in amplification of round-off effects with deeper extrapolation.

Computational Aspects

Step-by-Step Process

The application of Richardson extrapolation follows a structured algorithm that begins with selecting an appropriate base method and proceeds through iterative refinement to produce higher-order approximations. First, identify the base numerical method, such as a finite difference scheme, and determine its order p, which governs the leading term in the asymptotic error expansion; this order is typically known from the theoretical analysis of the method. Next, choose a refinement factor b, commonly 2 for simplicity and computational efficiency, and decide on the number of levels n based on desired accuracy and available resources; larger n allows for more extrapolation steps but increases computational cost, so it is selected by estimating convergence rates or monitoring error reduction in preliminary runs. Compute the initial approximations by evaluating the base method at successively refined parameters: start with step size h to obtain A(h), then A(h/b), A(h/b^2), up to A(h/b^{n-1}); these values form the first column of the extrapolation tableau, a triangular array where rows correspond to the refinement levels. To build the tableau, apply the recurrence relation row by row, using pairs of entries from the previous column to compute entries in the next column, thereby eliminating successive error terms and achieving higher-order accuracy in each subsequent column. The process fills the tableau diagonally, with each new row incorporating a finer step size and each new column providing an improved estimate. If the error order p is unknown, estimate it iteratively by comparing differences between consecutive approximations and adjusting the refinement weights accordingly, often starting with an assumed value and refining it based on observed behavior. Once the tableau is complete, select the improved estimate from the diagonal elements (for the highest order at the finest level) or the bottom-right corner, depending on the desired balance of order and refinement. To ensure reliability, verify the underlying assumptions of the asymptotic expansion throughout the process, such as by plotting residuals between base and extrapolated approximations to confirm that errors diminish as expected with refinement; deviations may indicate invalid assumptions, such as non-dominant higher-order terms or round-off contamination. Consistency checks can be integrated by monitoring for oscillatory behavior in the tableau entries during construction. The overall procedure can be visualized as a flowchart: begin with input of the base method, its order p, the refinement factor b, and the level count n; compute the sequence of approximations A(h/b^k) for k = 0 to n-1; initialize and populate the tableau row by row via the recurrence; and output the selected estimate, followed by a validation step for error checking and potential adjustment.

Pseudocode Example

A pseudocode implementation of Richardson extrapolation typically constructs a tableau to iteratively refine a sequence of base approximations using the recurrence relation, assuming the approximations are ordered from coarsest to finest step size. The input consists of a list of base approximations A = [A_0, A_1, \dots, A_{n-1}], where A_i is computed with step size h / 2^i, and the known error order p (e.g., p = 2 for a central difference). The output is the highest-order extrapolated estimate T_{n-1, n-1}. The algorithm initializes the first column of the tableau with the input approximations and then fills subsequent columns using the general recurrence. This process builds higher-order accurate estimates diagonally across the tableau.
function richardson_extrapolation(A_list, p):
    n = length(A_list)
    if n == 1:
        return A_list[0]  // No extrapolation possible
    if p is unknown or invalid:
        error("Error order p must be specified")
    
    T = 2D array of size n x n  // Initialize tableau
    for i = 0 to n-1:
        T[i][0] = A_list[i]  // Base approximations in first column
    
    for k = 1 to n-1:
        for m = k to n-1:
            factor = 2^(p * k)
            T[m][k] = (factor * T[m][k-1] - T[m-1][k-1]) / (factor - 1)
    
    return T[n-1][n-1]  // Highest-order estimate
This translates the step-by-step process into a programmatic form, suitable for implementation in general-purpose languages such as Python or MATLAB. The time and space complexity is O(n^2), as the algorithm involves n(n+1)/2 recurrence computations. For efficiency in array-based languages, vectorized implementations can compute entire columns simultaneously using array operations, reducing loop overhead.
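A direct Python rendering of this pseudocode is sketched below; the function name and the central difference demonstration are illustrative choices, not part of any standard library:

import math

def richardson_extrapolation(a_list, p, b=2.0):
    # a_list[i] is assumed to hold A(h / b**i); the error expansion is assumed
    # to contain powers h**(p*k), matching the tableau recurrence above.
    n = len(a_list)
    if n == 1:
        return a_list[0]
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        T[i][0] = a_list[i]           # base approximations in the first column
    for k in range(1, n):
        factor = b ** (p * k)
        for m in range(k, n):
            T[m][k] = (factor * T[m][k - 1] - T[m - 1][k - 1]) / (factor - 1)
    return T[n - 1][n - 1]            # highest-order estimate

# Demo: second-order central differences for f'(1) with f = sin (true value cos 1).
D = lambda h: (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)
base = [D(0.1 / 2 ** i) for i in range(4)]
print(richardson_extrapolation(base, p=2))   # ~0.5403023058681398 = cos(1)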

Applications and Examples

Simple Numerical Example

A simple numerical example of Richardson extrapolation involves approximating \sin(1) \approx 0.84147098 using the identity \sin(1) = \frac{\sin(1+h) + \sin(1-h)}{2 \cos h}, where \cos h is approximated by its Taylor series truncated after the h^2 term: \cos h \approx 1 - \frac{h^2}{2}. This truncation yields an approximation A(h) = \frac{\sin(1+h) + \sin(1-h)}{2 \left(1 - \frac{h^2}{2}\right)}, with leading error O(h^4) arising from the omitted h^4/24 term in the cosine expansion. Computations are performed for h = 0.1, h = 0.05, and h = 0.025, assuming exact evaluation of the sine terms:
  • A(0.1) = 0.84147449
  • A(0.05) = 0.84147120
  • A(0.025) = 0.84147099
These form the zeroth column of the Richardson tableau, denoted T_{j,0} = A(h/2^j) for j = 0, 1, 2, where the initial step size is h = 0.1. Because the omitted cosine terms contribute errors in the even powers h^4, h^6, h^8, \dots, level k of the tableau uses the factor 2^{q_k} with q_1 = 4, q_2 = 6, and so on: T_{j,k} = \frac{2^{q_k} T_{j+1,k-1} - T_{j,k-1}}{2^{q_k} - 1}. The first column (k=1) eliminates the O(h^4) error term: T_{0,1} = \frac{16 \cdot A(0.05) - A(0.1)}{15} = \frac{16 \cdot 0.84147120 - 0.84147449}{15} = 0.84147098, T_{1,1} = \frac{16 \cdot A(0.025) - A(0.05)}{15} = \frac{16 \cdot 0.84147099 - 0.84147120}{15} = 0.84147098. This level improves the accuracy to O(h^6). The second column (k=2) cancels the O(h^6) term: T_{0,2} = \frac{64 \cdot T_{1,1} - T_{0,1}}{63} = 0.84147098. For completeness, the full tableau (rounded to eight decimal places) is
j \setminus k | 0 | 1 | 2
0 | 0.84147449 | 0.84147098 | 0.84147098
1 | 0.84147120 | 0.84147098 |
2 | 0.84147099 | |
With only three base values the tableau terminates at the corner entry T_{0,2}, which agrees with the true value to the displayed precision. This demonstrates how successive extrapolations progressively cancel higher-order error terms, improving from O(h^4) accuracy in the base approximations to O(h^8) after two levels.
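The tableau can be reproduced in a few lines of Python; this is a sketch of the computation above, with illustrative variable names:

import math

def A(h):
    # sin(1) via the identity, with cos h truncated after the h**2 term.
    return (math.sin(1 + h) + math.sin(1 - h)) / (2 * (1 - h ** 2 / 2))

T0 = [A(0.1 / 2 ** j) for j in range(3)]                 # base values for h = 0.1, 0.05, 0.025
T1 = [(16 * T0[j + 1] - T0[j]) / 15 for j in range(2)]   # cancels the h**4 term
T2 = (64 * T1[1] - T1[0]) / 63                           # cancels the h**6 term
print(T2, math.sin(1))                                   # both print ~0.8414709848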

Use in Numerical Differentiation

Richardson extrapolation enhances the accuracy of numerical differentiation by combining finite difference approximations computed at successively halved step sizes to cancel lower-order error terms. The central difference formula provides a second-order approximation to the first derivative, D(h) = \frac{f(x + h) - f(x - h)}{2h} = f'(x) + \frac{h^2}{6} f'''(x) + O(h^4), where the leading error is O(h^2). To apply Richardson extrapolation, compute D(h), D(h/2), and D(h/4), then form the first-level extrapolations as T^{(1)}(h) = \frac{4 D(h/2) - D(h)}{3}, \quad T^{(1)}(h/2) = \frac{4 D(h/4) - D(h/2)}{3}, which eliminate the O(h^2) term, yielding O(h^4) accuracy. A second-level extrapolation further improves the order: T^{(2)}(h) = \frac{16 T^{(1)}(h/2) - T^{(1)}(h)}{15}, removing the O(h^4) term for O(h^6) accuracy. This process systematically increases the order of convergence without requiring higher-order finite difference stencils directly. For illustration, consider approximating f'(1) where f(x) = \sin x, with exact value \cos 1 \approx 0.540302305868. Using initial step size h = 0.01, the approximations and extrapolations form the following tableau (values rounded to 7 decimal places for clarity; errors shown relative to the true value):
Level \ Step | h = 0.01 | h = 0.005 | h = 0.0025
0 (D(h)) | 0.5402933 (error: -9.0 \times 10^{-6}) | 0.5403001 (error: -2.2 \times 10^{-6}) | 0.5403017 (error: -5.6 \times 10^{-7})
1 (T^{(1)}) | 0.5403023 (error: 1.4 \times 10^{-7}) | 0.5403023 (error: 2.4 \times 10^{-8}) |
2 (T^{(2)}) | 0.540302306 (error: < 10^{-9}) | |
The initial O(h^2) error of approximately 10^{-5} reduces to below 10^{-9} after two extrapolation levels, demonstrating the method's efficiency in achieving high precision. The technique is commonly used in physics simulations for precise derivative computations in areas such as fluid flow modeling, where accurate derivatives are essential for stability and fidelity. It also connects directly to the derivation of higher-order finite difference formulas, as the extrapolated expressions match explicit stencils such as the six-point central difference.
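The two-level scheme above can be checked with a short Python sketch (illustrative names, not tied to any particular library):

import math

def D(f, x, h):
    # Second-order central difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 0.01
d = [D(math.sin, x, h / 2 ** i) for i in range(3)]   # D(h), D(h/2), D(h/4)
t1 = [(4 * d[i + 1] - d[i]) / 3 for i in range(2)]   # cancels the O(h^2) term
t2 = (16 * t1[1] - t1[0]) / 15                       # cancels the O(h^4) term
print(t2 - math.cos(1.0))                            # residual near machine precision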

Integration with Romberg Method

The Romberg method integrates Richardson extrapolation into numerical quadrature by applying it iteratively to trapezoidal rule approximations with step sizes halved at each stage, corresponding to the parameters b = 2 and p = 2 that match the leading O(h^2) error term of the composite trapezoidal rule. This technique, developed by Werner Romberg in 1955, systematically eliminates lower-order error terms to produce higher-accuracy estimates of definite integrals. By building a triangular array of extrapolations, the method accelerates convergence beyond the basic trapezoidal rule, making it particularly effective for one-dimensional integrals over regular intervals. In the Romberg formulation, the first column of the tableau comprises trapezoidal rule evaluations I_{0,k} = T(h/2^k) for k = 0, 1, 2, \dots, where T(h) denotes the composite trapezoidal approximation with step size h and h is chosen such that the interval is divided into 2^k subintervals. Higher columns are generated via the Richardson extrapolation formula adapted for the even powers in the error expansion: I_{m,k} = \frac{4^m I_{m-1,k+1} - I_{m-1,k}}{4^m - 1}, \quad m = 1, 2, \dots, \quad k = 0, 1, \dots The resulting entries in each column correspond to increasingly accurate quadrature rules; for instance, the second column yields Simpson's rule approximations, while further columns produce successively higher-order formulas. This tableau structure allows for progressive refinement, where the bottom-right entry provides the highest-order estimate. A key theoretical foundation of the Romberg method lies in its synergy with the Euler-Maclaurin formula, which expands the error as an asymptotic series in even powers of h: T(h) - I = c_2 h^2 + c_4 h^4 + c_6 h^6 + \cdots. The extrapolations in Romberg successively cancel these terms, leading, for analytic or sufficiently smooth integrands, to convergence faster than any fixed power of h as the number of levels grows. For example, consider computing \int_0^1 e^{-x^2} \, dx \approx 0.7468241328124270, a smooth integrand whose antiderivative is non-elementary, related to the error function. Starting with a coarse trapezoidal estimate at a large h (e.g., 1 subinterval, error on the order of 10^{-2}), the Romberg tableau applies successive halvings and extrapolations; after 5-6 levels (requiring only a few dozen function evaluations), it typically attains accuracy near the limits of double-precision arithmetic, far surpassing the initial trapezoidal estimate. This demonstrates the method's power in transforming basic approximations into high-precision results for smooth functions. Modern extensions of Romberg integration include adaptive variants that incorporate error estimation to dynamically refine the grid, enhancing efficiency for integrands with localized singularities or over irregular domains where uniform halving is suboptimal. These adaptations, such as those embedded in software routines like SAS's QUAD subroutine, retain the core extrapolation while allowing flexible interval partitioning.
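A compact Python sketch of the Romberg tableau follows, using the standard row-by-row construction that reuses previous function evaluations when halving the step size; the function name romberg and the row/column indexing are illustrative choices:

import math

def romberg(f, a, b, levels):
    # R[k][m]: trapezoidal estimate with 2**k subintervals, extrapolated m times.
    R = [[0.0] * levels for _ in range(levels)]
    R[0][0] = 0.5 * (b - a) * (f(a) + f(b))
    for k in range(1, levels):
        n = 2 ** k
        h = (b - a) / n
        # Refine the trapezoidal rule: halve the previous sum, add only the new midpoints.
        mids = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
        R[k][0] = 0.5 * R[k - 1][0] + h * mids
        for m in range(1, k + 1):
            R[k][m] = (4 ** m * R[k][m - 1] - R[k - 1][m - 1]) / (4 ** m - 1)
    return R[levels - 1][levels - 1]

print(romberg(lambda x: math.exp(-x * x), 0.0, 1.0, 6))  # ~0.7468241328124271

With 6 levels this uses 2^5 + 1 = 33 integrand evaluations, illustrating how the tableau converts a crude trapezoidal estimate into a near machine-precision result.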
