
Order of accuracy

In numerical analysis, the order of accuracy of a method quantifies the rate at which the error between the numerical approximation and the exact solution decreases as the discretization parameter, such as the step size h, approaches zero; specifically, a method has order p if the error satisfies |\tilde{u}_h - u| \leq C h^p for some constant C > 0 independent of h, where \tilde{u}_h is the approximation and u is the exact solution. This concept is fundamental to assessing the efficiency and reliability of algorithms used in solving differential equations, interpolation, integration, and other computational tasks. The order of accuracy distinguishes between local truncation error (the error introduced in a single step of the method) and global error (the accumulated error over multiple steps); for stable methods, if the local truncation error is O(h^{p+1}), the global error is typically O(h^p). Higher orders of accuracy, such as p = 4 in the classical fourth-order Runge-Kutta method, allow for more precise solutions with coarser grids or larger step sizes, reducing computational cost while maintaining fidelity to the underlying problem. Common examples include the forward Euler method, which achieves first-order accuracy (p = 1) and is simple but less efficient for stiff problems, and the trapezoidal rule for integration, which attains second-order accuracy (p = 2). Determining the order often involves analysis via Taylor expansions or empirical grid refinement studies, ensuring the method's convergence under assumptions of solution smoothness. In practice, the choice of method balances order of accuracy against computational cost, stability, and the problem's characteristics, such as stiffness or dimensionality.

Fundamentals

Definition

In numerical analysis, approximation error denotes the discrepancy between an exact mathematical solution and its numerical counterpart, while convergence describes the property that this error diminishes to zero as the discretization parameter (typically the step size h) approaches zero. The order of accuracy provides a precise measure of the speed of this convergence, characterizing how rapidly the error decreases with finer discretizations. Formally, a numerical method possesses order of accuracy p if the absolute error is bounded by |\text{error}| \leq C h^p, where C > 0 is a constant independent of h, and p > 0 is typically an integer indicating the method's class. Higher values of p imply faster error reduction; for instance, halving h reduces the error bound by a factor of 2^p. This bound describes asymptotic behavior as h \to 0. A key distinction exists between local truncation error, which assesses the approximation quality over a single step, and global error, which accumulates discrepancies across multiple steps in iterative or marching methods. The concept of order of accuracy emerged in the early twentieth century amid developments in finite difference and extrapolation techniques, with early foundational work including Lewis Fry Richardson's development of extrapolation methods in 1927 for improving approximations in numerical solutions of differential equations.
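
The factor-of-2^p behavior can be checked directly in a few lines. The sketch below is a minimal illustration (not taken from the source): it assumes NumPy and uses the derivative of e^x at x = 0 as an arbitrary test case, halving h for a first-order and a second-order difference quotient and printing the resulting error ratios.

```python
import numpy as np

# Illustration (not from the source): halving h should shrink the error by about 2^p.
# Test case: d/dx exp(x) at x = 0, exact value 1.
f, x0, exact = np.exp, 0.0, 1.0

def forward(h):    # first-order accurate, p = 1
    return (f(x0 + h) - f(x0)) / h

def central(h):    # second-order accurate, p = 2
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)

for name, approx in [("forward (p=1)", forward), ("central (p=2)", central)]:
    e1, e2 = abs(approx(0.1) - exact), abs(approx(0.05) - exact)
    print(f"{name}: error ratio after halving h = {e1 / e2:.2f}")
# Expected output: ratios of about 2 (= 2^1) and about 4 (= 2^2).
```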

Asymptotic Notation

In numerical analysis, the order of accuracy of a method is formally described using asymptotic notation, particularly big-O (Landau) notation, to quantify how the error scales with the step size h. Specifically, the error E(h) is said to be O(h^p) as h \to 0 if there exists a positive constant C independent of h such that |E(h)| \leq C h^p for sufficiently small h > 0. This notation indicates that the error is bounded above by a term proportional to h^p, providing an upper limit on the error's magnitude as the discretization becomes finer. For a stricter characterization, little-o notation is employed when the error diminishes faster than the bounding term. Here, E(h) = o(h^p) as h \to 0 means that \lim_{h \to 0} \frac{E(h)}{h^p} = 0, implying that E(h) is asymptotically negligible compared to h^p. This is useful for emphasizing that the error is dominated by higher-order effects or vanishes more rapidly than the big-O bound suggests. The value of p in these notations directly relates to the convergence rate of the method: if p > 0, the error approaches zero as h \to 0, ensuring convergence to the exact solution, while larger p signifies faster convergence and thus higher accuracy for a given step size. A frequent misunderstanding is that O(h^p) implies the error is precisely c h^p for some constant c; in reality, it establishes only an upper bound, and the actual error may scale differently (e.g., exactly like h^{p+1} or with additional logarithmic factors) while still satisfying the bound.
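
These definitions can be probed numerically. The sketch below is purely illustrative and assumes NumPy, using the central difference of sin at x = 1 (whose error behaves like (h^2/6)|cos 1|) to show how the defining ratios behave: E(h)/h tends to zero, E(h)/h^2 settles near a constant, and E(h)/h^3 grows.

```python
import numpy as np

# Illustration: error of the central difference of sin at x = 1 (true error ~ (h^2/6)|cos 1|).
exact = np.cos(1.0)
err = lambda h: abs((np.sin(1.0 + h) - np.sin(1.0 - h)) / (2 * h) - exact)

for h in [0.1, 0.05, 0.025, 0.0125]:
    e = err(h)
    # e/h   -> 0          : the error is o(h)
    # e/h^2 -> constant   : the error is O(h^2) but not o(h^2)
    # e/h^3 -> grows      : the error is not O(h^3)
    print(f"h={h:<7} e/h={e/h:.2e}  e/h^2={e/h**2:.3f}  e/h^3={e/h**3:.1f}")
```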

Determination Methods

Taylor Series Approach

The Taylor series approach provides an analytical method to determine the order of accuracy for numerical approximations by deriving the truncation error through series expansions of the underlying function. This technique involves expanding the exact function around a reference point using its Taylor series and comparing it to the proposed approximation to identify the leading error term. The order of accuracy, denoted as p, corresponds to the exponent of the step size h in this dominant error term, indicating that the error scales as O(h^p) as h approaches zero. The step-by-step process begins with selecting a smooth function f(x) and a small h, then expanding f(x + h) in its Taylor series around x: f(x + h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \cdots + \frac{h^{p}}{p!} f^{(p)}(x) + R_p(h), where R_p(h) is the remainder term of higher order. Next, substitute this expansion into the approximation formula (e.g., a finite difference scheme) and rearrange to isolate the quantity being approximated, such as a derivative. The resulting expression reveals the approximation plus an error series; the lowest-order non-zero term in the error determines p, as higher terms vanish faster for small h. This derivation assumes the function is sufficiently differentiable, allowing the series to converge locally. For validity, the method requires the function to be at least p+1 times continuously differentiable in a neighborhood of the expansion point, ensuring that the Taylor series accurately represents the local behavior and that the remainder term is controllable. If this smoothness condition holds, the error can be bounded using the Lagrange form of the remainder, R_p(h) = \frac{h^{p+1}}{(p+1)!} f^{(p+1)}(\xi) for some \xi between x and x+h, confirming the order p. However, the approach has limitations when applied to non-smooth functions, such as those with discontinuities or points where higher derivatives are unbounded, as the Taylor series may not converge or the assumptions break down, leading to unreliable error estimates. In such cases, the derived order may overestimate accuracy, necessitating alternative verification methods.
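
A short symbolic sketch of this procedure, assuming SymPy is available and using sin x purely as an example of a smooth function, expands two difference quotients about h = 0 so that the leading error term, and hence p, can be read off directly.

```python
import sympy as sp

# Symbolic sketch (assumes SymPy; sin x is just an example of a smooth function).
x, h = sp.symbols('x h')

forward = (sp.sin(x + h) - sp.sin(x)) / h                  # forward difference quotient
central = (sp.sin(x + h) - sp.sin(x - h)) / (2 * h)        # central difference quotient

# Expand (approximation - exact derivative) about h = 0; the lowest power of h gives p.
print(sp.series(forward - sp.cos(x), h, 0, 3))   # -h*sin(x)/2 - h**2*cos(x)/6 + O(h**3), so p = 1
print(sp.series(central - sp.cos(x), h, 0, 4))   # -h**2*cos(x)/6 + O(h**4), so p = 2
```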

Empirical Verification

Empirical verification provides a computational approach to estimate the order of accuracy p for numerical methods, particularly when analytical derivations are infeasible or to validate implementations in practice. This method relies on observing how the error behaves as the discretization parameter, such as the step size h, is systematically reduced, assuming the error scales asymptotically as E(h) \approx C h^p for some constant C > 0. By comparing solutions at different h, practitioners can infer p numerically, offering a direct check against theoretical expectations derived from Taylor series expansions. One common technique is Richardson extrapolation, which uses solutions computed at step sizes h and h/2 to estimate p. The errors are approximated as E_h \approx C h^p and E_{h/2} \approx C (h/2)^p, leading to the ratio E_h / E_{h/2} \approx 2^p. Thus, p \approx \log_2 \left( E_h / E_{h/2} \right), where E_h and E_{h/2} are either true errors relative to an exact solution or, when the exact solution is unavailable, differences between successive approximations (e.g., E_h \approx |\tilde{u}_h - \tilde{u}_{h/2}|). For more accurate estimation without the exact solution, three grid levels are preferred: p \approx \log_2 \left( \frac{|\tilde{u}_h - \tilde{u}_{h/2}|}{|\tilde{u}_{h/2} - \tilde{u}_{h/4}|} \right). This approach not only estimates p but can also accelerate convergence by constructing a higher-order approximation, such as R(h) = (2^p y_{h/2} - y_h) / (2^p - 1), which achieves order p+1. Another method involves log-log plotting, where the logarithm of the error \log |E_h| is plotted against \log h for a sequence of decreasing h. In the asymptotic regime, the plot yields a straight line with slope p, providing a visual and quantitative assessment of the convergence order. This technique is particularly effective for identifying deviations from expected behavior across multiple grid refinements. To implement these verification steps, select test problems with known exact solutions, such as integrating smooth functions like \sin x over a fixed interval, to enable direct error computation. Refine the grid by successively halving h (e.g., starting from a coarse value and proceeding to finer ones like h/2, h/4, h/8) while applying the numerical method consistently. Compute the errors as the absolute difference between the numerical approximation \tilde{u}_h and the exact solution u, then apply Richardson extrapolation or construct the log-log plot to estimate p from the slope or ratios in the asymptotic region. However, care must be taken with very small h, as round-off errors arising from finite-precision arithmetic can dominate the discretization error, flattening the convergence plot and causing p to be underestimated. To mitigate this, choose an initial h large enough for truncation errors to prevail, and consider adaptive refinement strategies that balance truncation and round-off contributions.
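
The sketch below illustrates such an observed-order study; the choice of test problem (the composite trapezoidal rule applied to \int_0^\pi \sin x \, dx = 2) and the use of NumPy are illustrative assumptions, not a prescribed setup. It estimates p both against the exact value and from three grid levels alone.

```python
import numpy as np

# Illustrative test problem: integral of sin(x) on [0, pi], exact value 2,
# approximated with the composite trapezoidal rule on successively halved grids.
def trap_sin(n):
    x = np.linspace(0.0, np.pi, n + 1)
    y = np.sin(x)
    h = x[1] - x[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

exact = 2.0
u = {n: trap_sin(n) for n in (8, 16, 32)}          # step sizes h, h/2, h/4
E = {n: abs(v - exact) for n, v in u.items()}

# Observed order with a known exact solution: p ~ log2(E_h / E_{h/2})
print(np.log2(E[8] / E[16]), np.log2(E[16] / E[32]))

# Observed order from three grid levels, no exact solution needed
print(np.log2(abs(u[8] - u[16]) / abs(u[16] - u[32])))
# All three estimates should come out close to 2.
```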

Applications

Finite Differences

Finite difference methods approximate derivatives of a function using discrete samples, providing a foundational application of order of accuracy in numerical analysis. These schemes construct approximations by differencing function values at nearby points, with the order of accuracy determining how rapidly the truncation error diminishes as the grid spacing h decreases. The error is typically expressed as O(h^p), where p is the order, and higher p yields more precise approximations for sufficiently smooth functions. The simplest scheme is the forward difference, which approximates the first derivative f'(x) as \frac{f(x+h) - f(x)}{h}. This formula achieves an order of accuracy p=1, with a leading error term of O(h) arising from the first neglected term in the Taylor expansion of f(x+h). Its mirrored counterpart, the backward difference \frac{f(x) - f(x-h)}{h}, also possesses p=1 and O(h) error, and is suitable for boundary points where forward stepping is infeasible. The central difference improves accuracy by averaging the forward and backward approximations: \frac{f(x+h) - f(x-h)}{2h}. Using Taylor expansions around x, the linear terms align while the quadratic terms cancel, resulting in p=2 and error O(h^2). This cancellation of lower-order terms exemplifies how symmetric stencils enhance precision without additional function evaluations. Higher-order schemes extend this principle by incorporating more points in the stencil, solving for coefficients that eliminate additional error terms. For instance, a five-point central stencil can achieve p=4 for first derivatives, expressed generally as \sum_{k=-m}^{m} c_k f(x + k h) / h, where the coefficients c_k are chosen via the method of undetermined coefficients so that the Taylor-expansion terms match up to the desired order. Unequal spacing allows flexibility for irregular grids, maintaining high orders like p=4 or beyond through adjusted coefficients, though it increases computational overhead. These multi-point formulas are derived systematically to balance accuracy and stencil width. While high order improves accuracy, it does not ensure overall scheme reliability; stability must be verified separately, often via von Neumann stability analysis, which examines the amplification of Fourier modes to confirm bounded error growth. Unstable high-order schemes can amplify perturbations despite accurate local truncation errors.
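
The following sketch illustrates the method of undetermined coefficients for a five-point first-derivative stencil; the symmetric offsets and the NumPy-based linear solve are illustrative choices, not a prescribed implementation.

```python
import numpy as np

# Sketch of the method of undetermined coefficients for a first-derivative stencil.
# Offsets k = -2..2 are an illustrative choice. Requiring the stencil
# sum_k c_k f(x + k h) / h to be exact for polynomials up to degree 4 reduces,
# via Taylor expansion, to the conditions sum_k c_k k^j = delta_{j,1} for j = 0..4.
offsets = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
m = offsets.size

A = np.vander(offsets, m, increasing=True).T   # row j holds the powers k^j
b = np.zeros(m)
b[1] = 1.0                                     # match the first-derivative term only
c = np.linalg.solve(A, b)
print(c)   # expected: [ 1/12, -2/3, 0, 2/3, -1/12 ], the fourth-order central formula
```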

Differential Equations

In the context of numerical solvers for initial value problems in ordinary differential equations (ODEs) of the form y' = f(t, y) with y(t_0) = y_0, the order of accuracy p quantifies the convergence rate of the approximate solution to the exact one as the step size h diminishes. A fundamental distinction arises between local truncation error and global error. The local truncation error, which assumes the input to a step is exact, is O(h^{p+1}) for a method of order p, capturing the discrepancy from truncating the exact solution over one step. The global error, however, accumulates these local contributions across the integration interval, resulting in O(h^p) due to error propagation under a Lipschitz condition on f. Single-step methods exemplify how order p is achieved through targeted approximations. The explicit Euler method, defined by y_{n+1} = y_n + h f(t_n, y_n), attains order p = 1, as its local truncation error derives from a Taylor expansion of the exact solution. Higher-order accuracy is realized in Runge-Kutta methods, where order conditions, obtained by equating coefficients of the numerical and exact increments, ensure p up to 4 for the classical explicit schemes; these conditions are systematically organized in the Butcher tableau, which specifies the stage coefficients a_{ij}, abscissae c_i, and weights b_i. Consistency for ODE methods mandates that the local truncation error vanish faster than h as h \to 0, which holds for any order p \geq 1. The convergence theorem establishes that a consistent method, when paired with stability (zero-stability of the increment function), yields global convergence of order at least p, provided f satisfies a Lipschitz condition to bound error growth. Adaptive step-size control leverages local error estimates to optimize computational efficiency while meeting a user-specified tolerance. Error estimators, often derived from embedded Runge-Kutta pairs that compute solutions of orders p and p+1 using shared function evaluations, approximate the local truncation error as their difference, scaling as O(h^{p+1}); this estimate informs dynamic adjustments to h, enlarging it when errors are small and reducing it otherwise to balance accuracy and cost.
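
As a hedged illustration of these global orders, the sketch below integrates the test problem y' = -y, y(0) = 1 on [0, 1] (an arbitrary choice with exact solution e^{-t}) with explicit Euler and the classical Runge-Kutta method, and reports the observed order from one halving of h.

```python
import numpy as np

# Illustrative test problem: y' = -y, y(0) = 1 on [0, 1], exact solution exp(-t).
f = lambda t, y: -y

def euler_step(t, y, h):                       # explicit Euler, order 1
    return y + h * f(t, y)

def rk4_step(t, y, h):                         # classical Runge-Kutta, order 4
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(step, h, n):
    t, y = 0.0, 1.0
    for _ in range(n):                         # n steps of size h reach t = 1
        y = step(t, y, h)
        t += h
    return y

exact = np.exp(-1.0)
for name, step in [("Euler", euler_step), ("RK4  ", rk4_step)]:
    e1 = abs(solve(step, 0.1, 10) - exact)     # step size h
    e2 = abs(solve(step, 0.05, 20) - exact)    # step size h/2
    print(f"{name}: observed global order ~ {np.log2(e1 / e2):.2f}")
# Expected: about 1.0 for Euler and about 4.0 for RK4.
```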

Examples

Forward Difference

The forward difference approximation for the first derivative of a smooth function f at a point x is given by f'(x) \approx \frac{f(x + h) - f(x)}{h}, where h > 0 is a small increment. This formula arises from truncating the Taylor expansion of f(x + h) after the linear term, yielding an exact error of \frac{h}{2} f''(\xi) for some \xi \in (x, x + h), which establishes the method's first-order accuracy since the error scales as O(h). To illustrate the order of accuracy numerically, consider f(x) = \sin x evaluated at x = \pi/2, where the exact derivative is f'(\pi/2) = \cos(\pi/2) = 0. The table below computes the approximation and absolute error for step sizes h = 0.1 and h = 0.05:
h       Approximation    Absolute Error
0.1     -0.04996         0.04996
0.05    -0.02500         0.02500
When h is halved from 0.1 to 0.05, the absolute error halves from approximately 0.04996 to 0.02500, confirming first-order convergence with p = 1. A log-log plot of the absolute error versus h for decreasing values of h produces a straight line with slope approximately 1, visually verifying the O(h) error behavior. Despite its modest accuracy, the forward difference remains useful in scenarios requiring one-sided approximations, such as at the left boundary of a computational domain where points to the left are unavailable, or in systems that can tolerate a minimal lookahead delay.
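
A few lines of Python (assuming NumPy; the step sizes match the table above) reproduce these numbers and the halving of the error:

```python
import numpy as np

# Reproduces the table: forward difference of sin at x = pi/2, exact derivative 0.
x = np.pi / 2
for h in (0.1, 0.05):
    approx = (np.sin(x + h) - np.sin(x)) / h
    print(f"h = {h:<5}  approximation = {approx:.5f}  absolute error = {abs(approx):.5f}")
# Halving h roughly halves the error, consistent with p = 1.
```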

Trapezoidal Rule

The trapezoidal rule provides a simple approximation to the definite integral of a function f(x) over an interval [a, b] by treating the area under the curve as a trapezoid formed by the chord connecting the endpoints. For a single subinterval of width h = b - a, the rule states that \int_a^b f(x) \, dx \approx \frac{h}{2} \bigl( f(a) + f(b) \bigr), with a local error given by E = -\frac{h^3}{12} f''(\xi) for some \xi \in (a, b), assuming f is twice continuously differentiable. This local error scales with h^3 on a single subinterval, which translates into second-order accuracy once the contributions are summed over a full partition.

To achieve higher precision over the full interval, the composite trapezoidal rule partitions [a, b] into n equal subintervals, each of width h = (b - a)/n, yielding the approximation \int_a^b f(x) \, dx \approx \frac{h}{2} \biggl( f(a) + 2 \sum_{i=1}^{n-1} f(a + i h) + f(b) \biggr). The global error for this composite form is E = -\frac{(b - a) h^2}{12} f''(\xi) for some \xi \in (a, b), demonstrating overall second-order accuracy with error scaling as O(h^2). This bound assumes f''(x) is continuous on [a, b].

The error formula can be derived briefly using Taylor series expansions of f around the endpoints a and b. Expanding f(x) to second order and integrating term by term shows that the linear interpolant matches the constant and linear terms exactly, leaving the quadratic term \frac{1}{2} f''(\eta) (x - \eta)^2 (for some \eta) as the source of error; integrating this deviation over [a, b] yields the -\frac{h^3}{12} f''(\xi) term via the mean value theorem for integrals.

A worked example illustrates this second-order behavior: consider \int_0^1 x^2 \, dx = \frac{1}{3} \approx 0.3333. For n=1 (h=1), the approximation is 0.5, with error \approx 0.1667. For n=2 (h=0.5), it is 0.375, with error \approx 0.04167. For n=4 (h=0.25), it is 0.34375, with error \approx 0.01042. Doubling n (halving h) reduces the error by roughly a factor of four each time, since 0.1667 / 4 \approx 0.04167 and 0.04167 / 4 \approx 0.01042, consistent with O(h^2) scaling, as the ratio matches (1/2)^2 = 1/4.
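
The worked example can be checked with a few lines of code; the sketch below assumes NumPy and simply implements the composite rule for f(x) = x^2 on [0, 1].

```python
import numpy as np

# Reproduces the worked example: composite trapezoidal rule for the integral of x^2
# on [0, 1], whose exact value is 1/3, with n = 1, 2, 4 subintervals.
def composite_trap(n):
    x = np.linspace(0.0, 1.0, n + 1)
    y = x**2
    h = 1.0 / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

prev = None
for n in (1, 2, 4):
    err = abs(composite_trap(n) - 1.0 / 3.0)
    note = "" if prev is None else f"  (previous error / error = {prev / err:.2f})"
    print(f"n = {n}: approximation = {composite_trap(n):.5f}, error = {err:.5f}{note}")
    prev = err
# Each halving of h cuts the error by about a factor of 4, i.e. O(h^2) convergence.
```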
