
Numerical differentiation

Numerical differentiation is a branch of numerical analysis that approximates the derivatives of a mathematical function using finite differences based on samples of the function's values, rather than relying on exact analytical expressions. This approach is particularly valuable when the function is only known at tabulated points, such as experimental data, or when analytical differentiation is impractical. The foundational methods of numerical differentiation revolve around formulas derived from Taylor series expansions, which provide approximations for first and higher-order derivatives with controlled truncation errors.

Common techniques include the forward difference for estimating the derivative at a point using values ahead of it, given by f'(x) \approx \frac{f(x+h) - f(x)}{h} with first-order accuracy O(h); the backward difference using values behind, f'(x) \approx \frac{f(x) - f(x-h)}{h}, also O(h); and the more accurate central difference, f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}, achieving second-order accuracy O(h^2). For second derivatives, a standard central approximation is f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}, again with O(h^2) error. Higher-order methods enhance precision by incorporating more points or by extrapolation techniques such as Richardson extrapolation, which can elevate accuracy to fourth order, for example f'(x) \approx \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} with O(h^4) error. Alternative approaches involve interpolating polynomials, such as Lagrange or Newton forms, to derive derivatives from unequally spaced data.

Error analysis is crucial, balancing truncation error (from the approximation) against round-off error (from finite precision), and often requires careful selection of the step size h to minimize the total error. These techniques underpin applications in solving differential equations, optimization, and scientific simulations where continuous differentiability cannot be assumed.

Basic Finite Difference Approximations

Forward Difference

The forward difference provides the simplest one-sided approximation for the first derivative of a function f at a point x, expressed as f'(x) \approx \frac{f(x + h) - f(x)}{h}, where h > 0 denotes a small step size. Geometrically, this formula computes the slope of the secant line joining the points (x, f(x)) and (x + h, f(x + h)) on the graph of f. The approximation's accuracy follows from the Taylor series expansion of f(x + h) around x: f(x + h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi) for some \xi \in (x, x + h). Dividing the difference f(x + h) - f(x) by h then gives \frac{f(x + h) - f(x)}{h} = f'(x) + \frac{h}{2} f''(\xi), revealing a truncation error of order O(h). Finite difference methods, including the forward difference, emerged in the 17th century through the work of Isaac Newton and Gottfried Wilhelm Leibniz, who employed them in developing interpolation formulas and in solving physical problems. For illustration, consider approximating f'(1) where f(x) = \sin x and h = 0.1. The exact derivative value is \cos 1 \approx 0.5403. The forward difference yields \frac{\sin(1.1) - \sin(1)}{0.1} \approx \frac{0.8912 - 0.8415}{0.1} = 0.497, an error of approximately 0.043, in line with the expected O(h) behavior.
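A minimal sketch of this calculation in Python (the function names here are illustrative, not from any particular library):

```python
import math

def forward_difference(f, x, h):
    """One-sided forward difference approximation of f'(x); truncation error O(h)."""
    return (f(x + h) - f(x)) / h

# Example from the text: f(x) = sin x, f'(1) with h = 0.1 (exact value cos 1 ~ 0.5403)
approx = forward_difference(math.sin, 1.0, 0.1)
print(approx, abs(approx - math.cos(1.0)))   # ~0.4974, error ~0.043
```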

Backward Difference

The backward difference provides a one-sided approximation to the first derivative of a function f at a point x, utilizing function values at x and a point to its left. The formula is given by f'(x) \approx \frac{f(x) - f(x - h)}{h}, where h > 0 is the step size. This approximation represents the slope of the secant line connecting the points (x - h, f(x - h)) and (x, f(x)). To derive its accuracy, expand f(x - h) using the Taylor series around x: f(x - h) = f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(\xi) for some \xi \in (x - h, x). Substituting and rearranging yields \frac{f(x) - f(x - h)}{h} = f'(x) - \frac{h}{2} f''(x) + \frac{h^2}{6} f'''(\xi), confirming that the truncation error is O(h). The backward difference is particularly useful when function evaluations to the right of x are unavailable or infeasible, such as at the upper endpoint of a computational domain or in real-time simulations that rely only on past data. For example, consider approximating f'(2) where f(x) = e^x and h = 0.1. Then f(2) \approx 7.3891 and f(1.9) \approx 6.6859, so f'(2) \approx \frac{7.3891 - 6.6859}{0.1} \approx 7.0316. The exact value is f'(2) = e^2 \approx 7.3891, yielding an absolute error of approximately 0.357.
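A corresponding sketch, mirroring the forward-difference example (names again illustrative):

```python
import math

def backward_difference(f, x, h):
    """One-sided backward difference approximation of f'(x); truncation error O(h)."""
    return (f(x) - f(x - h)) / h

# Example from the text: f(x) = e^x, f'(2) with h = 0.1 (exact value e^2 ~ 7.3891)
approx = backward_difference(math.exp, 2.0, 0.1)
print(approx, abs(approx - math.exp(2.0)))   # ~7.0316, error ~0.357
```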

Central Difference

The central difference approximation provides a symmetric, two-sided estimate of the first derivative of a function f at an interior point x, formulated as f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}, where h > 0 is a small step size. This method achieves second-order accuracy, with a local truncation error of O(h^2). The derivation uses Taylor expansions of f(x + h) and f(x - h) around x, assuming f is sufficiently smooth (i.e., has continuous derivatives up to the third order): f(x + h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(\xi_1), \quad \xi_1 \in (x, x + h), f(x - h) = f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(\xi_2), \quad \xi_2 \in (x - h, x). Subtracting the second expansion from the first eliminates the constant and even-powered terms, yielding f(x + h) - f(x - h) = 2h f'(x) + \frac{h^3}{6} [f'''(\xi_1) + f'''(\xi_2)]. Dividing by 2h then gives \frac{f(x + h) - f(x - h)}{2h} = f'(x) + \frac{h^2}{12} [f'''(\xi_1) + f'''(\xi_2)], confirming the O(h^2) error. This approach offers key advantages over one-sided methods like forward or backward differences, which exhibit only O(h) accuracy; the central difference's symmetry cancels the leading error terms, providing higher precision for equivalent h and greater robustness to step-size selection at interior points. However, it requires evaluating f on both sides of x, making it inapplicable at domain boundaries where x - h falls outside the available data or interval. For example, to approximate f'(0) with f(x) = \cos x (where the exact derivative is f'(x) = -\sin x, so f'(0) = 0) and h = 0.1, the forward difference gives \frac{\cos(0.1) - \cos(0)}{0.1} \approx \frac{0.995004165 - 1}{0.1} = -0.04995835, with an absolute error of about 0.04996. In comparison, the central difference produces \frac{\cos(0.1) - \cos(-0.1)}{0.2} = 0, yielding zero error and illustrating the method's enhanced accuracy due to symmetry (here the cancellation is exact because \cos is an even function).
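A minimal sketch comparing the two estimates (all names illustrative):

```python
import math

def central_difference(f, x, h):
    """Symmetric two-sided approximation of f'(x); truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Example from the text: f(x) = cos x at x = 0 (exact derivative -sin 0 = 0), h = 0.1
h = 0.1
fwd = (math.cos(h) - math.cos(0.0)) / h        # ~ -0.04996 (O(h) error)
cen = central_difference(math.cos, 0.0, h)     # 0.0 exactly here, since cos is even
print(fwd, cen)
```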

Error Analysis and Step Size Selection

Truncation Error

Truncation error in numerical differentiation arises from approximating derivatives with finite differences, which inherently truncates the infinite Taylor series expansion of the function. For the forward difference approximation f'(x) \approx \frac{f(x+h) - f(x)}{h}, the Taylor expansion of f(x+h) around x yields f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi) for some \xi \in [x, x+h], leading to a truncation error of \frac{h}{2} f''(\xi). Similarly, the backward difference f'(x) \approx \frac{f(x) - f(x-h)}{h} produces an error of -\frac{h}{2} f''(\xi) for \xi \in [x-h, x]. The central difference formula f'(x) \approx \frac{f(x+h) - f(x-h)}{2h} achieves higher accuracy by canceling the leading error terms from the expansions of f(x+h) and f(x-h), resulting in a truncation error of -\frac{h^2}{6} f'''(\xi) for some \xi \in [x-h, x+h]. One-sided differences like the forward and backward formulas are first-order accurate, O(h), while the central difference is second-order accurate, O(h^2). In general, for a method of order k, the truncation error is R = O(h^k), so smaller h reduces the error and promotes convergence to the true derivative as h \to 0, though this must be balanced against round-off errors from finite-precision arithmetic. To illustrate, consider bounding the truncation error for the central difference approximation of the first derivative of f(x) = x^3 at x=1 with h=0.1. Here, f'''(x) = 6 is constant, so the error magnitude is \left| \frac{(0.1)^2}{6} \cdot 6 \right| = 0.01. This bound demonstrates how the error scales quadratically with h, confirming the second-order accuracy.
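The quadratic scaling can be checked numerically. The sketch below (illustrative only) halves h and watches the central-difference error for f(x) = x^3 at x = 1 track h^2 exactly, since f''' is constant:

```python
def f(x):
    return x ** 3

exact = 3.0  # f'(x) = 3x^2, so f'(1) = 3

for h in (0.1, 0.05, 0.025):
    approx = (f(1.0 + h) - f(1.0 - h)) / (2.0 * h)
    err = abs(approx - exact)
    print(h, err, err / h**2)   # err equals h^2 here, so err/h^2 stays at ~1
```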

Round-off Error

In numerical differentiation, round-off error arises from the finite precision of floating-point arithmetic, particularly when computing differences like f(x + h) - f(x), where small h leads to subtractive cancellation. This cancellation occurs because f(x + h) and f(x) are nearly equal, causing the loss of significant digits in their difference; the relative error of each stored value is bounded by the machine epsilon \varepsilon, approximately 2^{-52} \approx 2.22 \times 10^{-16} for double-precision floating-point numbers. The resulting absolute error in the difference is on the order of \varepsilon |f(x)|, which, when divided by h to approximate the derivative, amplifies to roughly \varepsilon |f(x)| / h. The total error in a finite difference approximation combines this round-off component with the truncation error from the Taylor series expansion. For a forward difference, the total error is approximately O(h) + O(\varepsilon / h), where the first term is truncation and the second is round-off; similar bounds hold for central differences, such as O(h^2) + O(\varepsilon / h). As h decreases, the truncation error diminishes, but the round-off term grows, leading to ill-conditioning in which the relative error scales as O(\varepsilon / h). This amplification makes the approximation unstable for excessively small h, often around 10^{-8} or smaller in double precision. To mitigate round-off error, the primary strategy is selecting an h that balances the two error components, though higher-precision or compensated arithmetic can reduce cancellation effects in some cases. For example, in approximating the derivative of f(x) = \sin(x) at x = 1 (where the exact value is \cos(1) \approx 0.5403) using the central difference formula \frac{\sin(1 + h) - \sin(1 - h)}{2h}, a step size of h = 10^{-4} yields an error of about 10^{-9}, dominated by truncation, while h = 10^{-8} gives an error of roughly 10^{-8} to 10^{-7}, where round-off dominates due to precision limits.
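The trade-off is easy to observe empirically. The sketch below (illustrative, assuming IEEE double precision) sweeps h over several decades for the central difference of sin at x = 1; the error first shrinks like h^2 and then grows again as the \varepsilon/h term takes over:

```python
import math

exact = math.cos(1.0)

for k in range(1, 13):
    h = 10.0 ** (-k)
    approx = (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2.0 * h)
    print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.2e}")
# Typical output: the error bottoms out around h ~ 1e-5 to 1e-6,
# then rises as round-off (~ eps/h) dominates for smaller h.
```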

Optimal Step Size

In numerical differentiation, the total error combines truncation error from the finite difference approximation and round-off error from floating-point arithmetic. The optimal step size h minimizes this total error by balancing the two components, which respectively decrease and increase as h shrinks; truncation and round-off thus represent competing error sources that require careful selection of h. For the central difference formula, the truncation error is on the order of \frac{h^2}{6} |f'''(x)|, while the round-off error is approximately \frac{\epsilon |f(x)|}{h}, where \epsilon denotes machine epsilon. Minimizing the sum of these terms yields the optimal step size h_\text{opt} \approx \left( \frac{3 \epsilon |f(x)|}{|f'''(x)|} \right)^{1/3}. When the function value and its third derivative are of order unity, this scales as h_\text{opt} \approx \epsilon^{1/3}; the simpler square-root rule h \approx \sqrt{\epsilon} \approx 10^{-8} in double precision (\epsilon \approx 2 \times 10^{-16}) applies instead to first-order formulas such as the forward difference, whose truncation error is O(h). These rules of thumb can be adjusted upward for smoother functions with smaller higher derivatives, which reduces round-off dominance.

The dependence of h_\text{opt} on the function's derivatives highlights its sensitivity: larger |f'''(x)| calls for a smaller h to suppress truncation error, while the magnitude of |f(x)| controls the round-off contribution, often requiring h to be scaled by the local function scale. Smoother functions, with milder higher derivatives, permit larger h values without excessive truncation error, improving robustness across different problem scales. A typical plot of total error versus \log_{10}(h) exhibits a characteristic U-shaped curve, with the minimum occurring near h_\text{opt} where truncation and round-off errors are roughly equal; to the left (small h), round-off dominates and the error rises steeply, while to the right (large h), truncation error causes a gradual increase. This visualization underscores the limited range of effective h values, often spanning only 2–3 orders of magnitude around the optimum.

As an illustrative example, consider f(x) = e^x at x = 0, where f'(0) = 1, f(0) = 1, and f'''(0) = 1. With \epsilon = 2 \times 10^{-16}, h_\text{opt} \approx (6 \times 10^{-16})^{1/3} \approx 8.4 \times 10^{-6}. The central difference approximation \frac{f(h) - f(-h)}{2h} = \frac{\sinh(h)}{h} then yields a value of approximately 1.00000000000, with an error on the order of 10^{-11}, a substantial improvement over h = 10^{-3} (error \approx 1.7 \times 10^{-7}) or h = 10^{-10} (error \approx 2 \times 10^{-6}), demonstrating the benefit of choosing h near the optimum.
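A short sketch of this balancing act (illustrative; the machine-epsilon value and names are assumptions of this example):

```python
import math

eps = 2.2e-16                  # approximate double-precision machine epsilon
x = 0.0                        # for f(x) = e^x, f(0) = f'''(0) = 1 and f'(0) = 1
exact = 1.0

# Optimal step for the central difference: h_opt ~ (3*eps*|f| / |f'''|)^(1/3)
h_opt = (3.0 * eps * math.exp(x) / math.exp(x)) ** (1.0 / 3.0)

for h in (1e-3, h_opt, 1e-10):
    approx = (math.exp(x + h) - math.exp(x - h)) / (2.0 * h)
    print(f"h = {h:.1e}   error = {abs(approx - exact):.1e}")
# Errors near 1.7e-7, a few times 1e-11, and ~2e-6 respectively, matching the text.
```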

Higher-Order Finite Difference Methods

Multi-Point Formulas

Multi-point formulas for approximating the first derivative employ stencils involving more than three function evaluations to attain higher-order accuracy, typically O(h^k) with k > 2, by canceling lower-order error terms in the Taylor expansion. These formulas are constructed by assuming a polynomial interpolant of degree m-1 through m points and differentiating it, or equivalently, by solving a linear system derived from Taylor expansions around the evaluation point so that the coefficients reproduce the desired derivative while nullifying the constant, linear, and other lower-order terms up to the target order. The interpolation approach provides explicit coefficients via the derivative of the Lagrange interpolating polynomial, while the method of undetermined coefficients sets up equations like \sum c_i = 0, \sum i c_i = 1, \sum i^j c_i = 0 for j=2 to m-1, scaled by 1/h. A representative example is the five-point forward difference formula, which achieves O(h^4) accuracy using points at x, x+h, x+2h, x+3h, and x+4h: f'(x) \approx \frac{-25 f(x) + 48 f(x + h) - 36 f(x + 2h) + 16 f(x + 3h) - 3 f(x + 4h)}{12 h}. This formula arises from solving the Taylor coefficient system for forward-biased points, with a leading error term of \frac{h^4}{5} f^{(5)}(\xi) for some \xi in the interval. Central multi-point formulas, symmetric around x, are often preferred for their higher efficiency in error reduction per evaluation, as they naturally eliminate odd-powered error terms. The coefficients for central first-derivative approximations on a uniform grid, omitting the 1/h factor, are tabulated below for accuracy orders up to sixth (a seven-point stencil). These weights w_k satisfy f'(x) \approx \sum_{k=-n}^{n} w_k f(x + k h) / h and are derived via the method of undetermined coefficients from Taylor expansions.
Accuracy Order | Stencil Points (k) | Coefficients w_k (for k = -n to n)
2 (O(h^2))     | -1, 0, 1           | -1/2, 0, 1/2
4 (O(h^4))     | -2 to 2            | 1/12, -2/3, 0, 2/3, -1/12
6 (O(h^6))     | -3 to 3            | -1/60, 3/20, -3/4, 0, 3/4, -3/20, 1/60
The leading error term for the sixth-order formula is -h^6 f^{(7)}(\xi) / 140 for some \xi near x. Although multi-point formulas significantly reduce truncation error (scaling as O(h^k) with larger k), the computational cost rises linearly with the number of points required, and the alternating-sign coefficients can amplify round-off errors in finite-precision arithmetic, limiting practicality for very high orders or noisy data. For illustration, consider applying the five-point central formula (O(h^4)) to f(x) = \sin x at x = \pi/4 with h = 0.1. The exact derivative is \cos(\pi/4) = \sqrt{2}/2 \approx 0.707107. The function values are f(\pi/4 - 0.2) \approx 0.552531, f(\pi/4 - 0.1) \approx 0.632981, f(\pi/4 + 0.1) \approx 0.774167, and f(\pi/4 + 0.2) \approx 0.833492. Substituting into the formula yields \approx 0.707104, with absolute error \approx 2.3 \times 10^{-6}, demonstrating the enhanced accuracy over the basic O(h^2) central difference (error \approx 1.2 \times 10^{-3}).
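A minimal sketch of the five-point central formula, reproducing the sin example above (names illustrative):

```python
import math

def central_five_point(f, x, h):
    """Five-point central approximation of f'(x); truncation error O(h^4)."""
    return (f(x - 2*h) - 8.0*f(x - h) + 8.0*f(x + h) - f(x + 2*h)) / (12.0 * h)

x, h = math.pi / 4.0, 0.1
exact = math.cos(x)                                       # ~0.707107
approx5 = central_five_point(math.sin, x, h)              # ~0.707104, error ~2.3e-6
approx3 = (math.sin(x + h) - math.sin(x - h)) / (2.0*h)   # ~0.705929, error ~1.2e-3
print(approx5, approx3, exact)
```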

Richardson Extrapolation

Richardson extrapolation is a technique in numerical analysis for enhancing the accuracy of approximations by combining estimates obtained with different step sizes, thereby canceling lower-order error terms in the error expansion. Developed by Lewis Fry Richardson as part of his pioneering work on numerical solutions of differential equations for weather prediction, the method systematically refines approximations without requiring the derivation of higher-order stencils.

The core principle stems from the asymptotic form of the error in numerical differentiation formulas. Consider a base approximation A(h) to the first derivative f'(x), which admits an expansion of the form A(h) = f'(x) + c h^k + O(h^{k+1}), where k \geq 1 is the order of accuracy, c is a constant depending on higher derivatives of f, and the remaining terms are of higher order. The approximation with halved step size is then A\left(\frac{h}{2}\right) = f'(x) + c \left(\frac{h}{2}\right)^k + O(h^{k+1}) = f'(x) + \frac{c h^k}{2^k} + O(h^{k+1}). Subtracting the first equation from 2^k times the second yields 2^k A\left(\frac{h}{2}\right) - A(h) = (2^k - 1) f'(x) + O(h^{k+1}), so solving for f'(x) gives the improved estimate f'(x) \approx \frac{2^k A\left(\frac{h}{2}\right) - A(h)}{2^k - 1}, accurate to order at least k+1. This eliminates the leading h^k error term, assuming the expansion holds and the constant c is the same for both approximations.

A common application upgrades the second-order central difference formula, A(h) = \frac{f(x+h) - f(x-h)}{2h}, which has k=2 and truncation error involving f'''(\xi) h^2 / 6 for some \xi. Here the extrapolation formula simplifies to f'(x) \approx \frac{4 A\left(\frac{h}{2}\right) - A(h)}{3}, yielding fourth-order accuracy O(h^4). This requires four function evaluations, at x \pm h/2 and x \pm h, and is equivalent to applying a wider multi-point stencil built from the base central difference.

For higher-order improvements, the process is applied recursively, forming a tableau analogous to that of Romberg integration. Starting with first-column entries computed from successively halved step sizes h, h/2, h/4, \dots, each subsequent column combines prior entries using the general formula with increasing k, such as k=4 for the next level: \frac{16 B\left(\frac{h}{2}\right) - B(h)}{15}, where B(h) is the previous extrapolation. This builds a triangular array whose diagonal or final-column entries provide estimates of order O(h^{2m}) after m levels, assuming only even powers appear in the error expansion, as is the case for symmetric differences. The method requires that the error terms follow a power series in even (or, with adjustment, odd) powers of h, which holds for smooth functions and centered formulas; one-sided approximations, whose expansions contain every power of h, can still be extrapolated, but each level then removes only one power of h, using the factor 2^k appropriate to the current leading order. It also demands multiple function evaluations, roughly doubling with each refinement level, so it is most economical when base evaluations are inexpensive or can be shared across levels.

As an illustrative example, consider improving the central difference approximation to f'(0) for f(x) = x^2, where the exact derivative is f'(x) = 2x, so f'(0) = 0. The base formula gives A(h) = \frac{h^2 - (-h)^2}{2h} = 0 exactly, independent of h, because f'''(x) = 0 and the symmetry of the parabola cancels the leading error. Applying the extrapolation with h and h/2 yields \frac{4 \cdot 0 - 0}{3} = 0, confirming the exactness and demonstrating that the method preserves accuracy when lower-order error terms vanish.
For a non-trivial case such as f(x) = e^x at x=0 (exact f'(0)=1), with h=0.1: A(0.1) \approx 1.0016675, A(0.05) \approx 1.0004167, and the extrapolated value is \frac{4 \cdot 1.0004167 - 1.0016675}{3} \approx 0.9999998, reducing the error from about 1.7 \times 10^{-3} to roughly 2 \times 10^{-7}.
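A minimal sketch of one extrapolation level applied to the central difference, reproducing the e^x figures above (names illustrative):

```python
import math

def central(f, x, h):
    """O(h^2) central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_once(f, x, h):
    """One Richardson level on the central difference (k = 2), giving O(h^4)."""
    return (4.0 * central(f, x, h / 2.0) - central(f, x, h)) / 3.0

# f(x) = e^x at x = 0, exact derivative 1
print(central(math.exp, 0.0, 0.1))          # ~1.0016675
print(richardson_once(math.exp, 0.0, 0.1))  # ~0.9999998, error ~2e-7
```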

Approximating Higher Derivatives

Second Derivatives

The central finite difference approximation for the second derivative of a function f at a point x is given by f''(x) \approx \frac{f(x + h) - 2f(x) + f(x - h)}{h^2}, which achieves second-order accuracy, O(h^2). This formula arises from Taylor expansions of f(x + h) and f(x - h) around x: f(x + h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(\xi_1), f(x - h) = f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(\xi_2), for some \xi_1 \in (x, x + h) and \xi_2 \in (x - h, x). Adding these expansions cancels the odd-powered terms, yielding f(x + h) + f(x - h) = 2f(x) + h^2 f''(x) + \frac{h^4}{24} \left[ f^{(4)}(\xi_1) + f^{(4)}(\xi_2) \right]. Rearranging and dividing by h^2 gives the approximation, with truncation error -\frac{h^2}{12} f^{(4)}(\xi) for some \xi \in (x - h, x + h), confirming the O(h^2) accuracy. When function evaluations are unavailable on one side of x, one-sided approximations are used; for example, the forward three-point formula f''(x) \approx \frac{f(x + 2h) - 2f(x + h) + f(x)}{h^2} has only first-order accuracy, O(h). Second-derivative approximations are particularly sensitive to round-off errors because the numerator f(x + h) - 2f(x) + f(x - h) involves subtraction of nearly equal values: it is O(h^2) while the individual terms are O(1), so relative floating-point errors are amplified by a factor of roughly 1/h^2. For instance, consider approximating f''(1) where f(x) = \sin x and h = 0.1. The true value is f''(1) = -\sin(1) \approx -0.841471. The central formula yields \frac{\sin(1.1) - 2\sin(1) + \sin(0.9)}{0.01} \approx -0.840770, with absolute error \approx 7.01 \times 10^{-4}. The error bound is \frac{h^2}{12} \max |f^{(4)}(\xi)| = \frac{h^2}{12} \max |\sin \xi| \leq \frac{0.01}{12} \approx 8.33 \times 10^{-4}, consistent with the observed value since |f^{(4)}(x)| = |\sin x| \leq 1.
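A minimal sketch reproducing the example (names illustrative):

```python
import math

def second_central(f, x, h):
    """Three-point central approximation of f''(x); truncation error O(h^2)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# f(x) = sin x at x = 1; exact f''(1) = -sin(1) ~ -0.841471
approx = second_central(math.sin, 1.0, 0.1)
print(approx, abs(approx - (-math.sin(1.0))))   # ~ -0.840770, error ~7.0e-4
```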

General Higher Derivatives

To approximate the nth derivative f^{(n)}(x) using finite differences, the forward difference operator \Delta is applied iteratively, where \Delta f(x) = f(x + h) - f(x) and higher powers are defined recursively as \Delta^n f(x) = \Delta (\Delta^{n-1} f(x)). This yields the explicit form \Delta^n f(x) = \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} f(x + k h), and the approximation is \frac{\Delta^n f(x)}{h^n} \approx f^{(n)}(x), with a leading truncation error of O(h). Central difference formulas, which generally provide higher accuracy by symmetrizing around x, give approximations for even and odd n with binomial-type coefficients derived from Taylor expansions or generating functions. These stencils use points symmetric about x, \sum_{k=-m}^m c_k f(x + k h) / h^n \approx f^{(n)}(x), where the coefficients c_k are chosen to cancel lower-order terms up to the desired accuracy. The general error of such finite difference approximations is O(h^k), where k is the order of accuracy determined by the number of points and their arrangement in the stencil; for example, using n+1 points yields at best O(h) in the forward case, while central schemes with more points achieve larger k. A key challenge in computing higher derivatives numerically is the oscillatory nature of the stencil coefficients, particularly for large n, where alternating signs cause severe cancellation and amplify round-off errors, often rendering the approximations unstable beyond the third or fourth derivative without high-precision arithmetic. As an illustrative example, consider approximating the third derivative of f(x) = x^4 at x = 0 with h = 0.1 using the forward difference stencil: \frac{\Delta^3 f(0)}{h^3} = \frac{-f(0) + 3f(0.1) - 3f(0.2) + f(0.3)}{0.001} = \frac{3(0.0001) - 3(0.0016) + 0.0081}{0.001} = 3.6, while the exact value is f'''(0) = 0; the discrepancy is exactly the leading O(h) error term \frac{3h}{2} f^{(4)}(0) = \frac{3 \cdot 0.1}{2} \cdot 24 = 3.6.
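A minimal sketch of the iterated forward difference for an arbitrary order n (illustrative; uses math.comb from the standard library):

```python
from math import comb

def forward_nth_derivative(f, x, n, h):
    """Approximate f^(n)(x) with the iterated forward difference Delta^n f / h^n; O(h) accurate."""
    delta_n = sum((-1) ** (n - k) * comb(n, k) * f(x + k * h) for k in range(n + 1))
    return delta_n / h ** n

# Example from the text: f(x) = x^4, third derivative at x = 0 with h = 0.1
print(forward_nth_derivative(lambda t: t ** 4, 0.0, 3, 0.1))   # ~3.6, vs. exact value 0
```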

Alternative Numerical Methods

Complex-Step Methods

Complex-step methods provide a numerical approximation to the derivative of a real-valued function by evaluating the function at a complex argument, leveraging complex arithmetic to achieve high accuracy without the subtractive cancellation inherent in traditional finite difference schemes. The core formula is f'(x) \approx \frac{\operatorname{Im} \left[ f(x + i h) \right]}{h}, where i = \sqrt{-1} is the imaginary unit and h > 0 is a small real step size; this approximation attains second-order accuracy, O(h^2), using only a single function evaluation offset from the real axis. The formula derives from the Taylor series expansion of f around x, extended to the complex plane: f(x + i h) = f(x) + i h f'(x) - \frac{h^2}{2!} f''(x) - i \frac{h^3}{3!} f'''(x) + \cdots. The imaginary part is \operatorname{Im} \left[ f(x + i h) \right] = h f'(x) - \frac{h^3}{3!} f'''(x) + \cdots, so dividing by h yields the approximation with O(h^2) error. Equivalently, by the Cauchy-Riemann equations for the holomorphic extension f(z) = u(x,y) + i v(x,y), the derivative satisfies f'(x) = \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} at y = 0, making \frac{\operatorname{Im} \left[ f(x + i h) \right]}{h} \approx \frac{v(x, h) - v(x, 0)}{h} a one-sided difference in the imaginary direction that matches the accuracy of a central difference in the real direction while avoiding symmetric evaluations. The primary advantage is the absence of subtractive cancellation, since no differencing of nearly equal real quantities occurs; this allows h to be taken as small as machine epsilon (typically about 10^{-16} in double precision), yielding accuracy limited essentially by machine precision rather than by cancellation. Such methods are particularly valuable for codes written for real arithmetic, which can often be extended to complex inputs with minimal modification, enabling accurate derivative and gradient computation in applications such as optimization and sensitivity analysis. However, complex-step methods require the function to admit an analytic extension to a neighborhood of x in the complex plane, restricting applicability to holomorphic functions; non-analytic or discontinuous functions cannot be reliably differentiated this way. For illustration, consider f(x) = e^x at x = 0, where the exact derivative is f'(0) = 1. Using the complex step with h = 10^{-8}, f( i \cdot 10^{-8} ) = e^{i \cdot 10^{-8}} = \cos(10^{-8}) + i \sin(10^{-8}), so \frac{\operatorname{Im} \left[ e^{i \cdot 10^{-8}} \right]}{10^{-8}} = \frac{\sin(10^{-8})}{10^{-8}} \approx 1.000000000000000, accurate to nearly 16 decimal places, far superior to a forward finite difference at the same h, where round-off degrades the result to about 7-8 correct digits.
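A minimal sketch using Python's built-in complex type (names illustrative):

```python
import cmath
import math

def complex_step(f, x, h=1e-20):
    """Complex-step approximation of f'(x); free of subtractive cancellation, so h can be tiny."""
    return f(complex(x, h)).imag / h

# f(x) = e^x at x = 0 (exact derivative 1), compared with a forward difference at the same h
h = 1e-8
print(complex_step(cmath.exp, 0.0, h))     # 1.0 to roughly machine precision
print((math.exp(h) - math.exp(0.0)) / h)   # ~0.9999999..., only about 8 correct digits
```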

Differential Quadrature

Differential quadrature is a numerical technique for approximating the derivatives of a function at discrete grid points by expressing them as weighted linear sums of the function values at those points (Bellman et al., 1972). Specifically, the nth-order derivative at point x_i is approximated as f^{(n)}(x_i) \approx \sum_{j=1}^N w_{ij}^{(n)} f(x_j), where N is the number of grid points and the weighting coefficients w_{ij}^{(n)} are derived from polynomial interpolation schemes such as Lagrange or Hermite interpolation (Bellman et al., 1972; Shu & Richards, 1992). This approach generalizes local finite difference methods by incorporating global information across the entire grid, enabling high-order accuracy without explicitly constructing the full interpolating polynomial.

The weighting coefficients are typically computed from the Lagrange interpolation basis, with A_{ij} = w_{ij}^{(1)} = l_j'(x_i) for the first derivative. An efficient formulation uses the precomputed products c_k = \prod_{\substack{m=1 \\ m \neq k}}^N (x_k - x_m), giving A_{ij} = \frac{c_i}{c_j (x_i - x_j)} \quad (i \neq j), \qquad A_{ii} = -\sum_{\substack{j=1 \\ j \neq i}}^N A_{ij}, where the diagonal entries follow from the fact that the derivative of a constant is zero, so each row of the weight matrix sums to zero (Shu & Richards, 1992). Higher-order coefficients w_{ij}^{(n)} for n \geq 2 are obtained recursively from the lower-order ones, for example B_{ij} = w_{ij}^{(2)} = \sum_{k=1}^N A_{ik} w_{kj}^{(1)}, allowing systematic extension to arbitrary derivative orders (Shu & Richards, 1992; Bert & Malik, 1996).

The method achieves high-order accuracy, often up to order N-1 for smooth functions on N points, surpassing traditional local finite differences in efficiency for the same accuracy level (Bert & Malik, 1996). On non-uniform grids such as Chebyshev-Lobatto points, differential quadrature exhibits spectral-like convergence, with errors decreasing exponentially in N for analytic functions, making it particularly effective for problems requiring precise derivative evaluations over an entire domain (Shu, 2000; Bert & Malik, 1996). While primarily employed for solving boundary value problems in ordinary and partial differential equations, such as structural vibrations and heat conduction, differential quadrature also serves as a standalone tool for direct derivative approximation (Bellman et al., 1972; Bert & Malik, 1996; Shu, 2000). Its grid-based, multi-point nature distinguishes it from local methods, offering advantages in stability and accuracy for global problems.

For illustration, consider approximating the first and second derivatives of f(x) = \sin(\pi x) on a uniform grid with N=5 points in [0,1], at x_i = (i-1)/4 for i=1,\dots,5. The function values are f(x_i) = [0, \sqrt{2}/2, 1, \sqrt{2}/2, 0]. Using the Lagrange-based weights, the first derivative at the central point x_3 = 0.5 comes out exactly 0 (exact value: \pi \cos(\pi/2) = 0), the symmetry of the data canceling all contributions. The second derivative at the same point is approximately -9.830, compared with the exact -\pi^2 \sin(\pi/2) \approx -9.8696, an error of about 0.4% from only five uniformly spaced points; refining the grid or using Chebyshev-Lobatto points reduces this error rapidly (Shu & Richards, 1992). This example highlights the method's ability to capture derivatives accurately on simple grids, extensible to more complex functions.
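A minimal numpy sketch of these weight formulas, reproducing the \sin(\pi x) example (the function and variable names are illustrative, not taken from the cited papers):

```python
import numpy as np

def dq_weights_first(x):
    """First-derivative differential-quadrature matrix A with A[i, j] ~ l_j'(x_i)."""
    n = len(x)
    c = np.array([np.prod([x[k] - x[m] for m in range(n) if m != k]) for k in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = c[i] / (c[j] * (x[i] - x[j]))
        A[i, i] = -A[i].sum()   # each row sums to zero (derivative of a constant is 0)
    return A

# Uniform 5-point grid on [0, 1] and f(x) = sin(pi x)
x = np.linspace(0.0, 1.0, 5)
f = np.sin(np.pi * x)
A = dq_weights_first(x)
B = A @ A                       # second-derivative weights via the recursion w^(2) = A w^(1)
print((A @ f)[2])               # ~0.0   (exact: pi cos(pi/2) = 0)
print((B @ f)[2])               # ~-9.83 (exact: -pi^2 ~ -9.8696)
```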
