Polynomial interpolation

Polynomial interpolation is a fundamental method in numerical analysis for constructing a polynomial of the lowest possible degree that exactly passes through a given set of distinct data points (x_i, y_i) for i = 0, 1, \dots, n, thereby providing an approximation to an underlying continuous function f(x) at those points. This unique interpolating polynomial p_n(x) of degree at most n satisfies p_n(x_i) = y_i = f(x_i) for all i; its existence and uniqueness follow from the invertibility of the associated Vandermonde system when the nodes are distinct, or equivalently from the fact that a nonzero polynomial of degree at most n cannot vanish at n+1 distinct points. Common techniques for deriving the interpolating polynomial include the Lagrange form, which expresses p_n(x) as a weighted sum \sum_{i=0}^n y_i \ell_i(x), where each basis polynomial \ell_i(x) = \prod_{j \neq i} \frac{x - x_j}{x_i - x_j} equals 1 at x_i and 0 at the other points, and the Newton divided-difference form, which builds the polynomial incrementally using divided differences to facilitate efficient computation and the addition of new points.

These methods originated in the 17th and 18th centuries, with Isaac Newton developing the divided-difference approach around 1675 for applications in finite differences and quadrature, followed by Edward Waring's publication of the Lagrange-like formula in 1779, Leonhard Euler's rediscovery in 1783, and Joseph-Louis Lagrange's formal presentation in 1795 as a simple tool for astronomical computations.

The importance of polynomial interpolation lies in its versatility for approximating functions from discrete data, enabling tasks such as numerical differentiation, integration (e.g., via quadrature rules like Simpson's), and the solution of ordinary differential equations, while also finding applications in computer-aided design, signal processing, and machine learning for curve fitting and data smoothing. However, the approximation's accuracy depends on the error term f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^n (x - x_i) for some \xi between the minimum and maximum of x and the x_i, which highlights phenomena such as the Runge phenomenon, in which high-degree polynomials oscillate wildly for equispaced points, often necessitating clustered nodes such as Chebyshev points for stability.

Fundamentals

Definition and Motivation

Polynomial interpolation is a fundamental technique in numerical analysis for constructing a polynomial that exactly matches a given set of data points. Given a collection of n+1 distinct points (x_0, y_0), \dots, (x_n, y_n), where the x_i are distinct real or complex numbers, the task is to determine a polynomial P(x) of degree at most n satisfying P(x_i) = y_i for each i = 0, \dots, n. This polynomial serves as an exact representation at the specified nodes x_i, while providing an approximation to any underlying function generating the y_i values at other points. The foundation of polynomial interpolation rests on the basic properties of polynomials, which are finite sums of non-negative integer powers of the variable, expressed as P(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n, where the a_k are constant coefficients. Interpolation leverages this structure to achieve exact agreement at the nodes, distinguishing it from general function approximation methods that may only seek closeness without exact matches. The motivation stems from practical needs in computation and science, where discrete observations must be extended to continuous models, such as filling gaps in tabulated data or estimating function values between measurements.

Historically, polynomial interpolation emerged from efforts in numerical analysis to facilitate the creation and use of mathematical tables, with early roots in ancient astronomy for predicting celestial positions. The modern development is credited to Isaac Newton, who initiated work on interpolation using finite differences around 1675 to address irregularities in observational data, though his methods were not published until 1711. In the 18th century, Joseph-Louis Lagrange advanced the field by deriving an explicit formula for the interpolating polynomial, published in 1795 and building on prior contributions such as Edward Waring's of 1779; this formulation emphasized direct construction from the data points without relying on differences.

A simple illustration occurs for n=1, known as linear interpolation, where the polynomial is the straight line connecting two points: P(x) = y_0 + \frac{y_1 - y_0}{x_1 - x_0} (x - x_0). This formula interpolates linearly between (x_0, y_0) and (x_1, y_1), providing an intuitive entry point to the general case.
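As a concrete illustration of the n = 1 case, the following short Python sketch evaluates the linear interpolant through two points; the function name lerp and the sample values are chosen here purely for illustration.

    def lerp(x0, y0, x1, y1, x):
        """Evaluate the degree-1 interpolant through (x0, y0) and (x1, y1) at x."""
        return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

    # The line through (1, 2) and (3, 8) takes the value 5 at the midpoint x = 2.
    print(lerp(1.0, 2.0, 3.0, 8.0, 2.0))   # -> 5.0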

Basic Applications

Polynomial interpolation serves as a foundational tool in numerical analysis for approximating integrals and derivatives from discrete data sets. By constructing a polynomial that passes through given points, one can integrate or differentiate this polynomial to estimate the integral or derivative of the underlying function, providing a practical method when analytical forms are unavailable. This approach underpins techniques like Simpson's rule for numerical quadrature, where the interpolating polynomial enables accurate approximation of definite integrals over intervals defined by sample points.

In data smoothing and curve fitting, polynomial interpolation generates continuous representations from scattered measurements, which is essential in engineering and computer graphics for creating smooth plots and visualizations. It provides an exact fit to the data points, making it suitable for clean datasets where trends must be preserved and tasks such as trajectory prediction in design software supported. For noisy data, however, regression-based curve fitting with lower-degree polynomials is preferred, since it approximates the underlying trend while reducing the effect of noise. With appropriately distributed nodes, moderate-degree interpolants avoid the severe oscillations of very high-degree fits, making them suitable for initial curve approximations in fields requiring visual or analytical continuity.

Polynomial interpolation plays a key role in signal processing by reconstructing continuous signals from discrete samples, such as in audio and image resampling where it estimates values between known points to avoid aliasing. Polynomial-based finite impulse response (FIR) interpolation filters, including Lagrange-derived designs, enable efficient upsampling while preserving signal integrity in digital systems. In scientific computing, it supports solving differential equations through methods like the method of lines, where spatial derivatives are approximated via interpolation before time-stepping; this adoption surged in the mid-20th century with the rise of electronic computers, enabling numerical simulations in physics and engineering. Polynomial interpolation also finds use in fields such as meteorology for estimating values from sparse observations, though specialized methods like kriging are common for spatial data.

Existence and Uniqueness

Interpolation Theorem

The interpolation theorem, also known as the existence and uniqueness theorem for polynomial interpolation, asserts that given n+1 distinct points x_0, x_1, \dots, x_n in the real numbers and arbitrary values y_0, y_1, \dots, y_n, there exists a unique polynomial P(x) of degree at most n such that P(x_i) = y_i for each i = 0, 1, \dots, n. This result guarantees that the interpolation problem is well-posed under these conditions, providing a foundational basis for constructing approximations to functions using polynomials. From a vector space perspective, the space of polynomials of degree at most n over the reals forms a vector space of dimension n+1, with the standard monomial basis \{1, x, x^2, \dots, x^n\}. Interpolating given values at n+1 distinct points corresponds to solving a linear system V \mathbf{c} = \mathbf{y}, where V is the (n+1) \times (n+1) Vandermonde matrix with entries V_{ij} = x_i^{j} (for i,j = 0, \dots, n), \mathbf{c} are the coefficients of the polynomial, and \mathbf{y} = (y_0, \dots, y_n)^T. The Vandermonde matrix is invertible precisely when the x_i are distinct, ensuring a unique solution for \mathbf{c}. This linear algebraic formulation extends naturally to the complex numbers, where the theorem holds analogously for distinct points in \mathbb{C}, as the Vandermonde determinant remains nonzero. Intuitively, the theorem arises because the evaluations at n+1 distinct points define n+1 linearly independent linear functionals on the (n+1)-dimensional space of polynomials of degree at most n, yielding a trivial kernel for the interpolation operator and thus guaranteeing existence and uniqueness.
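The linear-algebra formulation can be checked numerically. The sketch below (a minimal example assuming NumPy; the node and data values are arbitrary) builds the Vandermonde matrix for distinct nodes, confirms that its determinant matches the product formula, and solves V c = y for the unique coefficient vector.

    import numpy as np
    from itertools import combinations

    x = np.array([0.0, 1.0, 2.0, 4.0])   # distinct interpolation nodes
    y = np.array([1.0, 3.0, 2.0, 5.0])   # arbitrary data values

    # Vandermonde matrix with entries V[i, j] = x_i ** j.
    V = np.vander(x, increasing=True)

    # det(V) equals the product of (x_j - x_i) over i < j, hence is nonzero.
    prod = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
    assert np.isclose(np.linalg.det(V), prod)

    # Unique coefficient vector of the interpolating polynomial.
    c = np.linalg.solve(V, y)
    assert np.allclose(V @ c, y)          # p(x_i) = y_i at every node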

Proofs of Existence and Uniqueness

The existence and uniqueness of a polynomial interpolant of degree at most n for n+1 distinct points (x_0, y_0), \dots, (x_n, y_n) in \mathbb{R} can be established through both non-constructive and constructive arguments. A non-constructive proof relies on the linear algebra of the vector space of polynomials. The space \mathcal{P}_n of polynomials of degree at most n has dimension n+1, with the monomial basis \{1, x, \dots, x^n\}. The evaluation map from \mathcal{P}_n to \mathbb{R}^{n+1}, which sends a polynomial p to (p(x_0), \dots, p(x_n)), is linear. This map is represented by the (n+1) \times (n+1) Vandermonde matrix V with entries V_{ij} = x_i^{j} (indexing from 0). Since the x_i are distinct, \det(V) = \prod_{0 \leq i < j \leq n} (x_j - x_i) \neq 0, so V is invertible. Thus the map is bijective, implying that there exists a unique p \in \mathcal{P}_n such that p(x_i) = y_i for all i = 0, \dots, n.

Uniqueness can also be proven directly without reference to matrices. Suppose p, q \in \mathcal{P}_n both satisfy p(x_i) = y_i = q(x_i) for i = 0, \dots, n. Then r = p - q \in \mathcal{P}_n satisfies r(x_i) = 0 at n+1 distinct points. A nonzero polynomial of degree at most n can have at most n roots (by the factor theorem: if r vanished at all n+1 points it would be divisible by \prod_{i=0}^n (x - x_i) and hence have degree at least n+1). Thus r \equiv 0, so p = q.

For a constructive proof of existence, the Lagrange form provides an explicit interpolating polynomial, though its full derivation is omitted here. The basis polynomials l_i(x) (for i = 0, \dots, n) are defined such that l_i(x_j) = \delta_{ij} (Kronecker delta), ensuring their linear independence in \mathcal{P}_n. The interpolant p(x) = \sum_{i=0}^n y_i l_i(x) then satisfies p(x_j) = y_j by construction and lies in \mathcal{P}_n. Combined with the uniqueness argument above, this yields both existence and uniqueness.

An alternative existence proof proceeds by induction on n, yielding a recursive construction. For the base case n=0, the constant polynomial p_0(x) = y_0 uniquely interpolates the single point (x_0, y_0). Assume existence and uniqueness hold for n = k \geq 0: there is a unique p_k \in \mathcal{P}_k interpolating (x_0, y_0), \dots, (x_k, y_k). For n = k+1, let p_k^{(0)} interpolate the first k+1 points (x_0, y_0), \dots, (x_k, y_k), and p_k^{(1)} interpolate the last k+1 points (x_1, y_1), \dots, (x_{k+1}, y_{k+1}). Define p_{k+1}(x) = \frac{x - x_0}{x_{k+1} - x_0} p_k^{(1)}(x) + \frac{x_{k+1} - x}{x_{k+1} - x_0} p_k^{(0)}(x). This lies in \mathcal{P}_{k+1}, and direct substitution shows p_{k+1}(x_i) = y_i for i = 0, \dots, k+1 (at the intermediate points both sub-interpolants equal y_i and the two weights sum to 1). Uniqueness follows from the earlier root argument. By induction, existence and uniqueness hold for all n.
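The inductive construction in the last proof translates directly into a recursive algorithm (essentially Neville's scheme). The Python sketch below is written for clarity rather than efficiency; the function name and test data are illustrative only.

    def interp_recursive(xs, ys, x):
        """Evaluate the unique interpolant through (xs[i], ys[i]) at x via the
        recursive blend used in the inductive existence proof."""
        if len(xs) == 1:
            return ys[0]                                 # base case: constant polynomial
        p0 = interp_recursive(xs[:-1], ys[:-1], x)       # interpolant on x_0, ..., x_{n-1}
        p1 = interp_recursive(xs[1:], ys[1:], x)         # interpolant on x_1, ..., x_n
        # Blend so the result is exact at all of x_0, ..., x_n.
        return ((x - xs[0]) * p1 + (xs[-1] - x) * p0) / (xs[-1] - xs[0])

    # The points (0, 1), (1, 2), (2, 5) lie on x**2 + 1, which the recursion reproduces.
    print(interp_recursive([0.0, 1.0, 2.0], [1.0, 2.0, 5.0], 1.5))   # -> 3.25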

Uniqueness Corollary

A key consequence of the uniqueness theorem for polynomial interpolation is that any two polynomials of degree at most n that agree at n+1 distinct points must be identical. In particular, if the given data points arise from a polynomial of degree at most n, the unique interpolant of degree at most n coincides exactly with that original polynomial. In the context of Hermite interpolation, where data include both function values and derivatives at the nodes, uniqueness holds for the polynomial of appropriate degree satisfying these higher-order conditions, provided the total number of conditions matches the degree plus one. The uniqueness of the interpolating polynomial underpins the stability of numerical representations like the barycentric form, which leverages this property to enable efficient and robust evaluation without explicit coefficient computation, particularly for well-distributed nodes such as Chebyshev points. Extending to multiple variables, uniqueness can be achieved via tensor products of univariate interpolants over product grids, yielding a unique polynomial in the tensor-product space; however, for spaces of total degree at most n, uniqueness is more restricted and requires carefully poised point sets to avoid singularities.

Construction Methods

Lagrange Polynomial

The Lagrange form provides an explicit expression for the unique polynomial of degree at most n that interpolates n+1 given distinct points (x_0, y_0), \dots, (x_n, y_n) in the plane. It is constructed as P(x) = \sum_{i=0}^n y_i \ell_i(x), where the Lagrange basis polynomials \ell_i(x) are defined by \ell_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^n \frac{x - x_j}{x_i - x_j}. Each basis polynomial \ell_i(x) is of degree n and possesses the key property that \ell_i(x_k) = \delta_{ik} for k = 0, \dots, n, where \delta_{ik} is the Kronecker delta (equal to 1 if i = k and 0 otherwise). This ensures that P(x_k) = y_k for each interpolation point, while \ell_i(x) vanishes at all other nodes x_j with j \neq i. A primary advantage of the Lagrange form is its direct construction, which requires no solution of linear systems or prior computation of coefficients, making it particularly intuitive for theoretical purposes such as proving the existence of an interpolating polynomial. To illustrate, consider quadratic interpolation through the points (0, 0), (1, 1), and (2, 4). The basis polynomials are \ell_0(x) = \frac{(x-1)(x-2)}{(0-1)(0-2)} = \frac{(x-1)(x-2)}{2}, \ell_1(x) = \frac{(x-0)(x-2)}{(1-0)(1-2)} = \frac{x(x-2)}{-1} = -x(x-2), \ell_2(x) = \frac{(x-0)(x-1)}{(2-0)(2-1)} = \frac{x(x-1)}{2}. Thus, P(x) = 0 \cdot \ell_0(x) + 1 \cdot \ell_1(x) + 4 \cdot \ell_2(x) = -x(x-2) + 2x(x-1) = x^2, which matches the underlying quadratic function. Evaluating P(x) at a new point via the Lagrange form incurs a computational cost of O(n^2) arithmetic operations, as each of the n+1 basis polynomials requires O(n) products.
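A direct transcription of the Lagrange form into code reproduces the quadratic example above. The sketch below is a minimal implementation assuming NumPy; it is not optimized (the barycentric variant discussed later is faster for repeated evaluation).

    import numpy as np

    def lagrange_eval(xs, ys, x):
        """Evaluate the Lagrange-form interpolant through (xs[i], ys[i]) at x."""
        xs, ys = np.asarray(xs, float), np.asarray(ys, float)
        total = 0.0
        for i in range(len(xs)):
            # ell_i(x): product over j != i of (x - x_j) / (x_i - x_j).
            mask = np.arange(len(xs)) != i
            total += ys[i] * np.prod((x - xs[mask]) / (xs[i] - xs[mask]))
        return total

    # Quadratic through (0, 0), (1, 1), (2, 4): the interpolant is x**2.
    for t in (0.5, 1.5, 3.0):
        print(t, lagrange_eval([0, 1, 2], [0, 1, 4], t))   # matches t**2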

Newton Polynomial

The Newton polynomial provides an alternative representation of the interpolating polynomial, expressed in a nested or hierarchical form that facilitates efficient computation, particularly when data points are added incrementally. The polynomial P_n(x) of degree at most n that interpolates the function values f(x_i) = y_i at distinct nodes x_0, x_1, \dots, x_n is given by P_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_n \prod_{i=0}^{n-1} (x - x_i), where the coefficients a_k = f[x_0, x_1, \dots, x_k] are the divided differences of order k. This form, also known as Newton's divided difference interpolation formula, leverages a recursive structure for the coefficients, making it distinct from the direct product form in Lagrange interpolation.

Divided differences are defined recursively to compute these coefficients. The zeroth-order divided difference is simply the function value: f[x_i] = y_i. Higher-order divided differences are then obtained via f[x_i, x_{i+1}, \dots, x_{i+k}] = \frac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i} for k \geq 1, which is well defined because the nodes are distinct. These are typically organized into a divided difference table, where each entry in subsequent columns is calculated from the differences of the previous column divided by the corresponding node spacing. The top diagonal of this table, consisting of the first entry in each column, provides the coefficients a_k. For sufficiently smooth functions, the divided difference of order k equals f^{(k)}(\xi)/k! for some intermediate point \xi, so the divided differences carry information about the function's higher-order behavior.

A key advantage of the Newton form is its computational efficiency for updating the interpolant when a new data point is added, as only the new highest-order divided differences need to be computed, without recomputing prior terms. This hierarchical structure also supports adaptive interpolation strategies and provides insight into the function's smoothness through the magnitude of higher-order differences. To illustrate, consider interpolating the points (0, 0), (1, 1), (2, 8), and (3, 27), which correspond to f(x) = x^3. The divided difference table is constructed as follows:
x_i    f[x_i]    First-order    Second-order    Third-order
0        0
                      1
1        1                           3
                      7                               1
2        8                           6
                     19
3       27
The first-order differences are f[0,1] = \frac{1-0}{1-0} = 1, f[1,2] = \frac{8-1}{2-1} = 7, f[2,3] = \frac{27-8}{3-2} = 19. The second-order differences are f[0,1,2] = \frac{7-1}{2-0} = 3, f[1,2,3] = \frac{19-7}{3-1} = 6. The third-order difference is f[0,1,2,3] = \frac{6-3}{3-0} = 1. The resulting Newton polynomial is P_3(x) = 0 + 1(x - 0) + 3(x - 0)(x - 1) + 1(x - 0)(x - 1)(x - 2) = x^3, which exactly matches the underlying function. For equally spaced nodes, variants such as the Newton forward difference and backward difference forms adapt the divided difference approach using binomial coefficients, simplifying computations in tabular data scenarios.
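The divided-difference table and the nested evaluation of the Newton form can be written compactly; the sketch below (illustrative, assuming NumPy) reproduces the cubic example, recovering the coefficients [0, 1, 3, 1] from the top diagonal of the table.

    import numpy as np

    def divided_differences(xs, ys):
        """Return the Newton coefficients a_k = f[x_0, ..., x_k]."""
        xs = np.asarray(xs, float)
        coef = np.asarray(ys, float).copy()
        for k in range(1, len(xs)):
            # Overwrite entries k..n with the k-th order divided differences.
            coef[k:] = (coef[k:] - coef[k - 1:-1]) / (xs[k:] - xs[:-k])
        return coef                       # coef[k] == f[x_0, ..., x_k]

    def newton_eval(xs, coef, x):
        """Evaluate the Newton form by nested (Horner-like) multiplication."""
        result = coef[-1]
        for k in range(len(coef) - 2, -1, -1):
            result = result * (x - xs[k]) + coef[k]
        return result

    xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0]
    coef = divided_differences(xs, ys)    # -> [0., 1., 3., 1.]
    print(newton_eval(xs, coef, 1.5))     # -> 3.375 == 1.5**3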

Finite Difference Methods

Finite difference methods provide an efficient approach to polynomial interpolation when the interpolation points are equally spaced, leveraging discrete analogs of derivatives known as finite differences. These methods are particularly useful for tabulated data on uniform grids, as they allow the construction of the interpolating polynomial through iterated differences rather than direct computation of coefficients. The core idea relies on the forward difference operator \Delta, defined as \Delta y_i = y_{i+1} - y_i for data points y_i at x_i = x_0 + i h, where h is the constant spacing, with higher-order differences obtained by repeated application: \Delta^{k} y_i = \Delta^{k-1} y_{i+1} - \Delta^{k-1} y_i. Similarly, the backward difference operator \nabla is defined as \nabla y_i = y_i - y_{i-1}, with higher orders \nabla^k y_i = \nabla^{k-1} y_i - \nabla^{k-1} y_{i-1}.

Difference tables facilitate the computation of these operators by organizing the data in a tabular format, where each column contains the successive differences of the previous one. For a set of n+1 points, the table begins with the y-values in the zeroth column, followed by columns for \Delta y, \Delta^2 y, and so on, up to the nth order, forming a triangular array that reveals patterns such as constant nth differences for data sampled from a polynomial of degree n. The backward table is constructed analogously, starting the differences from the rightmost point. These tables enable quick evaluation of the higher-order differences needed for interpolation without redundant calculations.

The Newton forward difference formula expresses the interpolating polynomial using forward differences from the initial point x_0, with parameter s = (x - x_0)/h: P_n(x) = \sum_{k=0}^n \binom{s}{k} \Delta^k y_0, where \binom{s}{k} = \frac{s(s-1)\cdots(s-k+1)}{k!} is the generalized binomial coefficient. This form is derived from the general Newton series and is exact for polynomials of degree at most n. For interpolation near the start of the table (small s > 0), forward differences minimize error propagation due to rounding. The Newton backward difference formula, suited for interpolation near the end of the table, uses backward differences from x_n with parameter s = (x - x_n)/h: P_n(x) = \sum_{k=0}^n \binom{s+k-1}{k} \nabla^k y_n = y_n + s \nabla y_n + \frac{s(s+1)}{2!} \nabla^2 y_n + \cdots. Here s is typically negative for points before x_n, and the formula is well suited to that regime. Both formulas yield the unique interpolating polynomial of degree at most n.

For points near the middle of the table, central difference variants offer better numerical behavior by centering the differences. The central difference operator is defined as \delta y_i = y_{i+1/2} - y_{i-1/2}; at integer indices, odd-order central differences are handled via the averaged (mean) form, e.g., \mu\delta y_i = \frac{1}{2}(\Delta y_{i-1} + \Delta y_i). The Gauss forward formula uses central differences centered just after x_0, taking the form P_n(x) = y_0 + s\, \delta y_{1/2} + \frac{s(s-1)}{2!} \delta^2 y_0 + \frac{(s+1)s(s-1)}{3!} \delta^3 y_{1/2} + \frac{(s+1)s(s-1)(s-2)}{4!} \delta^4 y_0 + \cdots, with s = (x - x_0)/h. The Gauss backward formula mirrors this using the differences \delta y_{-1/2}, \delta^3 y_{-1/2}, and so on, reaching backward from x_0.

Stirling's formula combines the Gauss forward and backward forms as their arithmetic mean, providing a central difference interpolation centered on x_0: P_n(x) = y_0 + s\, \mu\delta y_0 + \frac{s^2}{2!} \delta^2 y_0 + \frac{s(s^2 - 1)}{3!} \mu\delta^3 y_0 + \frac{s^2 (s^2 - 1)}{4!} \delta^4 y_0 + \cdots, where \mu\delta^k y_0 = \frac{1}{2}\left(\delta^k y_{1/2} + \delta^k y_{-1/2}\right) denotes the averaged central difference. This variant excels for evaluations near the center of the table. Bessel's formula, another central variant, averages the Gauss forward formula from x_0 with the Gauss backward formula from x_1, weighting terms for evaluations near the midpoint between nodes: P_n(x) = \frac{y_0 + y_1}{2} + \left(s - \tfrac{1}{2}\right) \delta y_{1/2} + \frac{s(s-1)}{2!} \cdot \frac{\delta^2 y_0 + \delta^2 y_1}{2} + \frac{s(s-1)\left(s - \tfrac{1}{2}\right)}{3!} \delta^3 y_{1/2} + \cdots. It is particularly effective when the interpolation point lies close to the midpoint between two nodes.

A lozenge diagram visually represents the computation of differences in these tables, depicting the data points and successive differences as a lattice of parallelograms (lozenges), where diagonal paths correspond to specific interpolation formulas such as forward, backward, or central. This geometric aid highlights how the different formulas select entries from the same table. The finite difference formulas are special cases of the general Newton divided difference form, since the kth forward difference satisfies \Delta^k y_0 = k!\, h^k f[x_0, x_1, \dots, x_k] (reducing to \Delta^k y_0 = k! f[x_0, \dots, x_k] when h = 1), linking uniform-grid methods to the arbitrary-point case. A proof follows by induction on the order, verifying that each application of the difference operator scales the corresponding divided difference by the order and the spacing.
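For equally spaced data, the forward-difference table and the Newton forward formula described above are easy to implement directly. The sketch below is illustrative (NumPy assumed, function names hypothetical); it builds the top row of the difference table and accumulates the generalized binomial coefficients incrementally.

    import numpy as np

    def forward_differences(ys):
        """Return [y_0, Delta y_0, Delta^2 y_0, ..., Delta^n y_0]."""
        d = np.asarray(ys, float)
        top = [d[0]]
        for _ in range(len(ys) - 1):
            d = d[1:] - d[:-1]            # next column of the difference table
            top.append(d[0])
        return np.array(top)

    def newton_forward(x0, h, ys, x):
        """Newton forward difference interpolation at x, with s = (x - x0) / h."""
        s = (x - x0) / h
        diffs = forward_differences(ys)
        total, coeff = 0.0, 1.0           # coeff tracks binom(s, k)
        for k, d in enumerate(diffs):
            total += coeff * d
            coeff *= (s - k) / (k + 1)    # binom(s, k+1) = binom(s, k) * (s - k) / (k + 1)
        return total

    # Samples of f(x) = x**2 at x = 0, 0.5, 1.0, 1.5; interpolate at x = 0.8.
    print(newton_forward(0.0, 0.5, [0.0, 0.25, 1.0, 2.25], 0.8))   # -> 0.64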

Matrix-Based Methods

Matrix-based methods for polynomial interpolation formulate the problem as a linear system to find the coefficients of the interpolating polynomial in the monomial basis p(x) = \sum_{j=0}^n c_j x^j, where the polynomial passes through given points (x_i, y_i) for i = 0, \dots, n with distinct x_i. This leads to the system V \mathbf{c} = \mathbf{y}, where \mathbf{y} = [y_0, \dots, y_n]^T, \mathbf{c} = [c_0, \dots, c_n]^T, and the Vandermonde matrix V has entries V_{i,j} = x_i^j for i,j = 0, \dots, n. The matrix V is invertible under the distinct-points condition, guaranteeing a unique solution for \mathbf{c}. Solving this system directly via Gaussian elimination requires O(n^3) operations, but the Vandermonde structure admits specialized O(n^2) algorithms, such as the Björck–Pereyra algorithm. However, these matrices are notoriously ill-conditioned, with condition numbers growing exponentially with n, often on the order of 2^n or worse for equispaced points, leading to severe numerical instability in floating-point arithmetic even for moderate n (e.g., n > 20). This instability arises because the high powers in the columns form a nearly linearly dependent basis, amplifying rounding errors and causing loss of accuracy or spurious oscillations in the computed polynomial, particularly when points are clustered or spread unevenly.

To mitigate these issues without relying on the monomial basis, non-Vandermonde approaches like barycentric Lagrange interpolation provide stable alternatives for evaluation. In this method, the interpolant is expressed as p(x) = \frac{\sum_{j=0}^n \frac{w_j y_j}{x - x_j}}{\sum_{j=0}^n \frac{w_j}{x - x_j}}, where the barycentric weights are w_j = \frac{1}{\prod_{k \neq j} (x_j - x_k)}, computed in O(n^2) time. This formulation avoids explicit matrix inversion and is forward stable for well-chosen nodes (e.g., Chebyshev points), with each evaluation costing O(n) operations after preprocessing, making it suitable for multiple evaluations. For large-scale interpolation with thousands of points, fast hierarchical methods exploit low-rank structure in the relevant operators to achieve near-linear complexity. Fast multipole methods, adapted for one-dimensional problems, approximate the interpolation operator using multipole expansions, reducing the cost from O(n^2) to O(n \log n) or even O(n) for evaluation at m points when m \approx n. These post-2000 developments, often combined with barycentric forms, enable stable large-scale computations in applications like spectral methods.

As an illustrative example, consider interpolating the points (0, 0), (1, 1), (2, 4) with a quadratic polynomial, corresponding to y = x^2. The Vandermonde matrix is V = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{bmatrix}, and \mathbf{y} = [0, 1, 4]^T. Solving V \mathbf{c} = \mathbf{y} yields \mathbf{c} = [0, 0, 1]^T, confirming p(x) = x^2. For this small case the conditioning is harmless (\kappa_2(V) \approx 14), but the condition number grows rapidly for larger n.
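The worked example and the growth of the Vandermonde condition number can be reproduced in a few lines; the following sketch (assuming NumPy) is illustrative, and the grid [0, 1] used for the conditioning experiment is an arbitrary choice.

    import numpy as np

    # Worked example: nodes 0, 1, 2 with values 0, 1, 4 (data from y = x**2).
    V = np.vander(np.array([0.0, 1.0, 2.0]), increasing=True)   # rows [1, x_i, x_i**2]
    c = np.linalg.solve(V, np.array([0.0, 1.0, 4.0]))
    print(c)                                                    # -> [0. 0. 1.], i.e. p(x) = x**2

    # Conditioning of the Vandermonde matrix for n+1 equispaced nodes on [0, 1].
    for n in (5, 10, 20, 40):
        nodes = np.linspace(0.0, 1.0, n + 1)
        print(n, np.linalg.cond(np.vander(nodes, increasing=True)))
        # The printed condition numbers grow rapidly with n, illustrating the
        # ill-conditioning that motivates barycentric and other alternatives.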

Error and Approximation

Lagrange Error Formula

In polynomial interpolation, the error between a sufficiently smooth function f and its interpolating polynomial P_n of degree at most n at distinct nodes x_0, x_1, \dots, x_n in the interval [a, b] is given by the Lagrange error formula. For f \in C^{n+1}[a, b], the error at a point x \in [a, b] satisfies f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \omega_{n+1}(x), where \omega_{n+1}(x) = \prod_{k=0}^n (x - x_k) is the nodal polynomial, and \xi is some point in the smallest closed interval containing x and the nodes \{x_k\}_{k=0}^n. This formula arises from considering an auxiliary function g(t) = f(t) - P_n(t) - K \prod_{k=0}^n (t - x_k), where the constant K is chosen such that g(x) = 0. The function g vanishes at x and at each interpolation node x_k, yielding n+2 roots. Repeated application of Rolle's theorem to g and its derivatives implies that g^{(n+1)}(\xi) = 0 for some \xi in the relevant interval, which simplifies to the error expression above.

A bound on the error follows directly from the formula: assuming f^{(n+1)} is continuous on [a, b], |f(x) - P_n(x)| \leq \frac{\|f^{(n+1)}\|_\infty}{(n+1)!} \max_{x \in [a, b]} |\omega_{n+1}(x)|, where \|f^{(n+1)}\|_\infty = \max_{t \in [a, b]} |f^{(n+1)}(t)|. This highlights that the error depends on the smoothness of f via its (n+1)-th derivative and on the distribution of the nodes through the magnitude of the nodal polynomial. To minimize the error bound, the nodes should be chosen to reduce \max |\omega_{n+1}(x)|. On the interval [-1, 1], the Chebyshev nodes x_k = \cos\left(\frac{(2k+1)\pi}{2(n+1)}\right) for k = 0, \dots, n achieve this minimization, yielding \max |\omega_{n+1}(x)| = 2^{-n}. This choice significantly dampens interpolation errors compared to equispaced nodes, especially for higher degrees.

For example, consider interpolating f(x) = \sin x on [0, 0.8] using the degree-4 Lagrange polynomial at the equispaced nodes \{0, 0.2, 0.4, 0.6, 0.8\}. The approximation at x = 0.28 is P_4(0.28) \approx 0.2763591, while \sin(0.28) \approx 0.2763556, giving an actual error of about 3.5 \times 10^{-6}. The pointwise error bound at x = 0.28 is about 3.7 \times 10^{-6}, computed using \|f^{(5)}\|_\infty = 1 (since f^{(5)}(x) = \cos x) and |\omega_5(0.28)| \approx 4.5 \times 10^{-4}.
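The sin x example can be checked numerically. The sketch below (NumPy assumed; the small Lagrange evaluator is repeated only to keep the example self-contained) compares the actual error at x = 0.28 with the pointwise bound |ω_5(0.28)| / 5!.

    import numpy as np
    from math import factorial, sin

    nodes = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
    values = np.sin(nodes)

    def lagrange_eval(xs, ys, x):
        total = 0.0
        for i in range(len(xs)):
            mask = np.arange(len(xs)) != i
            total += ys[i] * np.prod((x - xs[mask]) / (xs[i] - xs[mask]))
        return total

    x = 0.28
    actual_error = abs(sin(x) - lagrange_eval(nodes, values, x))   # about 3.5e-6
    omega = np.prod(x - nodes)                                     # nodal polynomial at x
    bound = abs(omega) / factorial(5) * 1.0                        # max |f^(5)| = max |cos| = 1
    print(actual_error, bound)                                     # the error respects the bound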

Proof of Error Formula

To derive the error formula for polynomial interpolation, assume that the function f is continuously differentiable n+1 times on an interval containing the distinct interpolation nodes x_0, x_1, \dots, x_n and the evaluation point x \notin \{x_0, \dots, x_n\}, where P is the unique interpolating polynomial of degree at most n such that P(x_i) = f(x_i) for i = 0, \dots, n. Consider the auxiliary function g(t) = f(t) - P(t) - \lambda \omega(t), where \omega(t) = \prod_{i=0}^n (t - x_i) is the monic polynomial of degree n+1 vanishing at the nodes, and the constant \lambda is chosen such that g(x) = 0. This implies \lambda = \frac{f(x) - P(x)}{\omega(x)}. By construction, g(t) vanishes at the n+1 nodes x_0, \dots, x_n and also at x, yielding n+2 distinct roots. Applying Rolle's theorem repeatedly to g(t) between its consecutive roots shows that the first derivative g'(t) has at least n+1 roots, the second derivative g''(t) has at least n roots, and, continuing this process, the (n+1)-th derivative g^{(n+1)}(t) has at least one root \xi in the smallest interval containing all the roots of g. Differentiating g(t) n+1 times gives g^{(n+1)}(t) = f^{(n+1)}(t) - P^{(n+1)}(t) - \lambda \omega^{(n+1)}(t). Since \deg P \leq n, it follows that P^{(n+1)}(t) = 0. Moreover, as \omega(t) is monic of degree n+1, its (n+1)-th derivative is the constant (n+1)!. Thus, g^{(n+1)}(t) = f^{(n+1)}(t) - \lambda (n+1)!. Evaluating at the root \xi yields g^{(n+1)}(\xi) = 0, so \lambda = \frac{f^{(n+1)}(\xi)}{(n+1)!}. Equating the two expressions for \lambda produces the error formula: f(x) - P(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \omega(x), where \xi lies in the interval spanning the nodes and x.

If f is analytic, the result extends to the complex plane via Cauchy's integral formula applied to a suitable contour \Gamma enclosing the nodes and x, yielding the Hermite remainder f(x) - P(x) = \frac{1}{2\pi i} \oint_\Gamma \frac{\omega(x)}{\omega(t)} \frac{f(t)}{t - x} \, dt, a contour-integral analogue of the derivative-based formula. This style of error analysis traces back to the study of Lagrange's 1795 interpolation formula and its remainder for smooth and analytic functions.

Error for Equally Spaced Points

When interpolating a function f at equally spaced points, the error term in the Lagrange form specializes to highlight the role of the node distribution. The general remainder, as derived previously, is f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \omega(x) for some \xi between the min and max of x and the nodes, where \omega(x) = \prod_{i=0}^n (x - x_i). For equally spaced nodes on the interval [-1, 1] with spacing h = 2/n, the magnitude of \omega(x) decreases exponentially with n, but the uniform distribution leads to poorer performance compared to clustered nodes. In the Newton forward difference formulation, suitable for equispaced points, the interpolation polynomial uses divided differences that simplify to forward differences \Delta^k f(x_0). The error remainder is then expressed as f(x) - p_n(x) = \frac{\Delta^{n+1} f(\eta)}{(n+1)! h^{n+1}} \prod_{i=0}^n (x - x_i) for some \eta in the interval, where h is the uniform spacing. Since |\Delta^{n+1} f(\eta)| \leq h^{n+1} \max |f^{(n+1)}|, the error bound incorporates the scaled (n+1)th difference divided by h^{n+1} (n+1)!, emphasizing how uniform spacing exacerbates sensitivity to higher-order variations in f.

High-degree polynomial interpolation on equally spaced grids often exhibits instability, amplifying oscillations particularly near the interval endpoints. For instance, attempting to approximate functions like 1/(1 + 25x^2) with degrees exceeding 10 on uniform points in [-1, 1] results in wild oscillations, with error magnitudes increasing by orders of magnitude compared to low-degree fits. This instability stems from the ill-conditioned Vandermonde matrix underlying the interpolation, where small perturbations in data lead to large swings in the polynomial coefficients. To mitigate these issues, practitioners recommend limiting the polynomial degree to 5 or lower for equispaced data or employing node clustering near the endpoints to reduce the growth of the Lebesgue constant. Clustered distributions, such as those inspired by Chebyshev but adapted for uniformity constraints, help dampen endpoint oscillations without fully abandoning equal spacing. Compared to Chebyshev spacing, where \max |\omega(x)| \leq 2^{-n}, the equispaced case has a larger \max |\omega(x)| (decreasing as \sim (2/e)^n), resulting in a looser derivative-based error bound. However, the primary issue with equispaced nodes is the exponential growth of the Lebesgue constant, leading to slower convergence and potential divergence for functions analytic only in small Bernstein ellipses.
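The role of the nodal polynomial is easy to see numerically; the following sketch (illustrative, assuming NumPy) estimates max |ω_{n+1}(x)| on [-1, 1] by dense sampling and compares equispaced with Chebyshev nodes.

    import numpy as np

    def max_nodal_poly(nodes, num_samples=20001):
        """Approximate max over [-1, 1] of |prod_i (x - x_i)| by dense sampling."""
        xs = np.linspace(-1.0, 1.0, num_samples)
        omega = np.prod(xs[:, None] - nodes[None, :], axis=1)
        return np.max(np.abs(omega))

    for n in (5, 10, 20, 30):
        equi = np.linspace(-1.0, 1.0, n + 1)
        cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
        print(n, max_nodal_poly(equi), max_nodal_poly(cheb))
        # The Chebyshev values equal 2**(-n); the equispaced values are larger,
        # consistent with the looser error bound discussed above.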

Lebesgue Constants

In polynomial interpolation, the Lebesgue function associated with a set of n+1 distinct nodes x_0, x_1, \dots, x_n in an interval, say [-1, 1], is defined as \lambda_n(x) = \sum_{i=0}^n |\ell_i(x)|, where \ell_i(x) are the Lagrange basis polynomials satisfying \ell_i(x_j) = \delta_{ij}. The Lebesgue constant \Lambda_n is then the maximum value of this function over the interval, \Lambda_n = \max_{x \in [-1,1]} \lambda_n(x). This constant quantifies the worst-case amplification of errors in the interpolation process and serves as the operator norm of the interpolation projector onto the space of polynomials of degree at most n.

The Lebesgue constant provides a key error bound for the uniform norm of the interpolation error. Specifically, for a continuous function f on [-1,1] and its interpolant P_n of degree at most n, the error satisfies \|f - P_n\|_\infty \leq (1 + \Lambda_n)\, \mathrm{dist}(f, \Pi_n), where \Pi_n denotes the space of polynomials of degree at most n and \mathrm{dist}(f, \Pi_n) = \inf_{p \in \Pi_n} \|f - p\|_\infty is the best uniform approximation error. This inequality highlights how \Lambda_n measures the stability of the interpolation operator relative to the optimal polynomial approximation.

The growth of \Lambda_n depends critically on the choice of nodes. For equispaced nodes, the constant exhibits exponential growth, asymptotically \Lambda_n \sim \frac{2^{n+1}}{e n \log n} as n \to \infty, which underscores the instability of interpolation at such points for large n. In contrast, for Chebyshev nodes (the roots or extrema of the Chebyshev polynomial of the first kind), the growth is much more favorable, \Lambda_n = \frac{2}{\pi} \log n + O(1), making these nodes preferable for stable high-degree interpolation. Computing \Lambda_n for a given set of nodes typically involves numerical evaluation of the Lebesgue function, often by sampling \lambda_n(x) densely over the interval and taking its maximum, or by using optimization techniques to locate the exact maximum. Such computations are essential for assessing node quality in practice, as a smaller \Lambda_n implies a better-conditioned interpolation problem.

The implications of Lebesgue constants extend to numerical stability and the selection of optimal nodes. A large \Lambda_n signals potential ill-conditioning of the interpolation process, since small perturbations in the function values or nodes can be amplified into large errors in the interpolant. Consequently, minimizing \Lambda_n guides the design of node distributions; Chebyshev nodes achieve near-optimal growth, and further refinements like extended or projected Chebyshev nodes can yield even smaller constants while preserving the logarithmic asymptotics.
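The Lebesgue constant for a given node set can be estimated exactly as described, by sampling the Lebesgue function on a fine grid. The sketch below (a minimal approach assuming NumPy) compares equispaced and Chebyshev nodes.

    import numpy as np

    def lebesgue_constant(nodes, num_samples=20001):
        """Estimate Lambda_n = max over [-1, 1] of sum_i |ell_i(x)| by dense sampling."""
        nodes = np.asarray(nodes, float)
        xs = np.linspace(-1.0, 1.0, num_samples)
        lam = np.zeros_like(xs)
        for i in range(len(nodes)):
            mask = np.arange(len(nodes)) != i
            ell_i = np.prod((xs[:, None] - nodes[mask]) / (nodes[i] - nodes[mask]), axis=1)
            lam += np.abs(ell_i)          # accumulate |ell_i(x)| over the sample grid
        return lam.max()

    for n in (5, 10, 20):
        equi = np.linspace(-1.0, 1.0, n + 1)
        cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
        print(n, lebesgue_constant(equi), lebesgue_constant(cheb))
        # Equispaced constants grow roughly exponentially with n, while the
        # Chebyshev constants stay close to (2 / pi) * log(n).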

Convergence and Limitations

Convergence Properties

Polynomial interpolation converges uniformly to the underlying function f under specific conditions on the function's regularity and the choice of interpolation nodes. For analytic functions f on compact sets, such as closed intervals containing the interpolation domain, the use of Chebyshev nodes ensures exponential convergence of the interpolating polynomial as the degree n increases. This rapid convergence arises because the Chebyshev nodes, defined as x_k = \cos\left(\frac{(2k+1)\pi}{2(n+1)}\right) for k = 0, \dots, n, minimize the maximum deviation in the interpolation process for analytic functions, leading to error estimates of the form \|f - P_n\|_\infty = O(\rho^{-n}) where \rho > 1 depends on the size of the Bernstein ellipse enclosing the domain of analyticity. In contrast, for merely continuous functions on [-1, 1], convergence of Lagrange interpolation at equispaced nodes is not guaranteed and can fail dramatically, even though the Weierstrass approximation theorem ensures that polynomials are dense in the space of continuous functions. A seminal counterexample, due to Bernstein, demonstrates that the Lagrange interpolants to the continuous but non-smooth function f(x) = |x| at equispaced nodes diverge pointwise everywhere except at x = -1, 0, 1. This divergence highlights the sensitivity of equispaced interpolation to the function's lack of smoothness, such as the kink at the origin, underscoring that uniform continuity alone is insufficient for convergence in this nodal system. For general nodal systems, convergence holds if the nodes are sufficiently dense in the interval and the function f satisfies a suitable modulus of continuity, \omega(f, \delta), which quantifies the function's regularity by measuring the supremum of |f(t) - f(s)| over |t - s| \leq \delta. Specifically, if the nodes ensure a bounded or slowly growing Lebesgue constant and the modulus of continuity decays appropriately (e.g., \omega(f, \delta) = O(\delta^\alpha) for \alpha > 0), the interpolating polynomials converge uniformly to f. This condition allows interpolation to succeed for Hölder continuous functions, where the nodes' distribution prevents large oscillations. The rate of convergence in polynomial interpolation is intrinsically linked to the best uniform approximation error E_n(f) = \inf_{p \in \Pi_n} \|f - p\|_\infty, where \Pi_n denotes polynomials of degree at most n. For any interpolating polynomial P_n(f) at nodes \{x_0, \dots, x_n\}, the error satisfies E_n(f) \leq \|f - P_n(f)\|_\infty \leq (1 + \Lambda_n) E_n(f), with \Lambda_n = \max_{x \in [a,b]} \sum_{k=0}^n |\ell_k(x)| being the Lebesgue constant for the nodal system. This bound shows that the interpolation error is controlled by the best approximation but amplified by the stability factor \Lambda_n, which grows logarithmically for Chebyshev nodes (facilitating fast convergence) and exponentially for equispaced nodes (often leading to instability).

Runge Phenomenon

The Runge phenomenon illustrates the failure of polynomial interpolation to converge when using high-degree polynomials on equispaced nodes over a fixed interval, manifesting as pronounced oscillations near the endpoints. This behavior was discovered by Carl David Tolmé Runge in 1901 during his study of empirical functions and interpolation between equidistant points, where he demonstrated that increasing the degree of the interpolating polynomial can lead to divergence rather than improved approximation for certain smooth functions. A canonical example involves interpolating the Runge function f(x) = \frac{1}{1 + 25x^2} on the interval [-1, 1] using equispaced nodes with polynomial degrees up to n = 20. This function, a scaled version of the Witch of Agnesi, is analytic and smooth within the interval but develops large oscillations in the interpolant near x = \pm 1 as n grows, with error peaks exceeding the function's maximum value of 1 and reaching amplitudes greater than 5 for n > 10.

The phenomenon stems from the rapid growth of the nodal polynomial \omega(x) = \prod_{k=0}^n (x - x_k) in magnitude near the endpoints for equispaced nodes x_k = -1 + 2k/n, which amplifies the interpolation error term |f(x) - P_n(x)| \leq \frac{\max |f^{(n+1)}(\xi)|}{(n+1)!} |\omega(x)|. The poles of f at x = \pm i/5, located close to the real interval in the complex plane, cause the higher derivatives f^{(n+1)} to grow factorially, exacerbating the divergence despite the function's smoothness on [-1, 1]. Visually, the interpolants display a Gibbs-like ringing pattern, with alternating overshoots and undershoots intensifying at the boundaries, as seen in numerical plots where low-degree approximations (e.g., n = 5) remain accurate while higher degrees (e.g., n = 15) produce wild fluctuations unrelated to the original function's gentle decay. To address this issue, employing Chebyshev nodes, which cluster toward the endpoints following the density \propto 1/\sqrt{1 - x^2}, redistributes the oscillations inward and restores convergence for large n.
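The Runge example is straightforward to reproduce. The sketch below (illustrative, assuming NumPy) interpolates 1 / (1 + 25 x^2) at equispaced and Chebyshev nodes and reports the maximum error on a fine grid; a plain Lagrange evaluation is adequate at these modest degrees.

    import numpy as np

    def runge(x):
        return 1.0 / (1.0 + 25.0 * x**2)

    def interp_max_error(nodes, num_samples=2001):
        """Max of |runge - P_n| on [-1, 1] for the interpolant through the given nodes."""
        xs = np.linspace(-1.0, 1.0, num_samples)
        p = np.zeros_like(xs)
        for i in range(len(nodes)):
            mask = np.arange(len(nodes)) != i
            ell_i = np.prod((xs[:, None] - nodes[mask]) / (nodes[i] - nodes[mask]), axis=1)
            p += runge(nodes[i]) * ell_i
        return np.max(np.abs(p - runge(xs)))

    for n in (5, 10, 15, 20):
        equi = np.linspace(-1.0, 1.0, n + 1)
        cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
        print(n, interp_max_error(equi), interp_max_error(cheb))
        # Errors at equispaced nodes grow with n (the Runge phenomenon),
        # while errors at Chebyshev nodes shrink.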

Modern Extensions and Alternatives

Barycentric interpolation provides a numerically stable reformulation of the Lagrange basis, enabling efficient evaluation of the interpolating polynomial. The second form of the Lagrange interpolant uses precomputed barycentric weights defined by w_i = \frac{1}{\prod_{j \neq i} (x_i - x_j)}, where the evaluation at a point x is given by p(x) = \sum_{i=0}^n \frac{w_i}{x - x_i} y_i \Big/ \sum_{i=0}^n \frac{w_i}{x - x_i}. This approach achieves O(n) complexity per evaluation point and avoids explicit computation of high-degree polynomial coefficients, reducing the ill-conditioning issues inherent in the classical Lagrange form. Although its roots trace to the 1930s, its modern advocacy as a standard method stems from work in the early 2000s that highlighted its robustness for Chebyshev and other node distributions.

Piecewise polynomial interpolation, exemplified by splines, addresses the limitations of global high-degree polynomials by constructing low-degree polynomials on subintervals (spans between knots) with smoothness constraints at the junctions. Cubic splines, a common choice, employ piecewise cubics ensuring continuity of the function, first derivative, and second derivative, which together with suitable boundary conditions (such as natural or clamped end conditions) yields a unique interpolant for given data while providing local support: changes in one subinterval minimally affect others. This local control prevents the oscillatory artifacts of global interpolation, particularly for large numbers of points. The foundational theory for spline interpolation was developed in the mid-20th century, with B-splines introduced as a stable basis for computing these piecewise functions, facilitating efficient algorithms for knot insertion and degree elevation.

Radial basis function (RBF) methods extend interpolation to irregularly spaced (scattered) data in one or more dimensions, bypassing the need for the structured grids required by polynomial tensor products. The interpolant takes the form p(\mathbf{x}) = \sum_{i=1}^n \lambda_i \phi(\|\mathbf{x} - \mathbf{x}_i\|), where \phi is a radial function such as the multiquadric \phi(r) = \sqrt{1 + r^2}, and the coefficients \lambda_i solve a linear system. RBFs offer flexibility for high-dimensional problems and exact reproduction of data, with convergence rates depending on the smoothness of \phi and the data distribution. The existence and uniqueness of solutions for scattered data were rigorously established for conditionally positive definite RBFs, enabling their use in applications like surface reconstruction from point clouds.

Modern software tools like the Chebfun toolbox, initiated in the early 2000s, facilitate polynomial approximation to near machine precision by representing functions via truncated Chebyshev series on adaptive intervals, treating continuous functions analogously to discrete vectors in numerical environments. This system supports operations like differentiation and integration to machine precision, effectively extending classical polynomial interpolation to practical computing with smooth or piecewise smooth functions. Links to machine learning have surfaced in the past decade, particularly through the neural tangent kernel (NTK) framework, where training wide neural networks in the infinite-width limit equates to kernel regression in a reproducing kernel Hilbert space. This interprets network outputs as interpolants in a feature space that generalizes polynomial kernels, offering nonlinear extensions with provable generalization bounds in overparameterized regimes.
In contrast to equispaced polynomial interpolation, where Lebesgue constants grow exponentially with the degree and exacerbate the Runge phenomenon, alternatives such as spline interpolation have Lebesgue constants that grow only logarithmically, O(\log n), giving stable approximation for large data sets; RBF methods likewise avoid the global oscillations of high-degree polynomials, though their conditioning depends on the choice of basis function and the node distribution.
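The second (true) barycentric form described above takes only a few lines to implement. The sketch below is illustrative (NumPy assumed, names hypothetical): the weights are precomputed in O(n^2), each subsequent evaluation costs O(n), and a small guard handles evaluation exactly at a node.

    import numpy as np

    def barycentric_weights(nodes):
        """w_j = 1 / prod_{k != j} (x_j - x_k), computed once in O(n^2)."""
        nodes = np.asarray(nodes, float)
        diff = nodes[:, None] - nodes[None, :]
        np.fill_diagonal(diff, 1.0)                 # avoid dividing by zero on the diagonal
        return 1.0 / np.prod(diff, axis=1)

    def barycentric_eval(nodes, values, weights, x):
        """Evaluate the interpolant at a scalar x using the second barycentric form."""
        d = x - nodes
        hit = np.nonzero(d == 0.0)[0]
        if hit.size:                                # x coincides with a node
            return values[hit[0]]
        t = weights / d
        return np.dot(t, values) / np.sum(t)

    nodes = np.cos((2 * np.arange(11) + 1) * np.pi / 22)   # 11 Chebyshev nodes on [-1, 1]
    values = 1.0 / (1.0 + 25.0 * nodes**2)                 # samples of the Runge function
    w = barycentric_weights(nodes)
    print(barycentric_eval(nodes, values, w, 0.3))          # close to 1 / (1 + 25 * 0.09)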

    The Lebesgue constants for cardinal spline interpolation☆​​ A formula for ∥ n∥ is obtained, and with it we will show that ‖L n ‖= 2 φ log n+ 2 φ (2 log 4 φ +γ)+0 ...