
Hermite interpolation

Hermite interpolation is a method in numerical analysis for constructing a polynomial that matches both the values and specified derivatives of a given function at a set of distinct points, thereby providing a smoother approximation than standard interpolation techniques when derivative information is available. Named after the French mathematician Charles Hermite, who first posed and solved the problem in 1878, this approach generalizes classical Lagrange interpolation by incorporating osculatory conditions, where the interpolating polynomial and its derivatives up to certain orders agree with those of the original function at the interpolation nodes. For n+1 distinct points with values and first derivatives specified, the unique interpolating polynomial has degree at most 2n+1, satisfying 2n+2 conditions in total. This method is particularly valuable in applications requiring high accuracy and continuity, such as cubic Hermite splines in computer graphics and computer-aided design for smooth curves, numerical integration schemes like Hermite's rule, and simulations of wave propagation in scientific computing. The error term for Hermite interpolation can be expressed using the next higher derivative of the function, analogous to the Lagrange error form, enabling precise bounds on approximation quality.

Problem Formulation

Statement of the Problem

Hermite interpolation is a method in numerical analysis for constructing a polynomial that matches both the values and specified derivatives of a given function at a set of distinct points. Given a smooth function f and n distinct interpolation points x_1, x_2, \dots, x_n \in \mathbb{R}, along with non-negative integers k_1, k_2, \dots, k_n specifying the orders of derivatives to match at each point, the goal is to find a polynomial p of minimal degree such that p^{(j)}(x_i) = f^{(j)}(x_i), \quad j = 0, 1, \dots, k_i, \quad i = 1, 2, \dots, n, where p^{(0)}(x_i) := p(x_i) = f(x_i) and p^{(j)} denotes the j-th derivative of p. This imposes a total of m + 1 := \sum_{i=1}^n (k_i + 1) interpolation conditions, so the polynomial p belongs to the space \Pi_m of polynomials of degree at most m = \sum_{i=1}^n (k_i + 1) - 1. The notation for these conditions highlights the Hermite data: at each x_i, the function value f(x_i) and the derivatives f'(x_i), \dots, f^{(k_i)}(x_i) are prescribed, allowing for varying derivative orders across points (e.g., k_i = 1 for first derivatives only, or higher for greater smoothness). This setup generalizes the classical Lagrange interpolation problem, which corresponds to the special case where all k_i = 0, matching only function values without derivative data. By incorporating derivative information, Hermite interpolation produces smoother approximations, particularly useful for applications requiring continuity in both the interpolant and its derivatives, such as in spline construction or computer-aided design. In essence, the Hermite interpolant at each point x_i agrees with the local Taylor polynomial of f centered at x_i up to order k_i, but globally combines these local behaviors into a single polynomial of controlled degree.

Uniqueness Theorem

The uniqueness of the Hermite interpolating polynomial is a fundamental result that guarantees the existence of a single polynomial satisfying the specified interpolation conditions. Consider distinct points x_1, \dots, x_s \in \mathbb{R} and nonnegative integers k_1, \dots, k_s such that m = \sum_{i=1}^s (k_i + 1) - 1. For a given smooth function f, there exists a unique polynomial p \in \Pi_m (the space of polynomials of degree at most m) satisfying p^{(j)}(x_i) = f^{(j)}(x_i), \quad 0 \leq j \leq k_i, \quad 1 \leq i \leq s. This follows from a dimension-counting argument combined with the linear independence of the associated evaluation functionals. The space \Pi_m has dimension m+1. The conditions impose exactly m+1 linear constraints on p, given by the total number of specified values and derivatives: \sum_{i=1}^s (k_i + 1) = m + 1. These constraints define a linear map from \Pi_m to \mathbb{R}^{m+1} via the functionals \{\delta_{x_i}^{(j)} \mid 0 \leq j \leq k_i, \, 1 \leq i \leq s \}, where \delta_x^{(j)}(p) = p^{(j)}(x). Since the domain and codomain have the same finite dimension, it suffices to show that this map is injective (or surjective) to establish bijectivity and thus existence and uniqueness. To prove injectivity, suppose q \in \Pi_m satisfies the homogeneous conditions q^{(j)}(x_i) = 0 for all specified i, j. Then q has zeros of multiplicity at least k_i + 1 at each x_i, so q(x) = \omega(x) r(x), where \omega(x) = \prod_{i=1}^s (x - x_i)^{k_i + 1} has degree m + 1 and r is a polynomial. But \deg q \leq m < m + 1, which forces r \equiv 0 and hence q \equiv 0. Alternatively, surjectivity follows from the linear independence of the functionals \{\delta_{x_i}^{(j)}\} on \Pi_m: if a linear combination \sum c_{i,j} \delta_{x_i}^{(j)} vanishes on all of \Pi_m, then all coefficients c_{i,j} = 0, which holds because the points x_i are distinct and derivative evaluations of different orders at a point are independent. From a linear algebra perspective, represent p(x) = \sum_{\ell=0}^m a_\ell x^\ell. The interpolation conditions yield the linear system V \mathbf{a} = \mathbf{b}, where \mathbf{a} = (a_0, \dots, a_m)^T, \mathbf{b} collects the values f^{(j)}(x_i), and V is the (m+1) \times (m+1) confluent Vandermonde matrix with entries involving powers and derivatives of the monomials evaluated at the x_i. For distinct x_i, this matrix is nonsingular, confirming a unique solution \mathbf{a}.
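The confluent Vandermonde system can be made concrete with a short sketch; the following Python snippet (function names are ours, for illustration only) assembles the matrix for values and first derivatives at two nodes and solves for the monomial coefficients:

```python
import numpy as np

def falling_factorial(l, j):
    """l * (l-1) * ... * (l-j+1): the coefficient produced by d^j/dx^j x^l."""
    out = 1.0
    for t in range(j):
        out *= l - t
    return out

def confluent_vandermonde(points, orders):
    """Build the confluent Vandermonde matrix for Hermite conditions.

    points: distinct nodes x_i; orders[i] = highest derivative k_i matched at x_i.
    The row for the j-th derivative at x_i holds d^j/dx^j of 1, x, ..., x^m at x_i.
    """
    m1 = sum(k + 1 for k in orders)       # total number of conditions, m + 1
    V = np.zeros((m1, m1))
    row = 0
    for xi, ki in zip(points, orders):
        for j in range(ki + 1):
            for l in range(j, m1):        # d^j/dx^j x^l = 0 for l < j
                V[row, l] = falling_factorial(l, j) * xi ** (l - j)
            row += 1
    return V

# Hermite data f(0)=1, f'(0)=0, f(1)=2, f'(1)=1: solving V a = b gives the
# monomial coefficients of p(x) = 1 + 2x^2 - x^3.
V = confluent_vandermonde([0.0, 1.0], [1, 1])
b = np.array([1.0, 0.0, 2.0, 1.0])
a = np.linalg.solve(V, b)
```

For well-separated nodes and small m this direct solve is fine; for many nodes or high multiplicities the matrix becomes ill-conditioned, which is why the Newton divided difference construction below is usually preferred in practice.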

Construction Methods

Lagrange Basis Method

The Lagrange basis method constructs the Hermite interpolating polynomial using a collection of basis polynomials l_{i,j}(x) tailored to satisfy both function values and specified derivatives at the interpolation points x_1, \dots, x_n, where up to the k_i-th derivative is given at each x_i. These basis polynomials generalize the classical Lagrange basis by enforcing zero conditions with appropriate multiplicities at all points except x_i, while at x_i, the j-th basis polynomial satisfies l_{i,j}^{(s)}(x_i) = j! \, \delta_{s j} for s = 0, \dots, k_i, ensuring the correct scaling for the coefficients involving derivatives. The Hermite interpolant p(x) of degree at most \sum_{i=1}^n (k_i + 1) - 1 is expressed explicitly as p(x) = \sum_{i=1}^n \left[ f(x_i) \, l_{i,0}(x) + \sum_{j=1}^{k_i} \frac{f^{(j)}(x_i)}{j!} \, l_{i,j}(x) \right]. This form leverages linearity: each term f(x_i) \, l_{i,0}(x) contributes the value at x_i while vanishing together with its derivatives at the other points, and each derivative term \frac{f^{(j)}(x_i)}{j!} \, l_{i,j}(x) contributes the j-th derivative at x_i without affecting lower or higher specified derivatives there or any conditions elsewhere. To build the basis, first define the auxiliary function \omega_i(x) = \prod_{\substack{m=1 \\ m \neq i}}^n \frac{(x - x_m)^{k_m + 1}}{(x_i - x_m)^{k_m + 1}}, which satisfies \omega_i(x_i) = 1 and \omega_i^{(s)}(x_m) = 0 for s = 0, \dots, k_m and all m \neq i. This \omega_i(x) encodes the required zero multiplicities at the other interpolation points. The basis polynomials are then constructed starting from the highest derivative order at each x_i and proceeding recursively downward. For the highest derivative, l_{i,k_i}(x) = (x - x_i)^{k_i} \, \omega_i(x), which yields l_{i,k_i}^{(s)}(x_i) = 0 for s < k_i and l_{i,k_i}^{(k_i)}(x_i) = k_i!, as required. For lower orders j = 0, \dots, k_i - 1, l_{i,j}(x) = (x - x_i)^j \, \omega_i(x) - \sum_{s = j+1}^{k_i} \frac{\omega_i^{(s - j)}(x_i)}{(s - j)!} \, l_{i,s}(x), where the coefficient \omega_i^{(s-j)}(x_i)/(s-j)! follows from the Leibniz expansion of (x - x_i)^j \omega_i(x) and the normalization l_{i,s}^{(s)}(x_i) = s!. This recursion subtracts multiples of the higher-order basis polynomials to enforce l_{i,j}^{(s)}(x_i) = 0 for s \neq j and l_{i,j}^{(j)}(x_i) = j!, while preserving the zero conditions at other points via the factor \omega_i(x). Unrolling the recursion yields an explicit sum form l_{i,j}(x) = \omega_i(x) \sum_{s=0}^{k_i - j} c_{j,s} (x - x_i)^s, where the coefficients c_{j,s} depend on derivatives of \omega_i evaluated at x_i. The resulting polynomials \{ l_{i,j}(x) \} form a basis for the space of polynomials of degree at most \sum (k_i + 1) - 1, each satisfying its isolated interpolation condition at x_i and vanishing (up to the specified multiplicities) at all other x_m. Since the basis spans the full solution space and matches all \sum (k_i + 1) conditions, this construction is consistent with the uniqueness theorem. The method, while explicit, can be computationally intensive for high multiplicities due to the need to compute derivatives of \omega_i(x), but it provides a direct product-based formula analogous to the classical Lagrange approach.
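A symbolic sketch of this construction with sympy follows; the downward correction step uses the coefficient \omega_i^{(s-j)}(x_i)/(s-j)! from our derivation of the recursion's normalization, and the function names are illustrative:

```python
import sympy as sp

x = sp.symbols('x')

def hermite_lagrange_basis(points, orders):
    """Generalized Lagrange (Hermite) basis l[i][j] with l[i][j]^{(j)}(x_i) = j!.

    points: distinct nodes x_i; orders[i] = highest matched derivative k_i.
    Built from the top order downward by subtracting higher-order corrections.
    """
    basis = []
    for i, (xi, ki) in enumerate(zip(points, orders)):
        # omega_i equals 1 at x_i and vanishes to order k_m at every other node
        omega = sp.prod(
            ((x - xm) / (xi - xm)) ** (km + 1)
            for m, (xm, km) in enumerate(zip(points, orders)) if m != i
        )
        li = [sp.Integer(0)] * (ki + 1)
        li[ki] = sp.expand((x - xi) ** ki * omega)
        for j in range(ki - 1, -1, -1):
            correction = sum(
                sp.diff(omega, x, s - j).subs(x, xi) / sp.factorial(s - j) * li[s]
                for s in range(j + 1, ki + 1)
            )
            li[j] = sp.expand((x - xi) ** j * omega - correction)
        basis.append(li)
    return basis

def hermite_interpolant(points, orders, data):
    """data[i][j] = f^{(j)}(x_i); returns the Hermite interpolating polynomial."""
    basis = hermite_lagrange_basis(points, orders)
    return sp.expand(sum(
        data[i][j] / sp.factorial(j) * basis[i][j]
        for i in range(len(points)) for j in range(orders[i] + 1)
    ))
```

For example, hermite_interpolant([0, 1], [1, 1], [[1, 0], [2, 1]]) reproduces the cubic -x^3 + 2x^2 + 1 discussed in the examples below.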

Newton Divided Difference Method

The Newton divided difference method provides an efficient and numerically stable approach to constructing the Hermite interpolating polynomial by extending the classical Newton form to handle derivative conditions through confluent divided differences. In this framework, the interpolation points x_i are treated with multiplicity m_i = k_i + 1, where k_i is the highest derivative order specified at x_i, leading to a total of N+1 = \sum m_i conditions for a polynomial of degree at most N. The method leverages a recursive computation of divided differences, where repeated evaluations at the same point incorporate higher-order derivative information, ensuring the interpolant satisfies both function values and derivatives at the nodes. For confluent divided differences at a repeated point x_i, the k-th order divided difference is defined as f[x_i, x_i, \dots, x_i] (k+1 times) = \frac{f^{(k)}(x_i)}{k!}, which arises as the limit of the standard divided difference formula when distinct points converge to x_i. This adaptation allows the divided difference table to be built by listing each x_i repeated m_i times in the sequence of nodes z_0, z_1, \dots, z_N, where the z_j follow the order of the points with their multiplicities. The zeroth-order differences are set to f[z_j] = f(z_j) for every entry, while higher-order confluent differences are directly assigned using the derivative formula when the endpoints z_j and z_{j+\ell} coincide; otherwise, the standard recursive formula f[z_j, \dots, z_{j+\ell}] = \frac{f[z_{j+1}, \dots, z_{j+\ell}] - f[z_j, \dots, z_{j+\ell-1}]}{z_{j+\ell} - z_j} is applied, with the indeterminate form arising from vanishing denominators resolved by the derivative definitions. The divided difference table is constructed iteratively, starting from the zeroth column with function values, and filling subsequent columns level by level; the Newton coefficients appear as the leading entries of the successive columns. For instance, in the case of first derivatives at two points (m_0 = m_1 = 2), the node sequence is z_0 = z_1 = x_0, z_2 = z_3 = x_1; the table begins with f[z_0] = f[z_1] = f(x_0) and f[z_2] = f[z_3] = f(x_1), the first-order differences include the confluent entries f[z_0, z_1] = f'(x_0) and f[z_2, z_3] = f'(x_1), and the mixed differences are computed recursively. This process requires O(N^2) operations in total (linear work per added condition), promoting stability over direct methods like Lagrange basis products, especially for higher multiplicities. The resulting Hermite interpolant in Newton form is expressed as p(x) = \sum_{k=0}^{N} a_k \prod_{j=0}^{k-1} (x - z_j), where the coefficients a_k = f[z_0, z_1, \dots, z_k] are the leading entries of the table's columns, and the products incorporate the repeated nodes to account for multiplicities, ensuring the polynomial matches the specified derivatives. To compute the coefficients algorithmically, initialize the table with the repeated nodes and data: set the zeroth column to the function values, then iteratively compute higher-order differences using the recursive formula, substituting derivative-based confluent values whenever the endpoints of a difference coincide (e.g., for the second order at a double point, f[z_i, z_i, z_{i+1}] = \frac{f[z_i, z_{i+1}] - f'(z_i)}{z_{i+1} - z_i} if z_{i+1} \neq z_i, but directly f[z_i, z_i, z_i] = f''(z_i)/2 for a triple). This yields the coefficients a_k efficiently, with the nested product form allowing Horner-style evaluation and incremental addition of terms for adaptive implementations. When all k_i = 0, the method reduces to the standard Newton divided difference interpolation for distinct points.
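The table construction and nested evaluation described above can be sketched as follows (helper names are illustrative, not from a standard library):

```python
import math
import numpy as np

def hermite_newton_coeffs(points, data):
    """Confluent divided-difference coefficients for the Newton form.

    points: distinct nodes x_i; data[i] = [f(x_i), f'(x_i), ..., f^{(k_i)}(x_i)].
    Returns (z, a) with z the repeated node sequence and a_k = f[z_0, ..., z_k].
    """
    z, idx = [], []                      # idx[j] records which node z[j] copies
    for i, derivs in enumerate(data):
        for _ in derivs:
            z.append(points[i])
            idx.append(i)
    N = len(z)
    table = np.zeros((N, N))             # table[j, order] = f[z_j, ..., z_{j+order}]
    table[:, 0] = [data[i][0] for i in idx]
    for order in range(1, N):
        for j in range(N - order):
            if z[j + order] == z[j]:
                # confluent block: f[x, ..., x] (order+1 copies) = f^(order)(x)/order!
                table[j, order] = data[idx[j]][order] / math.factorial(order)
            else:
                table[j, order] = (table[j + 1, order - 1] - table[j, order - 1]) \
                                  / (z[j + order] - z[j])
    return np.array(z), table[0]         # top row holds the Newton coefficients

def newton_eval(z, a, xq):
    """Evaluate the Newton form by Horner-like nesting."""
    p = a[-1]
    for k in range(len(a) - 2, -1, -1):
        p = p * (xq - z[k]) + a[k]
    return p
```

With the data of the cubic example below (f(0)=1, f'(0)=0, f(1)=2, f'(1)=1), this produces the coefficients 1, 0, 1, -1 and the midpoint value 1.375.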

Chinese Remainder Theorem Method

The Chinese remainder theorem provides an algebraic framework for constructing the Hermite interpolating polynomial by decomposing the problem into local solutions modulo pairwise coprime ideals in the polynomial ring R = K[x], where K is the base field of real or complex numbers. The interpolation conditions require the polynomial p(x) to match the function f(x) and its derivatives up to order k_i at distinct points x_1, \dots, x_n, which translates to the congruence p(x) \equiv T_i(x) \pmod{(x - x_i)^{k_i + 1}} for each i = 1, \dots, n, where T_i(x) is the Taylor polynomial of f at x_i of degree at most k_i. The relevant modulus is I(x) = \prod_{i=1}^n (x - x_i)^{k_i + 1}, and since the factors (x - x_i)^{k_i + 1} are pairwise coprime (they share no common roots), the Chinese remainder theorem implies that R/(I) \cong \prod_{i=1}^n R / ((x - x_i)^{k_i + 1}). This ensures a unique solution p(x) of degree less than \deg I = \sum_{i=1}^n (k_i + 1) satisfying all local conditions, in agreement with the uniqueness theorem for Hermite interpolation. To construct p(x), first compute the local Taylor polynomials q_i(x) = T_i(x) for each i, each of degree at most k_i, so that q_i^{(j)}(x_i) = f^{(j)}(x_i) for j = 0, \dots, k_i. These trivially satisfy q_i(x) \equiv T_i(x) \pmod{(x - x_i)^{k_i + 1}}. Next, for each i, define w_i(x) = \prod_{m \neq i} (x - x_m)^{k_m + 1}, which vanishes to the required multiplicity at all points except x_i, ensuring w_i(x) \equiv 0 \pmod{(x - x_m)^{k_m + 1}} for every m \neq i. Compute the modular inverse s_i(x) of w_i(x) modulo (x - x_i)^{k_i + 1}, a polynomial of degree at most k_i such that w_i(x) s_i(x) \equiv 1 \pmod{(x - x_i)^{k_i + 1}}; this inverse exists because w_i(x_i) \neq 0. The global interpolant is then formed by the linear combination p(x) = \sum_{i=1}^n q_i(x) \, w_i(x) \, s_i(x), reduced modulo I(x) to obtain the representative of minimal degree. Each term q_i(x) w_i(x) s_i(x) satisfies the local condition at x_i (since q_i w_i s_i \equiv T_i \cdot 1 \equiv T_i \pmod{(x - x_i)^{k_i + 1}}) and vanishes at all other points to the required orders (since w_i \equiv 0 there). The inverses s_i(x) can be found efficiently using the extended Euclidean algorithm, or by power series inversion truncated modulo (x - x_i)^{k_i + 1}. This method excels in theoretical contexts, as the ring decomposition directly proves existence and uniqueness without explicit basis constructions. It is also advantageous computationally when interpolation points are clustered (higher k_i at fewer distinct x_i), since local computations modulo high powers can leverage efficient series expansions or modular lifting techniques, reducing overall complexity compared to basis methods for ill-conditioned cases.
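The CRT construction can be sketched with sympy, whose invert and rem routines supply the modular inverse and the final reduction modulo I (a sketch under these conventions, not an optimized implementation):

```python
import sympy as sp

x = sp.symbols('x')

def hermite_via_crt(points, data):
    """Hermite interpolant from the congruences p ≡ T_i mod (x - x_i)^{k_i+1}.

    points: distinct nodes x_i; data[i] = [f(x_i), f'(x_i), ..., f^{(k_i)}(x_i)].
    """
    mult = [len(d) for d in data]                  # multiplicities k_i + 1
    I = sp.expand(sp.prod((x - xi) ** m for xi, m in zip(points, mult)))
    p = sp.Integer(0)
    for i, (xi, derivs) in enumerate(zip(points, data)):
        # local Taylor polynomial T_i of f centered at x_i
        T = sum(d * (x - xi) ** j / sp.factorial(j) for j, d in enumerate(derivs))
        # w_i vanishes to the required order at every node except x_i
        w = sp.prod((x - xm) ** mult[m] for m, xm in enumerate(points) if m != i)
        # modular inverse: w * s ≡ 1 mod (x - x_i)^{k_i+1}; exists since w(x_i) != 0
        s = sp.invert(w, (x - xi) ** mult[i], x)
        p += T * w * s
    # reduce modulo I to obtain the unique representative of degree < deg I
    return sp.expand(sp.rem(sp.expand(p), I, x))
```

For the cubic data f(0)=1, f'(0)=0, f(1)=2, f'(1)=1 this yields -x^3 + 2x^2 + 1, agreeing with the other construction methods.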

Special Cases

Case with First Derivatives Only

In the case where only function values and first derivatives are specified at each of n distinct interpolation points x_0 < x_1 < \dots < x_{n-1}, the Hermite interpolation problem seeks a unique polynomial p(x) of degree at most 2n-1 satisfying p(x_i) = f(x_i) and p'(x_i) = f'(x_i) for i = 0, \dots, n-1. This setup provides 2n conditions, ensuring existence and uniqueness of the interpolant within the space of polynomials of degree less than 2n. Such interpolation is foundational for constructing smooth approximations, particularly in spline-based methods where continuity of the function and its first derivative is required. The Newton divided difference form offers a practical construction for this case, building on the general divided difference method by incorporating repeated points to account for derivatives. Specifically, the divided difference table alternates function values and derivatives: the zeroth-order differences are f[x_i] = f(x_i), the first-order confluent differences at each repeated point are f[x_i, x_i] = f'(x_i), and higher-order differences are computed via the standard recursive formula f[x_j, \dots, x_{j+k}] = \frac{f[x_{j+1}, \dots, x_{j+k}] - f[x_j, \dots, x_{j+k-1}]}{x_{j+k} - x_j} across the points, with confluent values set directly for repeated arguments. This yields the coefficients for the Newton form p(x) = a_0 + a_1 (x - z_0) + \dots + a_{2n-1} \prod_{j=0}^{2n-2} (x - z_j), where z_j denotes the sequence of nodes with each x_i repeated twice (i.e., z = [x_0, x_0, x_1, x_1, \dots, x_{n-1}, x_{n-1}]) and the a_k are the divided differences f[z_0, \dots, z_k]. For the common subcase of n=2 points x_0 < x_1, the interpolant is a cubic polynomial of degree at most 3, often expressed in terms of four explicit basis functions that isolate the contributions of each function value and derivative. Let h = x_1 - x_0 and t = \frac{x - x_0}{h}. The basis functions are: \begin{align*} h_{0,0}(t) &= (1 + 2t)(1 - t)^2, \\ h_{1,0}(t) &= t^2 (3 - 2t), \\ h_{0,1}(t) &= t (1 - t)^2, \\ h_{1,1}(t) &= t^2 (t - 1). \end{align*} The interpolating polynomial is then p(x) = h_{0,0}(t) f(x_0) + h_{1,0}(t) f(x_1) + h \, h_{0,1}(t) f'(x_0) + h \, h_{1,1}(t) f'(x_1), where the factor h scales the derivative basis to match units. The value bases satisfy h_{i,0}(t_i) = 1 and h_{i,0}(t_j) = 0 for i \neq j (with t_0 = 0, t_1 = 1), together with h_{i,0}'(t_k) = 0 for k = 0, 1; the derivative bases h_{i,1} vanish at both endpoints, have zero derivative at the opposite endpoint, and have unit derivative at their respective endpoint. When applied piecewise over consecutive intervals with consistent derivative values at shared knots, this construction yields cubic Hermite splines that achieve C^1 continuity, meaning the function and its first derivative are continuous across the interior points while higher derivatives may be discontinuous. This property makes the method ideal for applications requiring smooth curves, such as computer graphics and numerical simulation, without the added complexity of higher derivatives.
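A direct evaluation routine for the two-point cubic case, following the basis functions above (the function name is illustrative):

```python
def cubic_hermite(x0, x1, f0, df0, f1, df1, xq):
    """Evaluate the two-point cubic Hermite interpolant at xq.

    Uses the standard basis h00, h10, h01, h11 in the local variable
    t = (xq - x0) / h; the derivative terms carry the scale factor h.
    """
    h = x1 - x0
    t = (xq - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t ** 2 * (3 - 2 * t)
    h01 = t * (1 - t) ** 2
    h11 = t ** 2 * (t - 1)
    return h00 * f0 + h10 * f1 + h * (h01 * df0 + h11 * df1)

# f(0)=1, f'(0)=0, f(1)=2, f'(1)=1 at the midpoint of [0, 1]
value = cubic_hermite(0.0, 1.0, 1.0, 0.0, 2.0, 1.0, 0.5)
```

Applied interval by interval with shared endpoint data, the same routine evaluates a C^1 cubic Hermite spline.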

Cases with Higher Derivatives

In Hermite interpolation, cases involving higher derivatives extend the conditions beyond function values and first derivatives by specifying derivatives up to order k_i \geq 2 at interpolation points x_i. This requires treating each point with multiplicity \mu_i = k_i + 1 in the divided difference table, where the divided differences for coincident arguments incorporate the higher-order derivatives via the relation f[x_i, x_i, \dots, x_i] (with m+1 repetitions) equals f^{(m)}(x_i)/m!. For instance, when k_i = 2, the triple divided difference satisfies f[x_i, x_i, x_i] = f''(x_i)/2!, and higher multiplicities follow analogously from Taylor expansion limits of the divided difference definition. The resulting interpolating polynomial has degree at most \sum_i (k_i + 1) - 1, reflecting the total number of interpolation conditions minus one. For a global polynomial, this ensures exact matching of the specified derivatives, but in piecewise constructions, such as higher-order splines, the approach allows for C^{\max k_i} continuity across knots, provided the pieces are joined with compatible higher derivatives. This elevated smoothness is particularly valuable in applications requiring precise curvature or acceleration modeling, like trajectory planning. However, incorporating higher k_i introduces challenges, including increased condition numbers due to the sensitivity of higher derivatives to perturbations in the data or interpolation points, which can amplify numerical instability in computations. Additionally, the higher polynomial degree exacerbates oscillatory behavior, akin to the Runge phenomenon, potentially leading to poor approximation away from the data points. Such higher multiplicities prove useful near singularities, where they enable correction terms to mitigate Gibbs-like oscillations and preserve jump conditions in the function or its derivatives, achieving improved convergence rates like O(h^4) for adapted interpolants.
The Newton divided difference form adapts naturally to these cases by constructing the sequence of nodes with repetitions according to each \mu_i, yielding basis polynomials with repeated factors (x - x_i)^{k_i + 1} in the higher-order products \prod_{j=0}^{k-1} (x - z_j), where the z_j include \mu_i copies of x_i. This structure facilitates efficient evaluation and extension from the first-derivative case, though care must be taken in the divided difference table to handle the confluent entries accurately.
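The confluent limit underlying these table entries can be checked numerically: as three distinct arguments collapse onto a single point, the standard divided difference approaches f''/2!. A small sketch with f = exp:

```python
import math

def divided_difference(xs, f):
    """Standard recursive divided difference over distinct points xs."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(xs[1:], f)
            - divided_difference(xs[:-1], f)) / (xs[-1] - xs[0])

# Collapsing three points onto x = 0.5: f[x-h, x, x+h] -> f''(x)/2! as h -> 0.
f = math.exp
x0, h = 0.5, 1e-4
approx = divided_difference([x0 - h, x0, x0 + h], f)
exact = math.exp(x0) / 2
```

For this symmetric choice the divided difference equals the familiar central second-difference quotient, so the agreement with f''(x_0)/2 is O(h^2).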

Illustrative Examples

Cubic Hermite Interpolation Example

Consider a simple example of cubic Hermite interpolation using two interpolation points x_0 = 0 and x_1 = 1, with specified data f(0) = 1, f'(0) = 0, f(1) = 2, and f'(1) = 1. This setup requires a cubic polynomial p(x) of degree at most 3 that satisfies these four conditions. The polynomial can be constructed using the standard cubic Hermite basis functions, which are derived for the interval [x_0, x_1] with h = x_1 - x_0 = 1. Let t = (x - x_0)/h = x. The basis functions are: h_{0,0}(t) = 2t^3 - 3t^2 + 1, \quad h_{0,1}(t) = t^3 - 2t^2 + t, h_{1,0}(t) = -2t^3 + 3t^2, \quad h_{1,1}(t) = t^3 - t^2. These satisfy h_{i,0}(t_k) = \delta_{ik} for the function values and the corresponding derivative conditions, scaled by h, for the bases h_{i,1}. The interpolant is then p(x) = h_{0,0}(x) \cdot 1 + h_{0,1}(x) \cdot 0 + h_{1,0}(x) \cdot 2 + h_{1,1}(x) \cdot 1. Substituting the basis functions yields p(x) = (2x^3 - 3x^2 + 1) + 2(-2x^3 + 3x^2) + (x^3 - x^2) = -x^3 + 2x^2 + 1. Verification confirms p(0) = 1, p'(0) = 0, p(1) = 2, and p'(1) = 1, where p'(x) = -3x^2 + 4x. Alternatively, the same polynomial can be obtained via the Newton divided difference form, using confluent divided differences to incorporate the derivatives. For the repeated nodes z_0 = z_1 = 0 and z_2 = z_3 = 1, the divided-difference table is constructed as follows:
Order 0: f[z_0] = f[z_1] = 1, f[z_2] = f[z_3] = 2
Order 1: f[z_0, z_1] = f'(0) = 0, f[z_1, z_2] = (2 - 1)/1 = 1, f[z_2, z_3] = f'(1) = 1
Order 2: f[z_0, z_1, z_2] = (1 - 0)/1 = 1, f[z_1, z_2, z_3] = (1 - 1)/1 = 0
Order 3: f[z_0, z_1, z_2, z_3] = (0 - 1)/1 = -1
The first-order confluent differences use the derivative values directly, and higher orders are computed recursively. The Newton form is p(x) = f[z_0] + f[z_0, z_1](x - z_0) + f[z_0, z_1, z_2](x - z_0)(x - z_1) + f[z_0, z_1, z_2, z_3](x - z_0)(x - z_1)(x - z_2). With z_0 = z_1 = 0 and z_2 = 1, this simplifies to p(x) = 1 + 0 \cdot x + 1 \cdot x \cdot x + (-1) \cdot x \cdot x \cdot (x - 1) = 1 + x^2 - x^2(x - 1) = -x^3 + 2x^2 + 1, matching the basis construction. To illustrate the interpolant, evaluate it at the midpoint x = 0.5: p(0.5) = -(0.125) + 2(0.25) + 1 = 1.375. This value lies between f(0) and f(1), consistent with the smooth transition enforced by the derivative conditions.
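The worked example can be double-checked symbolically; this brief sympy sketch confirms the four interpolation conditions and the midpoint value:

```python
import sympy as sp

x = sp.symbols('x')
p = -x**3 + 2*x**2 + 1            # interpolant from the worked example
dp = sp.diff(p, x)

checks = {
    "p(0)": p.subs(x, 0), "p'(0)": dp.subs(x, 0),
    "p(1)": p.subs(x, 1), "p'(1)": dp.subs(x, 1),
    "p(0.5)": p.subs(x, sp.Rational(1, 2)),   # = 11/8 = 1.375
}
```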

Quintic Hermite Interpolation Example

To illustrate quintic Hermite interpolation, consider the problem of finding a polynomial p(x) of degree at most 5 that matches the function values and derivatives up to second order at two points: x_0 = 0 and x_1 = 1, with data f(0) = 1, f'(0) = 0, f''(0) = 2, f(1) = 2, f'(1) = 1, and f''(1) = -1. This setup requires multiplicity 3 at each point in the divided difference table, leading to a unique quintic satisfying the six conditions for C^2 matching. The divided difference method extends naturally to this case by treating the points as repeated (confluent) and using derivative information to fill the table: specifically, f[x_i, x_i] = f'(x_i)/1!, f[x_i, x_i, x_i] = f''(x_i)/2!, and higher-order differences follow the recursion f[z_0, \dots, z_k] = \frac{f[z_1, \dots, z_k] - f[z_0, \dots, z_{k-1}]}{z_k - z_0}. The repeated nodes are z_0 = z_1 = z_2 = 0 and z_3 = z_4 = z_5 = 1. The divided difference coefficients for the Newton form are a_0 = f[0] = 1, a_1 = f[0,0] = 0, a_2 = f[0,0,0] = 1, a_3 = f[0,0,0,1] = 0, a_4 = f[0,0,0,1,1] = -1, and a_5 = f[0,0,0,1,1,1] = 3/2. The interpolating polynomial in Newton form is p(x) = 1 + 0 \cdot x + 1 \cdot x^2 + 0 \cdot x^3 + (-1) \cdot x^3 (x-1) + \frac{3}{2} \cdot x^3 (x-1)^2. Expanding yields the explicit form: p(x) = 1 + x^2 + \frac{5}{2} x^3 - 4 x^4 + \frac{3}{2} x^5. Verification confirms the interpolation conditions: p(0) = 1; p'(x) = 2x + \frac{15}{2} x^2 - 16 x^3 + \frac{15}{2} x^4, so p'(0) = 0; p''(x) = 2 + 15 x - 48 x^2 + 30 x^3, so p''(0) = 2; and at x = 1, p(1) = 2, p'(1) = 1, p''(1) = -1.
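A quick sympy check of the Newton-form expansion and the six conditions:

```python
import sympy as sp

x = sp.symbols('x')
# Newton form assembled from the divided-difference coefficients a_0..a_5
p = sp.expand(1 + x**2 - x**3 * (x - 1) + sp.Rational(3, 2) * x**3 * (x - 1)**2)
d1 = sp.diff(p, x)       # first derivative
d2 = sp.diff(p, x, 2)    # second derivative
```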

Error Analysis

Derivation of the Error Term

The error in Hermite interpolation arises from the difference between the function f and its interpolating polynomial p, where p matches f and its derivatives up to order k_i at each of n distinct points x_i, for i = 1, \dots, n. Let m = \sum_{i=1}^n (k_i + 1) denote the total number of conditions, so p has degree at most m-1. Assuming f is m times continuously differentiable on an interval containing x and all x_i, the error term is given by f(x) - p(x) = \frac{f^{(m)}(\xi)}{m!} \prod_{i=1}^n (x - x_i)^{k_i + 1}, for some \xi in the smallest interval containing x and the x_i. To derive this formula, fix x distinct from the nodes (at a node both sides vanish trivially) and consider the auxiliary function g(t) = f(t) - p(t) - \frac{f(x) - p(x)}{\omega(x)} \omega(t), where \omega(t) = \prod_{i=1}^n (t - x_i)^{k_i + 1}. By the interpolation conditions, f - p vanishes at each x_i to order at least k_i + 1, and so does \omega, so g also vanishes to order at least k_i + 1 at each x_i. Additionally, g(x) = 0 by construction. Thus g has at least m zeros counting multiplicity (from the points x_i) plus one more at x, for a total of m + 1 zeros counting multiplicity. By the generalized Rolle's theorem, the mth derivative g^{(m)} has at least one zero at some point \xi in the interval. Since p has degree at most m-1, its mth derivative is zero, and since \omega is monic of degree m, its mth derivative is the constant m!. Differentiating g therefore yields g^{(m)}(t) = f^{(m)}(t) - \frac{f(x) - p(x)}{\omega(x)} \cdot m!. Setting g^{(m)}(\xi) = 0 and solving for the error gives the formula above. This requires f \in C^m on the interval to ensure the derivatives exist and Rolle's theorem applies. In the special case of equal multiplicities, where k_i = k for all i (e.g., matching up to order k at each of n points, so m = n(k + 1)), the error simplifies to f(x) - p(x) = \frac{f^{(m)}(\xi)}{m!} \left[ \prod_{i=1}^n (x - x_i) \right]^{k + 1}. This form resembles the standard Lagrange interpolation error with the nodal polynomial raised to the power k + 1, highlighting the increased order of contact at the nodes. For the common subcase of k = 1 (function values and first derivatives at n + 1 points, indexed from 0 to n), it becomes f(x) - p(x) = \frac{f^{(2n+2)}(\xi)}{(2n+2)!} \prod_{i=0}^n (x - x_i)^2, with p of degree at most 2n + 1 and f \in C^{2n+2}.
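The error formula can be seen exactly in a case where f^{(m)} is constant: for f(x) = x^4 interpolated on [0, 1] with values and first derivatives at both endpoints (m = 4), solving the four conditions gives p(x) = 2x^3 - x^2, and the error equals x^2(x-1)^2 at every point with no unknown \xi, as a short sketch confirms:

```python
# Cubic Hermite interpolant of f(x) = x^4 on [0, 1], matching f and f' at both
# endpoints; the four conditions give p(x) = 2x^3 - x^2.
f = lambda t: t ** 4
p = lambda t: 2 * t ** 3 - t ** 2

# Since f''''(x) = 24 identically, the error formula is exact:
# f(x) - p(x) = (24 / 4!) * x^2 (x - 1)^2 = x^2 (x - 1)^2.
errors = [(t, f(t) - p(t), t ** 2 * (t - 1) ** 2)
          for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
```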

Bounds and Implications

The error in Hermite interpolation satisfies the bound |f(x) - p(x)| \leq \frac{\max_{\xi \in I} |f^{(m)}(\xi)|}{m!} \max_{x \in I} |\omega(x)|, where I is the interpolation interval, m = \sum (k_i + 1) is the total number of interpolation conditions, and \omega(x) = \prod_{i=1}^n (x - x_i)^{k_i + 1}. This estimate follows directly from the pointwise error formula, highlighting that the approximation quality depends on the smoothness of f (via the mth derivative) and the placement of the nodes x_i (via the growth of \omega(x)). To minimize the maximum of |\omega(x)| over I = [-1, 1], the nodes should be chosen as Chebyshev points; in the case of equal multiplicities this yields \max |\omega(x)| on the order of 2^{-m}, far smaller than for equispaced nodes. Increasing the multiplicities k_i at specific nodes enhances local accuracy near those x_i by enforcing higher derivative matches, but it amplifies global oscillations away from the nodes due to the higher powers in \omega(x), potentially worsening the uniform error. For large n (number of nodes) or high k_i, the underlying confluent Vandermonde system becomes severely ill-conditioned, leading to numerical instability in computing the interpolant coefficients, with condition numbers growing exponentially in m. Compared to Lagrange interpolation, Hermite interpolation offers superior local control through derivative constraints, enabling smoother approximations for functions with known tangential behavior, yet it inherits similar global challenges, such as the Runge phenomenon on equispaced nodes, where endpoint oscillations persist albeit diminished relative to pure value interpolation. The Lebesgue constant for Hermite interpolation, which bounds the operator norm and thus the sensitivity to data perturbations, grows only logarithmically in m for Chebyshev-type nodes, in contrast to exponential growth on equispaced grids. In practice, Hermite interpolation is most effective for sufficiently smooth functions f \in C^{m}[I], where the derivative bound remains controlled; very high-degree interpolants should be avoided to mitigate oscillation and ill-conditioning, with Chebyshev-distributed nodes recommended to optimize uniformity and computational robustness in applications like spline construction and computer-aided design.
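The effect of node placement on \max |\omega(x)| can be sampled numerically; this sketch (a grid-sampling estimate with an illustrative helper name) compares equispaced and Chebyshev nodes on [-1, 1] with multiplicity 2 at every node:

```python
import math

def omega_max(nodes, mult, samples=20001):
    """Grid-sample the maximum over [-1, 1] of |prod_i (x - x_i)^mult|."""
    best = 0.0
    for s in range(samples):
        xq = -1.0 + 2.0 * s / (samples - 1)
        w = 1.0
        for xi in nodes:
            w *= (xq - xi) ** mult
        best = max(best, abs(w))
    return best

n, mult = 10, 2     # ten nodes, values plus first derivatives at each
equi = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
cheb = [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]
ratio = omega_max(equi, mult) / omega_max(cheb, mult)
```

Already at ten nodes the equispaced worst case exceeds the Chebyshev one by over an order of magnitude, and the gap widens rapidly with n.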
