Trigonometric interpolation
Trigonometric interpolation is a numerical method for approximating a function by a trigonometric polynomial, a finite linear combination of sines and cosines, that exactly matches the function's values at a set of equally spaced points on an interval, often with periodic boundary conditions to suit periodic data. The approach is particularly effective for periodic functions, where it leverages the Discrete Fourier Transform (DFT) to compute coefficients efficiently: the data are decomposed into sinusoidal components via X_k = \frac{1}{N} \sum_{n=0}^{N-1} x_n e^{-i k 2\pi n / N} for k = 0, 1, \dots, N-1, and the interpolant is reconstructed as p(t) = \sum_{k=0}^{N-1} X_k e^{i k t} (or its real part for real-valued functions).[1]
Unlike polynomial interpolation, which can suffer from the Runge phenomenon—large oscillations near the endpoints for high-degree polynomials on non-periodic intervals—trigonometric interpolation maintains stability for equally spaced points due to its inherent periodicity and equivalence to polynomial interpolation in the complex plane via the substitution z = e^{i x}.[2] The Fast Fourier Transform (FFT) algorithm further enhances its computational efficiency, reducing complexity from O(N^2) to O(N \log N), making it ideal for large datasets in signal processing and Fourier analysis.[3]
Key applications include numerical simulations and optics, where it approximates parametric curves and periodic phenomena, such as wave propagation, by ensuring exact reproduction at sample points while converging rapidly for smooth functions.[1]
Introduction
Definition and motivation
Trigonometric interpolation is a technique in numerical analysis for approximating a given periodic function f(x) by determining a trigonometric polynomial of degree at most n that passes exactly through a set of 2n+1 or 2n specified points.[4] Such a polynomial takes the form
t_n(x) = \frac{a_0}{2} + \sum_{k=1}^n \left( a_k \cos(kx) + b_k \sin(kx) \right),
or equivalently in complex form,
t_n(x) = \sum_{k=-n}^n c_k e^{i k x},
where the coefficients a_k, b_k, or c_k are chosen to satisfy the interpolation conditions t_n(x_j) = f(x_j) at the points x_j.[2] This approach is particularly suited to functions defined on a periodic interval, such as [-\pi, \pi) or [0, 2\pi), where the basis functions \cos(kx) and \sin(kx) inherently possess the desired periodicity of period 2\pi.[5]
The primary motivation for trigonometric interpolation arises when dealing with periodic data, as standard polynomial interpolation on a finite interval often introduces artifacts like the Runge phenomenon, where the approximant oscillates wildly near the endpoints due to the non-periodic nature of polynomial basis functions.[2] In contrast, trigonometric interpolation employs a basis that matches the periodicity of the target function, thereby avoiding such boundary issues and ensuring the interpolant remains well-behaved over the entire real line by periodic extension.[4] This makes it especially valuable in applications involving signals, Fourier analysis, and periodic phenomena in physics and engineering, where preserving the function's cyclic structure is essential for accurate representation and avoiding artificial discontinuities.[6]
A simple illustration of trigonometric interpolation involves approximating the periodic function f(x) = \sin(x) at equally spaced points x_j = \frac{2\pi j}{m} for j = 0, 1, \dots, m-1 on [0, 2\pi), where m = 2n+1 with n \geq 1. Since \sin(x) is itself a trigonometric polynomial of degree 1, the interpolant t_n(x) recovers f(x) exactly for every n \geq 1, demonstrating the method's ability to faithfully reproduce smooth periodic functions without introducing extraneous oscillations.[2] For smooth data, this approach maintains periodicity while sidestepping the Gibbs phenomenon that can plague Fourier series approximations of discontinuous functions.[4]
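This recovery can be checked numerically; the following minimal sketch (Python with NumPy; variable names are illustrative) interpolates \sin(x) at m = 3 equispaced points via the DFT and confirms the interpolant matches \sin everywhere up to rounding:

import numpy as np

# Interpolate f(x) = sin(x) at m = 3 equispaced points; since sin has
# degree 1, the degree-1 trigonometric interpolant reproduces it exactly.
m = 3
x = 2 * np.pi * np.arange(m) / m
c = np.fft.fft(np.sin(x)) / m                  # c_0, c_1, c_2 (c_2 plays the role of c_{-1})
t = np.linspace(0, 2 * np.pi, 7)
p = (c[0] + c[1] * np.exp(1j * t) + c[2] * np.exp(-1j * t)).real
print(np.max(np.abs(p - np.sin(t))))           # ~1e-16, i.e. exact up to rounding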
Historical development
Trigonometric interpolation traces its origins to the early 19th century, closely intertwined with the development of Fourier series for representing periodic functions. In 1822, Joseph Fourier introduced the concept of expanding periodic functions as infinite sums of sines and cosines in his work Théorie analytique de la chaleur, laying the groundwork for trigonometric approximations that interpolate function values at discrete points.[7] The partial sums of these series naturally provide trigonometric polynomials that interpolate periodic data at equidistant points, marking the initial connection between Fourier analysis and interpolation techniques.[8]
By the early 19th century, the general problem of trigonometric interpolation had been formalized, with Friedrich Bessel providing the first complete account in 1815, focusing on interpolating periodic data using finite trigonometric sums for astronomical applications.[9] This built on earlier contributions from Leonhard Euler and Joseph-Louis Lagrange in the 18th century, who developed finite trigonometric series for planetary orbit interpolation, though explicit formulas for equidistant cases emerged more systematically in interpolation theory texts around the 1910s, as numerical analysis advanced.[10] In the early 20th century, the field matured alongside broader numerical methods, with key practical algorithms emerging in the 1930s; notably, Cornelius Lanczos's 1938 paper outlined efficient trigonometric interpolation for empirical and analytical functions, emphasizing convergence and applications in physics like X-ray analysis.[11]
A major milestone occurred in 1965 with the publication of the Cooley-Tukey algorithm, which enabled fast computation of the discrete Fourier transform (DFT), directly facilitating efficient evaluation of trigonometric interpolants at equidistant nodes and integrating interpolation with digital signal processing. This built on precursors like Carl Friedrich Gauss's 1805 unpublished work on trigonometric interpolation for least-squares orbit fitting, later recognized as an early fast Fourier transform variant.[8] Post-2000 advances have extended trigonometric interpolation to non-equidistant points, leveraging algorithms like the non-equispaced fast Fourier transform (NFFT) for improved flexibility in applications such as image processing and irregular data fitting.
Mathematical Foundations
Trigonometric polynomials
A trigonometric polynomial of degree at most n is a function expressible as
T(x) = a_0 + \sum_{k=1}^n \left( a_k \cos(kx) + b_k \sin(kx) \right),
where the coefficients a_k, b_k are real or complex numbers.[12] This representation spans the finite-dimensional vector space generated by the basis functions \{ 1, \cos x, \sin x, \dots, \cos(nx), \sin(nx) \}.[12] Equivalently, in complex form, it can be written as
T(x) = \sum_{k=-n}^n c_k e^{i k x},
where the coefficients c_k satisfy c_{-k} = \overline{c_k} for real-valued T.[12]
Trigonometric polynomials are inherently periodic with period 2\pi, as each basis function satisfies f(x + 2\pi) = f(x).[12] The space of all such polynomials of degree at most n forms a vector space of dimension 2n + 1.[12] The basis functions are orthogonal with respect to the L^2 inner product over [-\pi, \pi], given by
\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) \overline{g(x)} \, dx.
Specifically,
\int_{-\pi}^{\pi} \cos(mx) \cos(nx) \, dx = \begin{cases}
0 & \text{if } m \neq n, \\
\pi & \text{if } m = n \geq 1, \\
2\pi & \text{if } m = n = 0,
\end{cases}
\int_{-\pi}^{\pi} \sin(mx) \sin(nx) \, dx = \begin{cases}
0 & \text{if } m \neq n, \\
\pi & \text{if } m = n \geq 1,
\end{cases}
and all cross integrals between distinct basis functions vanish.[13]
For any set of 2n+1 distinct points x_j in [0, 2\pi) and arbitrary values y_j, there exists a unique trigonometric polynomial T of degree at most n such that T(x_j) = y_j for j = 0, \dots, 2n.[12] This uniqueness follows from the dimension of the space matching the number of interpolation conditions and the linear independence of the basis evaluated at distinct points.[12]
As an illustrative example, consider interpolating the values 1 at x = 0 and -1 at x = \pi using a trigonometric polynomial of degree at most 1. One such polynomial is T(x) = \cos x, which satisfies T(0) = 1 and T(\pi) = -1.[12] Note that uniqueness requires three points for degree 1, so multiple polynomials of degree at most 1 can fit two points.
Trigonometric interpolation employs a basis consisting of trigonometric functions, such as sines and cosines, or equivalently complex exponentials e^{ikx} for integer k, in contrast to classical polynomial interpolation, which uses algebraic polynomials formed by powers of the variable x.[2] This fundamental difference in basis makes trigonometric interpolation particularly suited for approximating periodic functions on intervals like [0, 2\pi], where the inherent periodicity of the basis avoids artificial discontinuities at the boundaries, whereas polynomial interpolation is more general-purpose but can introduce such issues for periodic data.[3]
In terms of applicability, trigonometric interpolation at equidistant nodes excels for periodic functions by maintaining stability and avoiding the severe oscillations known as the Runge phenomenon that plague high-degree polynomial interpolation on equispaced points in non-periodic settings.[14] The Runge phenomenon arises in polynomial interpolation due to the clustering of Lebesgue function peaks near the endpoints, leading to exponential error growth, but trigonometric interpolation circumvents this for periodic data by leveraging the uniform distribution on the circle.[15]
Regarding degree equivalence, a trigonometric polynomial of degree n, spanning a space of dimension 2n+1, corresponds in complexity to an algebraic polynomial of degree up to 2n, yet it exhibits superior stability when adapted to circular or periodic domains.[3] Specifically, on the unit circle, trigonometric interpolation is equivalent to polynomial interpolation in the complex variable z = e^{i\theta}, where the trigonometric functions map to Laurent polynomials restricted to the unit circle, providing a direct algebraic correspondence without the instability issues of endpoint-focused polynomial methods.[3]
General interpolation problem
The general trigonometric interpolation problem seeks to approximate a periodic function by finding a trigonometric polynomial that exactly matches prescribed values at specified points within one period. Given distinct points x_j \in [0, 2\pi) for j = 0, \dots, m-1, and corresponding function values f_j, the goal is to determine a trigonometric polynomial T of degree at most n such that T(x_j) = f_j for each j, where the number of points satisfies m = 2n + 1 (odd case) or m = 2n (even case).[16][17]
In the real-variable formulation, the interpolating polynomial takes the form
T(x) = \frac{a_0}{2} + \sum_{k=1}^n \left( a_k \cos(kx) + b_k \sin(kx) \right),
which spans a vector space of dimension 2n + 1. The unknown coefficients a_0, a_k, b_k are obtained by solving the linear system
T(x_j) = f_j, \quad j = 0, \dots, m-1,
formed by evaluating the polynomial at the interpolation points; this yields a structured matrix analogous to the Vandermonde matrix for polynomial interpolation.[16][18]
Existence and uniqueness of the solution are guaranteed when m = 2n + 1, as the associated linear system is square and the matrix is invertible for distinct points due to the linear independence of the trigonometric basis functions. In the even case m = 2n, the system is underdetermined relative to the full space dimension, necessitating a slight modification, such as restricting the polynomial space to dimension 2n by omitting one basis function (e.g., the highest-frequency sine term), to ensure a unique interpolant.[16][17]
The interpolant can also be expressed in a Lagrange-like basis as T(x) = \sum_{j=0}^{m-1} f_j \ell_j(x), where each \ell_j(x) is a trigonometric polynomial satisfying \ell_j(x_i) = \delta_{ij} (Kronecker delta); for general points, these basis functions are constructed via the product of shifted Dirichlet kernels or equivalent mechanisms, though explicit forms are more readily available for equidistant nodes.[16][18]
Trigonometric interpolation can be reformulated in the complex domain by expressing the interpolating trigonometric polynomial as a finite sum of complex exponentials. For a set of 2n+1 distinct points x_j \in [0, 2\pi), j = 0, \dots, 2n, the interpolant T(x) takes the form
T(x) = \sum_{k=-n}^{n} c_k e^{i k x},
where the complex coefficients c_k \in \mathbb{C} are chosen to satisfy the interpolation conditions T(x_j) = f(x_j) for a given function f.[4]
This leads to the system of equations
\sum_{k=-n}^{n} c_k e^{i k x_j} = f(x_j), \quad j = 0, \dots, 2n.
In matrix form, this is V \mathbf{c} = \mathbf{f}, where \mathbf{c} = (c_{-n}, \dots, c_n)^T and \mathbf{f} = (f(x_0), \dots, f(x_{2n}))^T are vectors of length 2n+1, and V is the (2n+1) \times (2n+1) Vandermonde-like matrix with entries V_{j k} = e^{i k x_j} for row indices j = 0, \dots, 2n and column indices k = -n, \dots, n. The matrix V is invertible provided the points x_j are distinct, ensuring a unique solution for the coefficients \mathbf{c} = V^{-1} \mathbf{f}.[4]
This complex formulation relates directly to the real trigonometric form T(x) = \frac{a_0}{2} + \sum_{k=1}^n (a_k \cos(k x) + b_k \sin(k x)) via Euler's formula, e^{i k x} = \cos(k x) + i \sin(k x). Specifically, the coefficients satisfy c_0 = \frac{a_0}{2}, c_k = \frac{a_k - i b_k}{2} for k = 1, \dots, n, and c_{-k} = \frac{a_k + i b_k}{2} for k = 1, \dots, n, with the reality of T(x) implying \overline{c_{-k}} = c_k. This mapping preserves the equivalence between the two representations while facilitating algebraic manipulation in the complex plane.[19]
The complex exponential basis offers computational advantages, particularly for equidistant nodes, where the system can be solved efficiently using the fast Fourier transform, converting the interpolation into a discrete Fourier analysis problem.[4]
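As a concrete illustration of this formulation, the following sketch (Python with NumPy; the function and variable names are illustrative, not from the cited sources) builds the Vandermonde-like matrix V for arbitrary distinct nodes and solves V \mathbf{c} = \mathbf{f} directly:

import numpy as np

def complex_trig_interp(x_nodes, f_vals, n):
    # Build V_{jk} = e^{i k x_j} for k = -n..n and solve V c = f.
    k = np.arange(-n, n + 1)
    V = np.exp(1j * np.outer(x_nodes, k))
    c = np.linalg.solve(V, f_vals.astype(complex))
    # Evaluator for T(x) = sum_k c_k e^{ikx}; real data gives a real interpolant.
    return lambda x: (np.exp(1j * np.outer(np.atleast_1d(x), k)) @ c).real

rng = np.random.default_rng(0)
nodes = np.sort(rng.uniform(0, 2 * np.pi, 5))   # 5 distinct points, so n = 2
vals = rng.standard_normal(5)
T = complex_trig_interp(nodes, vals, 2)
print(np.max(np.abs(T(nodes) - vals)))          # ~1e-15: conditions satisfied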
General Solutions
Solutions for odd number of interpolation points
When the number of interpolation points m = 2n + 1 is odd, the trigonometric interpolant is a trigonometric polynomial of degree at most n, expressed in complex exponential form as
p(x) = \sum_{k=-n}^{n} c_k e^{i k x},
where the coefficients c_k are determined to satisfy p(x_j) = f_j for j = 0, \dots, m-1. For general nodes x_j, the coefficients solve a Vandermonde-like linear system. For equidistant nodes where the basis functions exhibit discrete orthogonality, the coefficients take the explicit DFT-like form
c_k = \frac{1}{m} \sum_{j=0}^{m-1} f_j e^{-i k x_j}, \quad k = -n, \dots, n.[2]
This formulation arises from the complex exponential basis introduced in the previous section on problem formulation.[2]
To recover the real-valued trigonometric form p(x) = a_0 + \sum_{k=1}^n (a_k \cos(k x) + b_k \sin(k x)), the coefficients are obtained from the complex ones assuming p(x) is real, so c_{-k} = \overline{c_k}. Specifically,
a_0 = c_0, \quad a_k = 2 \operatorname{Re}(c_k), \quad b_k = -2 \operatorname{Im}(c_k), \quad k = 1, \dots, n.[2]
Consider the example with n = 1, so m = 3 points at equidistant nodes x_0 = 0, x_1 = 2\pi/3, x_2 = 4\pi/3, and values f_0 = 1, f_1 = 0, f_2 = 0. The coefficients are
c_k = \frac{1}{3} \sum_{j=0}^{2} f_j e^{-i k x_j} = \frac{1}{3} \left( 1 \cdot e^{0} + 0 + 0 \right) = \frac{1}{3}, \quad k = -1, 0, 1,
since the other terms vanish. Thus, c_{-1} = c_0 = c_1 = 1/3. The real coefficients are a_0 = 1/3, a_1 = 2 \operatorname{Re}(1/3) = 2/3, b_1 = -2 \operatorname{Im}(1/3) = 0, yielding
p(x) = \frac{1}{3} + \frac{2}{3} \cos x,
which verifies p(0) = 1, p(2\pi/3) = 1/3 - 1/3 = 0, p(4\pi/3) = 1/3 - 1/3 = 0.[2]
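This small example can be reproduced with a few lines of NumPy (an illustrative sketch):

import numpy as np

m = 3
x = 2 * np.pi * np.arange(m) / m
f = np.array([1.0, 0.0, 0.0])
c = np.fft.fft(f) / m                          # all coefficients equal 1/3
a0, a1, b1 = c[0].real, 2 * c[1].real, -2 * c[1].imag
p = lambda t: a0 + a1 * np.cos(t) + b1 * np.sin(t)
print(a0, a1, b1)                              # 1/3, 2/3, 0
print(p(x))                                    # [1, 0, 0] up to rounding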
The uniqueness of this interpolant follows from the dimension of the space of trigonometric polynomials of degree at most n being 2n + 1 = m, matching the number of points, and the linear independence of the functions \{ e^{i k x} \}_{k=-n}^n at distinct points x_j modulo 2\pi, ensuring the evaluation matrix is invertible.[20]
Solutions for even number of interpolation points
When the number of interpolation points m = 2n is even, the space of trigonometric polynomials of degree at most n has dimension 2n + 1, exceeding the number of conditions by one and rendering the interpolation problem underdetermined for general nodes. To resolve this and ensure uniqueness, the solution space is restricted to a subspace of dimension 2n, such as trigonometric polynomials of degree at most n-1 augmented by a single additional term at frequency n, typically a cosine-like term d \cos(n x + \phi). A common choice is the "balanced" case, setting the sine coefficient at degree n to zero (b_n = 0). For general nodes, coefficients are found by solving the corresponding linear system.[21]
In the complex exponential formulation, the interpolant is t(x) = \sum_{k=-n}^{n} c_k e^{i k x}, subject to the constraint c_{-n} = c_n (balanced case) to incorporate a cosine-like highest-frequency term, excluding the pure sine component at degree n. The coefficients c_k for k = -n+1, \dots, n-1 and the shared c_n = c_{-n} are obtained by solving the linear system \sum_{k=-n}^{n} c_k e^{i k x_j} = f(x_j) for j = 0, \dots, 2n-1. For equidistant nodes, this reduces to the discrete Fourier transform formula c_k = \frac{1}{m} \sum_{j=0}^{m-1} f(x_j) e^{-i k x_j} for |k| < n, with special handling for the Nyquist frequency k = \pm n, where c_n = c_{-n} = \frac{1}{m} \sum_{j=0}^{m-1} f(x_j) (-1)^j (real-valued). Note that for equidistant nodes, \sin(n x) \equiv 0 at the nodes, so it is naturally excluded from the interpolation space.[21][22]
The corresponding real-valued expression is t(x) = \frac{a_0}{2} + \sum_{k=1}^{n-1} \left( a_k \cos(k x) + b_k \sin(k x) \right) + a_n \cos(n x), where the coefficients a_k, b_k for lower degrees are the standard Fourier coefficients, and a_n (with b_n = 0) resolves the additional freedom in the balanced case. This form emphasizes the cosine augmentation for the highest term, promoting stability in the interpolant.[21]
For a concrete example with four points (m=4, n=2) at equispaced nodes x_0 = 0, x_1 = \pi/2, x_2 = \pi, x_3 = 3\pi/2 with values f_0, f_1, f_2, f_3, the balanced interpolant is t(x) = a_0 + a_1 \cos x + b_1 \sin x + a_2 \cos(2x), excluding \sin(2x) since it vanishes at the nodes. The coefficients are computed by solving the 4×4 linear system from the basis functions evaluated at the nodes, yielding explicit formulas:
a_0 = \frac{1}{4} (f_0 + f_1 + f_2 + f_3), \quad a_1 = \frac{f_0 - f_2}{2}, \quad b_1 = \frac{f_1 - f_3}{2}, \quad a_2 = \frac{f_0 + f_2 - f_1 - f_3}{4}.
For instance, these ensure interpolation for general data, such as f = [0, 1, 0, 0], where b_1 = 1/2 is nonzero.[22]
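A quick numerical check of these formulas (Python with NumPy; an illustrative sketch):

import numpy as np

f0, f1, f2, f3 = 0.0, 1.0, 0.0, 0.0            # the sample data f = [0, 1, 0, 0]
a0 = (f0 + f1 + f2 + f3) / 4
a1 = (f0 - f2) / 2
b1 = (f1 - f3) / 2                             # = 1/2, as stated above
a2 = (f0 + f2 - f1 - f3) / 4
t = lambda x: a0 + a1 * np.cos(x) + b1 * np.sin(x) + a2 * np.cos(2 * x)
nodes = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
print(t(nodes))                                # [0, 1, 0, 0] up to rounding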
Equidistant Nodes
For an odd number of equidistant interpolation points m = 2n + 1, the nodes are given by x_j = \frac{2\pi j}{m} for j = 0, 1, \dots, m-1. The unique trigonometric polynomial T(x) of degree at most n that interpolates given values f_j = f(x_j) at these points can be expressed in closed form using the Dirichlet kernel.[23]
The kernel-based formula is
T(x) = \sum_{j=0}^{m-1} f_j \, D_n(x - x_j),
where the normalized Dirichlet kernel is
D_n(\theta) = \frac{\sin\left( \left(n + \frac{1}{2}\right) \theta \right)}{m \sin\left( \frac{\theta}{2} \right)}.
This expression follows from the fact that the Lagrange basis functions for trigonometric interpolation at these equidistant nodes are scaled shifts of the Dirichlet kernel, ensuring T(x_j) = f_j and T(x) is a trigonometric polynomial of the required degree.[23]
Equivalently, the interpolant admits an explicit Fourier series representation
T(x) = \sum_{k=-n}^{n} c_k \, e^{i k x},
with coefficients obtained via the discrete Fourier transform:
c_k = \frac{1}{m} \sum_{j=0}^{m-1} f_j \, e^{-i k x_j} = \frac{1}{m} \sum_{j=0}^{m-1} f_j \, \omega^{-k j}, \quad \omega = e^{i 2\pi / m}.
This form directly computes the trigonometric coefficients from the data, leveraging the periodicity and equidistribution of the nodes.
To illustrate, consider the Runge-like test function f(x) = \frac{1}{1 + 25 \sin^2 x} interpolated at m = 5 (n = 2) equidistant points on [0, 2\pi). The values f_j are evaluated at the nodes, the DFT coefficients c_k for k = -2, \dots, 2 are computed using the formula above, and T(x) is assembled as the corresponding trigonometric polynomial, demonstrating the method's ability to approximate periodic functions with low-degree terms while respecting the interpolation conditions.[24]
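The kernel form lends itself to a direct implementation; the sketch below (Python with NumPy; the guard for the removable singularity at \theta = 0, where the kernel's limit is 1, is an implementation detail rather than part of the cited formula) evaluates T(x) for this test function:

import numpy as np

def D(theta, n, m):
    # Normalized Dirichlet kernel; its limit at theta = 0 is (2n+1)/m = 1.
    theta = np.asarray(theta, float)
    s = np.sin(theta / 2)
    safe = np.where(np.abs(s) < 1e-14, 1.0, s)
    return np.where(np.abs(s) < 1e-14, 1.0, np.sin((n + 0.5) * theta) / (m * safe))

m, n = 5, 2
xj = 2 * np.pi * np.arange(m) / m
f = lambda t: 1.0 / (1.0 + 25.0 * np.sin(t) ** 2)
fj = f(xj)
T = lambda t: sum(fj[j] * D(t - xj[j], n, m) for j in range(m))
print(np.max(np.abs(T(xj) - fj)))              # ~0: interpolation at the nodes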
For an even number of equidistant interpolation points m = 2n, the nodes are defined as x_j = \frac{2\pi j}{m} for j = 0, 1, \dots, m-1. The space of trigonometric polynomials that can interpolate arbitrary data at these points consists of functions of the form T(x) = \sum_{k=0}^{n} a_k \cos(k x) + \sum_{k=1}^{n-1} b_k \sin(k x), where the \sin(n x) term is excluded because it vanishes at all x_j. This space has dimension 2n, matching the number of points, ensuring uniqueness.[25]
The coefficients in the Fourier form are computed similarly to the discrete Fourier transform, but with special handling for the Nyquist frequency k = n. For |k| < n, the complex coefficients are c_k = \frac{1}{m} \sum_{j=0}^{m-1} f(x_j) e^{-i k x_j}, and T(x) = \sum_{k=-n+1}^{n-1} c_k e^{i k x}. For the Nyquist term, c_n = \frac{1}{m} \sum_{j=0}^{m-1} f(x_j) (-1)^j, since e^{i n x_j} = (-1)^j. The corresponding term is then c_n \cos(n x), using the real part of the complex exponential sum to ensure the interpolant is real-valued and correctly reproduces the data.[26]
As an example, consider four-point interpolation (m=4, n=2) of a step function f(x) = -1 for 0 \leq x < \pi and f(x) = 1 for \pi \leq x < 2\pi, with values f(0) = -1, f(\pi/2) = -1, f(\pi) = 1, f(3\pi/2) = 1. The Nyquist coefficient is c_2 = \frac{1}{4} [ (-1)(1) + (-1)(-1) + (1)(1) + (1)(-1) ] = \frac{1}{4} (-1 + 1 + 1 - 1) = 0, yielding no Nyquist term. The lower-frequency coefficients are c_1 = \frac{1}{4} \sum f(x_j) e^{-i x_j} = \frac{-1 + i}{2} (yielding a_1 = -1, b_1 = -1), c_{-1} = \frac{-1 - i}{2}, and c_0 = 0. Thus, T(x) = -\cos x - \sin x, which matches the data points and demonstrates how the even-point formulation uses lower frequencies to preserve the function's behavior without the Nyquist contribution in this symmetric case.[25]
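These values can be confirmed directly with an FFT (a minimal NumPy check; note that NumPy stores c_{-1} in the last output slot):

import numpy as np

f = np.array([-1.0, -1.0, 1.0, 1.0])           # samples at 0, pi/2, pi, 3*pi/2
c = np.fft.fft(f) / 4                          # [c_0, c_1, c_2 (Nyquist), c_{-1}]
print(c)                                       # [0, (-1+1j)/2, 0, (-1-1j)/2]
T = lambda x: -np.cos(x) - np.sin(x)
print(T(np.pi / 2 * np.arange(4)))             # [-1, -1, 1, 1]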
Numerical implementation
The direct method for computing trigonometric interpolants involves formulating the interpolation problem as a linear system \mathbf{A} \mathbf{c} = \mathbf{f}, where \mathbf{A} is the matrix with entries from the trigonometric basis functions (e.g., \cos(k x_j) and \sin(k x_j)) evaluated at the nodes x_j, \mathbf{c} contains the coefficients, and \mathbf{f} are the data values. For general nodes, this system is solved using Gaussian elimination, which requires O(m^3) operations for m interpolation points; for equidistant nodes the complex exponential basis yields a matrix that is unitary up to scaling, with condition number 1.[2][27]
For equidistant nodes, an efficient alternative leverages the discrete Fourier transform (DFT) to obtain the coefficients directly from the data. The complex coefficients c_k are given by the DFT of the values f_j, computed as c_k = \frac{1}{m} \sum_{j=0}^{m-1} f_j e^{-2\pi i k j / m} for k = 0, \dots, m-1, and this can be evaluated in O(m \log m) time using the fast Fourier transform (FFT).[2] To obtain real trigonometric coefficients, the real and imaginary parts of the c_k are separated: the constant term is a_0 = \operatorname{Re}(c_0), while for k = 1, \dots, \lceil m/2 \rceil - 1, a_k = 2 \operatorname{Re}(c_k) and b_k = -2 \operatorname{Im}(c_k), with the Nyquist frequency k = m/2 handled separately if m is even.[28]
Once coefficients are obtained, the interpolant is evaluated as p(x) = a_0 + \sum_{k=1}^{\lceil m/2 \rceil - 1} (a_k \cos(kx) + b_k \sin(kx)) + a_{m/2} \cos((m/2) x), the last term present only when m is even. Direct summation works for small m, but for stability and efficiency, especially at evaluation points near the nodes, the barycentric form is preferred, expressing p(x) = \frac{\sum_{j=0}^{m-1} \frac{w_j f_j}{\sin((x - x_j)/2)} }{\sum_{j=0}^{m-1} \frac{w_j}{\sin((x - x_j)/2)} }, where for an odd number of equidistant nodes the weights reduce to w_j = (-1)^j; this second-kind form avoids ill-conditioned Lagrange bases.[29]
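A sketch of this barycentric evaluation for an odd number of equidistant nodes (Python with NumPy; the node-hit guard is an implementation convenience):

import numpy as np

def trig_bary(x, xj, fj):
    # Second-kind barycentric form with w_j = (-1)^j (odd m, equidistant nodes).
    w = (-1.0) ** np.arange(len(xj))
    d = np.sin((x - xj) / 2)
    if np.any(np.abs(d) < 1e-14):              # x coincides with a node
        return fj[np.argmin(np.abs(d))]
    q = w / d
    return np.dot(q, fj) / np.sum(q)

m = 15
xj = 2 * np.pi * np.arange(m) / m
fj = np.exp(np.sin(xj))                        # smooth periodic test data
print(trig_bary(1.0, xj, fj), np.exp(np.sin(1.0)))   # agree to high accuracy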
In practice, languages like MATLAB and Julia provide built-in FFT functions for direct implementation. For example, in MATLAB, coefficients are computed as z = fft(f)/m;, followed by extraction of real and imaginary parts for a_k and b_k. Similarly, in Julia using the FFTW package, c = fft(y)/N yields the coefficients for equidistant nodes over [0, 2\pi). A runnable Python version of this computation (assuming even m) is:
import numpy as np

def real_trig_coeffs(y):
    # Real coefficients a_k, b_k from samples y at equidistant nodes (even m).
    m = len(y)
    z = np.fft.fft(y) / m            # complex DFT coefficients c_k
    a0 = z[0].real
    a = 2 * z[1:m // 2].real         # a_1, ..., a_{m/2 - 1}
    b = -2 * z[1:m // 2].imag        # b_1, ..., b_{m/2 - 1}
    a_nyq = z[m // 2].real           # Nyquist cosine coefficient a_{m/2}
    return a0, a, b, a_nyq
This approach ensures O(m \log m) overall complexity for coefficient computation and O(m) per evaluation point via direct sum or barycentric weights.[28][16]
Trigonometric interpolation at equidistant nodes on the unit circle or interval [0, 2\pi) exhibits a profound equivalence to the discrete Fourier transform (DFT). Specifically, for N equally spaced points x_j = 2\pi j / N, j = 0, \dots, N-1, the unique trigonometric polynomial of degree at most (N-1)/2 that interpolates given data values f(x_j) has coefficients that are precisely the DFT of the data sequence \{f(x_j)\}.[2][30]
The DFT coefficients c_k for the interpolant are computed as
c_k = \frac{1}{N} \sum_{j=0}^{N-1} f(x_j) e^{-i k x_j}, \quad k = -(N-1)/2, \dots, (N-1)/2,
where the range of k adjusts for odd N, ensuring the basis functions e^{i k x} are orthogonal over the discrete points. This formulation arises because the interpolation matrix, a discrete Fourier matrix, is unitary up to scaling, directly yielding the coefficients without solving a general linear system.[2][30]
The interpolating trigonometric polynomial T(x) is then evaluated as
T(x) = \sum_{k=-(N-1)/2}^{(N-1)/2} c_k e^{i k x},
which corresponds to the inverse DFT (IDFT) formula when evaluated at the original nodes, reproducing f(x_j) exactly due to the completeness of the basis on those points. This representation positions trigonometric interpolation as a discrete analog of Fourier series approximation.[2][30]
Several key properties follow from this connection. Parseval's theorem holds in the discrete setting, stating that the sum of squared data values equals N times the sum of squared coefficients: \sum |f(x_j)|^2 = N \sum |c_k|^2, reflecting energy preservation in the transform. Moreover, the interpolant T(x) serves as the partial Fourier series of degree (N-1)/2 evaluated at the nodes, providing a band-limited approximation to periodic functions.[2][30]
This DFT-based approach circumvents direct matrix inversion for coefficient computation, which would otherwise require O(N^2) operations. Instead, the fast Fourier transform (FFT), introduced by Cooley and Tukey in 1965, enables evaluation in O(N \log N) time, revolutionizing practical implementations of trigonometric interpolation.[2][30]
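The following sketch (Python with NumPy, odd N; names illustrative) ties these statements together: coefficients via FFT, symmetric-frequency evaluation, and a check of the discrete Parseval identity:

import numpy as np

N = 7
xj = 2 * np.pi * np.arange(N) / N
fj = np.exp(np.sin(xj))                        # smooth periodic test data
c = np.fft.fft(fj) / N                         # DFT coefficients
k = np.fft.fftfreq(N, 1.0 / N)                 # integer frequencies 0,1,2,3,-3,-2,-1
T = lambda x: np.sum(c * np.exp(1j * k * x)).real
print(np.isclose(T(xj[2]), fj[2]))             # interpolant reproduces the data
print(np.isclose(np.sum(np.abs(fj) ** 2),      # discrete Parseval identity
                 N * np.sum(np.abs(c) ** 2)))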
Error Analysis
Interpolation error estimates
The error in trigonometric interpolation arises from the difference between a periodic function f and its interpolant T_n f, a trigonometric polynomial of degree at most n. For interpolation at $2n+1 equidistant nodes in [0, 2\pi), the maximum error is bounded using the Lebesgue constant \Lambda_n = \max_x \sum_{j=0}^{2n} |l_j(x)|, where l_j(x) are the Lagrange basis functions adapted to trigonometric interpolation. This constant measures the worst-case amplification of the best approximation error and grows logarithmically as \Lambda_n \sim \frac{2}{\pi} \log n + C for some constant C > 0.[31]
The general uniform error bound for the interpolant is \|f - T_n f\|_\infty \leq (1 + \Lambda_n) E_n(f), where E_n(f) = \inf_{P \in \mathcal{T}_n} \|f - P\|_\infty is the best uniform approximation error by trigonometric polynomials of degree at most n. This estimate highlights that the interpolation error is controlled by the logarithmic growth of \Lambda_n times the inherent approximation capability of the trigonometric polynomial space.[5]
For analytic functions, specifically those periodic and analytic in a neighborhood of the unit torus (or an annulus in the complex plane containing the unit circle), the trigonometric interpolant exhibits exponential convergence: E_n(f) = O(\rho^{-n}) for some \rho > 1 depending on the width of the analyticity strip, leading to \|f - T_n f\|_\infty = O((\log n) \rho^{-n}). This spectral-like rate ensures rapid decay without saturation, as the error adapts to the function's infinite smoothness.[32]
For functions with finite smoothness, such as those in the Sobolev space with k continuous derivatives, the best approximation error satisfies E_n(f) = O(1/n^k), yielding an overall interpolation error of O((\log n)/n^k) at equidistant nodes. This algebraic rate underscores the method's effectiveness for moderately smooth periodic functions, though the logarithmic factor slightly degrades the bound compared to the best approximation.[5]
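These rates can be observed empirically; the sketch below (Python with NumPy; the test functions are chosen for illustration) measures the maximum interpolation error for an analytic periodic function and for one with only finite smoothness:

import numpy as np

def interp_error(f, N):
    # Max error of the N-point trigonometric interpolant on a fine grid (odd N).
    xj = 2 * np.pi * np.arange(N) / N
    c = np.fft.fft(f(xj)) / N
    k = np.fft.fftfreq(N, 1.0 / N)             # symmetric integer frequencies
    xt = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    T = (np.exp(1j * np.outer(xt, k)) @ c).real
    return np.max(np.abs(T - f(xt)))

analytic = lambda x: 1.0 / (1.0 + 25.0 * np.sin(x) ** 2)   # periodic, analytic
rough = lambda x: np.abs(np.sin(x)) ** 3                   # C^2 but not C^3
for N in (11, 21, 41, 81):
    print(N, interp_error(analytic, N), interp_error(rough, N))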
Stability and the trigonometric Runge phenomenon
Trigonometric interpolation at high degrees can exhibit numerical instability, particularly for periodic functions with discontinuities or sharp features. When the underlying function f has a jump discontinuity, the interpolant develops pronounced oscillations near these points, similar to the Gibbs phenomenon observed in Fourier series approximations. Unlike smooth functions, where spectral convergence occurs, the overshoot in these oscillations reaches approximately 9% of the jump height and persists without diminishing as the degree n increases, leading to poor local accuracy despite global convergence in L^2 norms.[33] This effect is inherent to the projection onto trigonometric polynomials and highlights the sensitivity of high-degree interpolants to non-smoothness in f.
The trigonometric analog of Runge's phenomenon manifests in the use of equidistant nodes, where the growing Lebesgue constant \Lambda_n = O(\log n) amplifies small errors in function values, resulting in exaggerated oscillations, especially near the boundaries of the period for functions with limited analyticity. Although this growth is milder than the exponential divergence in polynomial interpolation on equidistant points, it still causes error amplification for large n, particularly when the periodized extension of f introduces artificial singularities close to the interpolation interval. In subperiodic settings—interpolation over a subinterval of the full period—equidistant nodes exacerbate this issue, as the bounded domain mimics endpoint effects akin to non-periodic problems.[34][35]
To mitigate these stability issues and the trigonometric Runge phenomenon, non-equidistant node distributions, such as Chebyshev-extended or Chebyshev-like angular nodes, can be employed. These nodes cluster points near the interval endpoints, yielding a Lebesgue constant bounded by \Lambda_n \leq \frac{2}{\pi} \log n + \frac{4}{\pi}, independent of the subinterval length, thereby reducing error amplification. Alternatively, regularization techniques like spectral filtering (e.g., the Erfclog filter) dampen high-frequency components, achieving exponential accuracy even for non-periodic functions adapted to trigonometric bases.[35][36]
Applications
In numerical analysis and quadrature
Trigonometric interpolation plays a key role in numerical quadrature for periodic functions, where the integral over one period, say [0, 2\pi], is approximated by integrating the trigonometric interpolating polynomial p_n of degree at most n. For equispaced nodes x_j = 2\pi j / N with N = 2n+1, this yields the trapezoidal rule \int_0^{2\pi} f(x) \, dx \approx \frac{2\pi}{N} \sum_{j=0}^{N-1} f(x_j), which is exact for trigonometric polynomials of degree up to N-1.[37] This approach inherits the spectral accuracy of the interpolant: for smooth periodic integrands the error decays faster than any fixed power of 1/N, far exceeding the second-order accuracy the trapezoidal rule delivers for non-periodic integrands.
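A short demonstration (Python with NumPy and SciPy; the test integrand e^{\cos x} has exact integral 2\pi I_0(1), with I_0 the modified Bessel function) shows the rapid convergence for a smooth periodic integrand:

import numpy as np
from scipy.special import i0

f = lambda x: np.exp(np.cos(x))
exact = 2 * np.pi * i0(1.0)                    # exact value of the integral
for N in (4, 8, 16):
    xj = 2 * np.pi * np.arange(N) / N
    approx = 2 * np.pi / N * np.sum(f(xj))     # periodic trapezoidal rule
    print(N, abs(approx - exact))              # error decays geometrically in N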
A related quadrature method is Clenshaw-Curtis, which, while typically applied to non-periodic functions on [-1, 1], can be interpreted through trigonometric interpolation via the substitution x = \cos \theta. Here, the nodes are x_k = \cos(\pi k / n) for k = 0, \dots, n, corresponding to equispaced points in \theta \in [0, \pi], and the approximation is \int_{-1}^{1} f(x) \, dx \approx \sum_{k=0}^{n} w_k f(x_k), where the weights w_k are derived from Chebyshev moments \int_{-1}^{1} T_m(x) \, dx (with T_m the Chebyshev polynomials) via fast Fourier transform techniques for efficiency.[38] This trigonometric perspective facilitates the computation of weights for weighted integrals, such as those involving densities \rho(x), by transforming the problem into angular quadrature.[38]
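One standard realization of this FFT-based construction computes the Chebyshev coefficients from the even extension of the samples and sums them against the moments \int_{-1}^{1} T_m(x) \, dx = 2/(1 - m^2) for even m; the sketch below (Python with NumPy; a simplified variant for illustration, not the weight formulas of [38]) shows the idea:

import numpy as np

def clenshaw_curtis(f, n):
    # Clenshaw-Curtis quadrature on [-1, 1] with n+1 Chebyshev points (n >= 2).
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)                  # equispaced in theta = arccos(x)
    fx = f(x)
    g = np.concatenate([fx, fx[-2:0:-1]])      # even extension, length 2n
    a = np.fft.rfft(g).real / n                # Chebyshev coefficients a_0..a_n
    a[0] /= 2
    a[-1] /= 2
    m = np.arange(0, n + 1, 2)                 # odd-m moments vanish
    return np.sum(a[::2] * 2.0 / (1.0 - m * m))

print(clenshaw_curtis(lambda x: x ** 2, 8))    # 2/3, exact for this polynomial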
In addition to integration, trigonometric interpolation enables accurate numerical differentiation of periodic functions by explicitly differentiating the interpolating trigonometric polynomial p_n(x) = a_0/2 + \sum_{k=1}^{n} (a_k \cos(kx) + b_k \sin(kx)), yielding p_n'(x) = \sum_{k=1}^{n} k (-a_k \sin(kx) + b_k \cos(kx)), where the coefficients a_k, b_k are obtained via the discrete Fourier transform. Alternatively, finite difference schemes can be applied directly on the equispaced grid, but the spectral derivative from the interpolant provides higher accuracy for smooth components.[39] This method is particularly effective in spectral algorithms for solving periodic differential equations.
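In practice the spectral derivative is computed entirely with FFTs; the sketch below (Python with NumPy; zeroing the Nyquist mode for even N is a standard convention, since its sine component vanishes at the nodes) differentiates a smooth periodic function to near machine precision:

import numpy as np

N = 16
x = 2 * np.pi * np.arange(N) / N
f = np.exp(np.sin(x))
k = np.fft.fftfreq(N, 1.0 / N)                 # integer frequencies
k[N // 2] = 0                                  # drop the Nyquist mode's derivative
df = np.fft.ifft(1j * k * np.fft.fft(f)).real  # multiply by ik, transform back
exact = np.cos(x) * np.exp(np.sin(x))
print(np.max(np.abs(df - exact)))              # spectrally small error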
An illustrative application is the computation of Fourier coefficients for a periodic function f, where the k-th coefficient \hat{f}(k) = \frac{1}{\pi} \int_0^{2\pi} f(x) \cos(kx) \, dx (for cosine terms) is approximated by integrating f against \cos(kx) using the trigonometric interpolant-based quadrature, equivalent to extracting the DFT coefficient scaled appropriately. This approach efficiently approximates the continuous Fourier transform via the discrete version, with error controlled by the interpolation residual.
In signal processing and Fourier methods
Trigonometric interpolation plays a central role in signal processing due to its intimate connection with the discrete Fourier transform (DFT), which enables efficient computation of interpolants for periodic signals sampled at equidistant points. This relationship allows the representation of a signal as a finite sum of sines and cosines, facilitating analysis and manipulation in the frequency domain.[40]
In spectral methods, trigonometric interpolation serves as the foundational tool for pseudospectral solvers of partial differential equations (PDEs) on periodic domains, such as circles or tori. The interpolant \Pi_N f(x) = \sum_{k=-N/2}^{N/2-1} \tilde{f}_k e^{ikx}, where \tilde{f}_k = \frac{1}{N} \sum_{j=0}^{N-1} f_j e^{-ikx_j}, approximates smooth periodic functions with exponential convergence rates for the solution and its derivatives. This is particularly effective in collocation or Galerkin formulations for time-dependent PDEs, where fast Fourier transforms (FFTs) compute derivatives via \sum_{k} \tilde{f}_k (ik)^p e^{ikx_j} at grid points, achieving high accuracy for problems like the heat equation or Navier-Stokes on periodic boundaries while mitigating aliasing through zero-padding.[41]
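As a concrete instance, the periodic heat equation u_t = u_{xx} can be advanced exactly mode by mode, since each Fourier mode decays as e^{-k^2 t}; a minimal sketch (Python with NumPy; parameters illustrative):

import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
u0 = np.exp(np.sin(x))                         # smooth periodic initial data
k = np.fft.fftfreq(N, 1.0 / N)                 # integer frequencies
t = 0.1
u = np.fft.ifft(np.fft.fft(u0) * np.exp(-k ** 2 * t)).real  # exact modal decay
print(np.isclose(u.mean(), u0.mean()))         # the k = 0 mode is conserved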
A key application in filtering involves low-pass operations by truncating high-frequency coefficients in the trigonometric interpolant, which removes noise or unwanted oscillations without introducing significant distortion in the passband. For instance, retaining only low-frequency terms \tilde{f}_k for |k| \leq M < N/2 yields a smoother approximation, as demonstrated in DFT-based reconstructions where aliasing from high modes is suppressed, improving signal quality in applications like digital filtering.[40]
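A minimal sketch of such truncation-based low-pass filtering (Python with NumPy; the signal, noise level, and cutoff M are illustrative):

import numpy as np

rng = np.random.default_rng(1)
N, M = 128, 8                                  # grid size and frequency cutoff
x = 2 * np.pi * np.arange(N) / N
signal = np.sin(3 * x) + 0.5 * np.cos(5 * x)
noisy = signal + 0.2 * rng.standard_normal(N)
c = np.fft.fft(noisy)
k = np.fft.fftfreq(N, 1.0 / N)
c[np.abs(k) > M] = 0                           # keep only modes with |k| <= M
smoothed = np.fft.ifft(c).real
print(np.max(np.abs(smoothed - signal)))       # well below the noise level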
In audio processing, trigonometric interpolation enables resampling of periodic signals, such as waveforms in digital audio, by performing frequency-domain adjustments via DFT to avoid artifacts like spectral leakage or aliasing that plague linear methods. The LMN variant, an optimized trigonometric approach, resamples by inserting or removing zero-valued frequency bins before inverse DFT, achieving near-zero mean squared error across frequencies and preserving phase integrity without inflation-induced distortions. This connects to windowing functions, where trigonometric bases inherently mitigate edge effects in finite observations, ensuring artifact-free reconstruction for rate conversion in audio systems.[42]
For denoising noisy periodic signals, trigonometric interpolation coefficients can be thresholded to isolate dominant frequencies, as in Lasso trigonometric interpolation (LTI), which applies \ell_1-regularization to minimize \|f - P\|^2_2 + \lambda \|c\|_1 over trigonometric polynomials P, yielding sparser, noise-robust approximations. Numerical examples show LTI outperforming classical interpolation on signals like noisy sine or triangular waves, with error bounds \|f - P\| \leq C \lambda that scale favorably with noise levels, making it suitable for real-time signal cleaning in communications or sensor data.[43]
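Because the trigonometric basis is orthogonal on an equispaced grid, the \ell_1-penalized fit reduces in that special case to soft-thresholding of the DFT coefficients; the sketch below (Python with NumPy; a simplified stand-in for the LTI method of [43], with \lambda as an illustrative tuning parameter) demonstrates the effect:

import numpy as np

def soft_threshold_denoise(y, lam):
    # Soft-threshold the DFT coefficients: shrink magnitudes by lam, zero the rest.
    c = np.fft.fft(y) / len(y)
    mag = np.abs(c)
    c = np.where(mag > lam, c * (1 - lam / np.maximum(mag, 1e-300)), 0)
    return np.fft.ifft(c * len(y)).real

rng = np.random.default_rng(2)
N = 256
x = 2 * np.pi * np.arange(N) / N
clean = np.sin(x) + 0.3 * np.sin(7 * x)
noisy = clean + 0.1 * rng.standard_normal(N)
denoised = soft_threshold_denoise(noisy, lam=0.02)
print(np.linalg.norm(denoised - clean) / np.linalg.norm(noisy - clean))  # < 1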