
# Chebyshev nodes

In numerical analysis, Chebyshev nodes are a specific set of $n$ distinct points on the interval $[-1, 1]$ that serve as near-optimal interpolation sites for polynomial approximation of continuous functions. These nodes are the roots of the $n$th-degree Chebyshev polynomial of the first kind, $T_n(x)$, which is defined by $T_n(x) = \cos(n \arccos x)$ and satisfies the recurrence relation $T_0(x) = 1$, $T_1(x) = x$, $T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)$ for $k \geq 1$. Explicitly, the nodes are given by $x_k = \cos\left( \frac{(2k-1)\pi}{2n} \right)$ for $k = 1, 2, \dots, n$, resulting in a distribution that clusters more densely near the endpoints $\pm 1$, with an asymptotic density proportional to $(1 - x^2)^{-1/2}$. Named after the 19th-century Russian mathematician Pafnuty Chebyshev (1821–1894), who developed the underlying polynomials in works from the 1850s and 1860s to study best uniform approximations, the nodes gained prominence in modern numerical methods during the mid-20th century.

Their key advantage stems from the minimax property of Chebyshev polynomials, which ensures that interpolation at these nodes yields the smallest possible maximum error bound among all choices of $n$ points, specifically $|f(x) - p(x)| \leq \frac{M}{2^{n-1} n!}$, where $M$ bounds the $n$th derivative of $f$ on $[-1, 1]$. The endpoint clustering prevents the Runge phenomenon, the severe oscillations near the endpoints that plague interpolation at equispaced points, and leads to a Lebesgue constant of approximately $\frac{2}{\pi} \log(n+1) + 0.9625\ldots$, which grows logarithmically rather than exponentially.

Beyond interpolation, Chebyshev nodes of the first kind (which exclude the endpoints) are integral to efficient quadrature schemes such as Fejér's first rule, while Clenshaw–Curtis integration uses the second-kind variant that includes the endpoints; both enable accurate approximation of definite integrals with fewer evaluations than Gaussian rules in some cases. They also underpin barycentric Lagrange interpolation for stable computation and spectral methods for partial differential equations, allowing nested grids that reuse function values across refinements. For intervals $[a, b]$ other than $[-1, 1]$, the nodes are affinely mapped via $x \mapsto \frac{b-a}{2} x + \frac{a+b}{2}$. Variants include the Chebyshev points of the second kind, which incorporate the endpoints as $x_k = \cos\left( \frac{k\pi}{n} \right)$ for $k = 0, \dots, n$ and are often preferred for boundary-value problems.

## Mathematical Background

### Chebyshev Polynomials

Chebyshev polynomials of the first kind, denoted $T_n(x)$, are a sequence of orthogonal polynomials named after the Russian mathematician Pafnuty Chebyshev, who developed them in the 1850s as part of his foundational work in approximation theory. These polynomials emerged from Chebyshev's investigations into mechanisms and function approximation, providing tools for minimizing errors in polynomial representations of continuous functions. The explicit trigonometric definition is $T_n(x) = \cos(n \arccos x)$ for $x \in [-1, 1]$. They satisfy the initial conditions $T_0(x) = 1$ and $T_1(x) = x$, and for higher degrees the three-term recurrence relation

$$ T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \quad n \geq 1. $$

This recursive formula allows efficient computation and highlights their connection to multiple-angle formulas in trigonometry.[](https://www.ams.org/books/coll/023/) The polynomials are orthogonal on the interval $[-1, 1]$ with respect to the weight function $w(x) = (1 - x^2)^{-1/2}$, satisfying

$$ \int_{-1}^{1} T_m(x)\, T_n(x)\, (1 - x^2)^{-1/2} \, dx = \begin{cases} \pi & m = n = 0, \\ \pi/2 & m = n \neq 0, \\ 0 & m \neq n. \end{cases} $$

This orthogonality property, derived from the Chebyshev weight corresponding to the arcsine distribution, underpins their utility in spectral methods and series expansions.[](https://www.ams.org/books/coll/023/)

A key characteristic is the equioscillation property: $T_n(x)$ attains its maximum absolute value of 1 alternately at $n+1$ points in $[-1, 1]$, specifically at the extrema $x_k = \cos(k\pi/n)$ for $k = 0, 1, \dots, n$, where it alternates between $+1$ and $-1$. This minimax behavior makes $T_n(x)/2^{n-1}$ (for $n \geq 1$) the monic polynomial of degree $n$ with the smallest maximum norm on $[-1, 1]$, a cornerstone of best uniform approximation. The zeros of these polynomials define the Chebyshev nodes used in interpolation.

### Zeros and Extrema of Chebyshev Polynomials

The zeros of the Chebyshev polynomial of the first kind $T_n(x)$ are the values of $x$ in the interval $[-1, 1]$ where $T_n(x) = 0$. They arise from the trigonometric representation $T_n(\cos \theta) = \cos(n \theta)$: setting $\cos(n \theta) = 0$ requires $n \theta = \frac{(2k-1)\pi}{2}$ for integers $k = 1, \dots, n$. Solving for $\theta$ yields $\theta_k = \frac{(2k-1)\pi}{2n}$, so the zeros are explicitly given by

$$ x_k = \cos\left( \frac{(2k-1)\pi}{2n} \right), \quad k = 1, \dots, n. $$

This formula positions the $n$ distinct zeros symmetrically within $(-1, 1)$, excluding the endpoints.[](https://mathworld.wolfram.com/ChebyshevPolynomialoftheFirstKind.html)

The extrema of $T_n(x)$, the local maxima and minima together with the endpoints, occur where the polynomial reaches the values $\pm 1$ and equioscillates $n+1$ times across $[-1, 1]$. From the same trigonometric identity, these critical points correspond to the locations where $\cos(n \theta) = \pm 1$, achieved when $n \theta = k \pi$ for integers $k = 0, \dots, n$. Thus $\theta_k = \frac{k \pi}{n}$, leading to the explicit expressions

$$ x_k = \cos\left( \frac{k \pi}{n} \right), \quad k = 0, \dots, n. $$

At these points $T_n(x_k) = (-1)^k$, confirming the alternation between [maximum and minimum](/page/Maximum_and_minimum) values.
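As a quick numerical check of the recurrence and of the zero formula above, the following MATLAB/Octave sketch (the degree $n = 7$ is an arbitrary illustrative choice) evaluates $T_n$ by the three-term recurrence at the points $\cos\left(\frac{(2k-1)\pi}{2n}\right)$ and confirms that it vanishes there to machine precision:

```
% Illustrative sketch: evaluate T_n by the three-term recurrence and
% confirm that it vanishes at the Chebyshev nodes (degree n = 7 is arbitrary).
n = 7;
x = cos((2*(1:n) - 1)*pi/(2*n));   % claimed zeros of T_n
Tprev = ones(size(x));             % T_0(x)
Tcur  = x;                         % T_1(x)
for k = 1:n-1
  Tnext = 2*x.*Tcur - Tprev;       % T_{k+1} = 2x T_k - T_{k-1}
  Tprev = Tcur;  Tcur = Tnext;
end
disp(max(abs(Tcur)))               % ~1e-15: T_n is zero at every node
```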
There are precisely $n+1$ such extrema, with $x_0 = 1$ and $x_n = -1$.[](https://mathworld.wolfram.com/ChebyshevPolynomialoftheFirstKind.html) Both the zeros and the extrema exhibit clustering near the endpoints $x = \pm 1$, a consequence of the cosine mapping that projects equispaced points in $\theta$ onto the interval with increasing density toward the boundaries. The asymptotic density of these points follows the [arcsine distribution](/page/Arcsine_distribution) $\rho(x) \propto (1 - x^2)^{-1/2}$, resulting in inter-node spacing that scales as $O(1/n^2)$ near $x = \pm 1$. Specifically, for the zeros, the point closest to the [endpoint](/page/Endpoint) 1 satisfies $x_1 \approx 1 - \frac{\pi^2}{8 n^2}$, enhancing [numerical stability](/page/Numerical_stability) in applications like [interpolation](/page/Interpolation) by mitigating the Runge phenomenon.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf)

## Definition

### Standard Chebyshev Nodes

The standard Chebyshev nodes consist of the $n$ roots of the $n$th-degree Chebyshev polynomial of the first kind, $T_n(x)$, located within the interval $[-1, 1]$. These points are explicitly given by

$$ x_k = \cos\left( \frac{(2k-1)\pi}{2n} \right), \quad k = 1, 2, \dots, n. $$

Unlike the Chebyshev extrema points, which are the locations where $|T_n(x)|$ attains its successive maxima and include the endpoints $\pm 1$, the standard nodes lie strictly in the open interval $(-1, 1)$ and exclude the boundaries. The extrema are instead given by $\cos\left( \frac{j\pi}{n} \right)$ for $j = 0, 1, \dots, n$.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf) The nodes are computed directly from the cosine formula, ensuring [numerical stability](/page/Numerical_stability) without resorting to polynomial root-finding algorithms, which can suffer from ill-conditioning at high degrees.[](https://personal.math.vt.edu/embree/math5466/lecture15.pdf) For illustration, when $n=3$ the nodes are $\cos(\pi/6) \approx 0.866$, $\cos(\pi/2) = 0$, and $\cos(5\pi/6) \approx -0.866$.

### Interval Projection

To extend the standard Chebyshev nodes defined on the interval $[-1, 1]$ to an arbitrary finite interval $[a, b]$, an [affine transformation](/page/Affine_transformation) is employed. This linear mapping scales and shifts the nodes while maintaining their desirable distribution properties for numerical applications such as [interpolation](/page/Interpolation) and [quadrature](/page/Quadrature). The transformed nodes $t_k$ are obtained via the formula

$$ t_k = \frac{b - a}{2} x_k + \frac{a + b}{2}, $$

where $x_k$ denote the standard Chebyshev nodes on $[-1, 1]$.[](https://depts.washington.edu/ph506/Boyd.pdf)[](https://people.maths.ox.ac.uk/trefethen/8all.pdf) The rationale for this affine transformation lies in its preservation of key geometric and approximation-theoretic features of the original nodes. Specifically, it maintains the relative spacing, including the characteristic clustering toward the endpoints, and upholds the [minimax](/page/Minimax) optimality, ensuring that the maximum [interpolation](/page/Interpolation) error remains minimized in the [uniform norm](/page/Uniform_norm) over the new [interval](/page/Interval). This property arises because Chebyshev polynomials transform under affine changes of variables in a way that scales the error bound proportionally to the [interval](/page/Interval) length $(b - a)/2$, without altering the equioscillation behavior central to their optimality.[](https://depts.washington.edu/ph506/Boyd.pdf)
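The two formulas above translate directly into code. The following sketch (the interval $[a, b] = [0, 3]$ and the count $n = 8$ are arbitrary illustrative choices) generates the standard nodes on $[-1, 1]$ and projects them onto $[a, b]$; a worked special case for $[0, 1]$ follows below.

```
% Illustrative sketch: standard Chebyshev nodes on [-1,1] and their affine
% projection onto an interval [a,b] (here [0,3], chosen only for illustration).
n = 8;
k = 1:n;
x = cos((2*k - 1)*pi/(2*n));      % standard nodes, ordered from near +1 to near -1
a = 0;  b = 3;
t = (b - a)/2 * x + (a + b)/2;    % projected nodes on [a,b]; note that a and b
                                  % themselves are never included (the zeros exclude +-1)
```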
A concrete example illustrates the projection: for the unit interval $[0, 1]$, the mapping simplifies to

$$ t_k = \frac{1}{2} (1 + x_k), $$

which shifts and scales the nodes while retaining their non-uniform distribution.[](https://depts.washington.edu/ph506/Boyd.pdf) From a numerical perspective, the transformation ensures continued clustering of nodes near $a$ and $b$, enhancing resolution for functions with boundary-layer behavior or rapid variation at the ends of the interval. Importantly, since the standard Chebyshev nodes (the zeros of the polynomials) exclude the endpoints $\pm 1$, the projected nodes likewise exclude $a$ and $b$ unless a modified variant, such as one incorporating the extrema, is selected for boundary conditions requiring endpoint evaluation. This avoids potential ill-conditioning at the boundaries in pseudospectral methods while preserving spectral accuracy.[](https://depts.washington.edu/ph506/Boyd.pdf)[](https://people.maths.ox.ac.uk/trefethen/8all.pdf)

## Properties

### Clustering Near Endpoints

One distinctive feature of Chebyshev nodes is their non-uniform [distribution](/page/Distribution) across the [interval](/page/Interval) $[-1, 1]$, with points becoming progressively denser near the endpoints $x = \pm 1$. This clustering arises from the projection of equally spaced [angles](/page/Angles) onto the [interval](/page/Interval) via the cosine function, as the nodes are given by $x_k = \cos \theta_k$ where the $\theta_k$ are uniformly spaced.[](https://people.maths.ox.ac.uk/trefethen/barycentric.pdf) The resulting density of points follows an [arcsine distribution](/page/Arcsine_distribution), asymptotically proportional to $\frac{n}{\pi \sqrt{1 - x^2}}$ for large $n$; under this density roughly half of the nodes lie within a distance of about $1 - \cos(\pi/4) \approx 0.29$ of one endpoint or the other.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf) Near the endpoint $x = 1$, the spacing between consecutive nodes satisfies the asymptotic relation $\Delta x_k \approx \frac{\pi}{n} \sqrt{1 - x_k^2}$, or, close to the boundary, $\Delta x_k \approx \frac{\pi}{n} \sqrt{2(1 - x_k)}$. As a result, the distance of the nearest node from the endpoint is $O(1/n^2)$, the first node being located at approximately $1 - \frac{\pi^2}{8 n^2}$.
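The $O(1/n^2)$ endpoint clustering follows directly from a half-angle identity applied to the node nearest $+1$:

$$ 1 - x_1 = 1 - \cos\frac{\pi}{2n} = 2\sin^2\frac{\pi}{4n} \approx \frac{\pi^2}{8n^2}, $$

whereas the spacing near the center of the interval is only about $\pi/n$, the image of the uniform angular spacing under the cosine map.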
In contrast, uniform nodes on $[-1, 1]$ exhibit constant spacing of about $2/n$, which fails to adapt to the oscillatory behavior of high-degree polynomials and induces the Runge phenomenon, that is, large oscillations of the interpolant near the endpoints.[](https://people.maths.ox.ac.uk/trefethen/barycentric.pdf) Chebyshev nodes mitigate this by aligning their density with the equilibrium measure for polynomial approximation, ensuring more stable interpolation.[](https://people.maths.ox.ac.uk/trefethen/barycentric.pdf) Visualizations of Chebyshev nodes for increasing $n$ reveal this arcsine-like distribution clearly, with points appearing sparser in the interior of the interval and increasingly packed toward the boundaries, resembling the projection onto the diameter of equally spaced points on a [semicircle](/page/Semicircle).[](https://people.maths.ox.ac.uk/trefethen/8all.pdf) In modern computational contexts, such as spectral methods for solving differential equations, this endpoint clustering enhances numerical conditioning by better resolving boundary layers and reducing ill-conditioning in [differentiation](/page/Differentiation) matrices, where equispaced grids would amplify errors exponentially.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf) The resulting matrices remain well conditioned, with condition numbers growing as $O(n^2)$ for the first-order [differentiation](/page/Differentiation) matrix (and $O(n^{2k})$ for the $k$-th [derivative](/page/Derivative)), facilitating reliable high-order approximations on bounded domains.

### Minimax Error Bound

One of the primary theoretical advantages of Chebyshev nodes in [polynomial interpolation](/page/Polynomial_interpolation) lies in their [minimax](/page/Minimax) property, which keeps the maximum interpolation error within a small factor of the best possible polynomial approximation of the same [degree](/page/Degree). This near-optimality is quantified by the Lebesgue constant $\Lambda_n$, defined as the maximum of the Lebesgue function $\lambda_n(x) = \sum_{k=0}^n |l_k(x)|$, where the $l_k(x)$ are the Lagrange basis polynomials associated with the nodes. For Chebyshev nodes on $[-1, 1]$, $\Lambda_n = \frac{2}{\pi} \log n + O(1)$, exhibiting [logarithmic growth](/page/Logarithmic_growth) with the [degree](/page/Degree) $n$. For equidistant nodes, by contrast, $\Lambda_n$ grows exponentially, asymptotically like $\frac{2^{n+1}}{e\, n \log n}$, leading to the severe instability known as the Runge phenomenon.[](https://epubs.siam.org/doi/10.1137/0717043) The interpolation error for a function $f$ using the degree-$n$ polynomial $p_n$ at Chebyshev nodes satisfies

$$ |f(x) - p_n(x)| \leq (1 + \Lambda_n) \inf_{q \in \Pi_n} \|f - q\|_\infty $$

for all $x \in [-1, 1]$, where $\Pi_n$ denotes the space of polynomials of degree at most $n$ and $\| \cdot \|_\infty$ is the [uniform norm](/page/Uniform_norm). This bound shows that the error is at most a factor of $1 + \Lambda_n$ larger than the best uniform [approximation error](/page/Approximation_error) achievable by any degree-$n$ [polynomial](/page/Polynomial), making Chebyshev nodes nearly optimal for controlling the worst-case deviation.[](https://journalofinequalitiesandapplications.springeropen.com/articles/10.1186/s13660-016-1030-3)
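The contrast in Lebesgue constants can be observed numerically. The sketch below (the number of nodes and the grid resolution are arbitrary illustrative choices) estimates $\Lambda_n$ by maximizing the Lebesgue function over a fine grid, first for Chebyshev and then for equispaced nodes:

```
% Illustrative sketch: estimate the Lebesgue constant for Chebyshev versus
% equispaced nodes by sampling the Lebesgue function on a fine grid.
m  = 11;                                   % number of nodes (degree 10); arbitrary choice
xc = cos((2*(1:m) - 1)*pi/(2*m));          % Chebyshev nodes of the first kind
xe = linspace(-1, 1, m);                   % equispaced nodes
xx = linspace(-1, 1, 5000);                % fine evaluation grid
for nodes = {xc, xe}
  x = nodes{1};
  lam = zeros(size(xx));
  for k = 1:m
    lk = ones(size(xx));
    for j = [1:k-1, k+1:m]
      lk = lk .* (xx - x(j)) / (x(k) - x(j));   % Lagrange basis polynomial l_k
    end
    lam = lam + abs(lk);                        % Lebesgue function lambda(x)
  end
  disp(max(lam))                                % estimate of the Lebesgue constant
end
```

For this size the Chebyshev value is already close to the asymptotic $\frac{2}{\pi}\log m + 0.96$, while the equispaced value is an order of magnitude larger and grows exponentially as $m$ increases.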
This [minimax](/page/Minimax) property arises from the equioscillation theorem in approximation theory, which characterizes the polynomial of minimal [uniform norm](/page/Uniform_norm) through its equal-ripple behavior. Specifically, among all [monic polynomials](/page/Monic_polynomial) of degree $n+1$, the nodal [polynomial](/page/Polynomial) $\omega_{n+1}(x) = \prod_{k=0}^n (x - x_k)$ built on the Chebyshev nodes $x_k = \cos\left(\frac{(2k+1)\pi}{2(n+1)}\right)$ is a scaled Chebyshev [polynomial](/page/Polynomial), $\omega_{n+1}(x) = 2^{-n} T_{n+1}(x)$, and achieves the minimal maximum norm $\|\omega_{n+1}\|_\infty = 2^{-n}$. The equioscillation of $T_{n+1}$ at $n+2$ points ensures that no other [monic polynomial](/page/Monic_polynomial) of degree $n+1$ has a smaller [uniform norm](/page/Uniform_norm).[](https://math.umd.edu/~mariakc/AMSC466/LectureNotes/interpolation.pdf) Furthermore, the Markov brothers' inequality provides sharp bounds on the maximum of the $k$-th derivative of a degree-$n$ [polynomial](/page/Polynomial) $p$ on $[-1, 1]$, stating that

$$ \|p^{(k)}\|_\infty \leq \frac{n^2 (n^2 - 1^2)(n^2 - 2^2) \cdots (n^2 - (k-1)^2)}{(2k-1)!!} \|p\|_\infty, $$

with equality achieved by scaled Chebyshev polynomials. This inequality extends to bounds on derivatives evaluated at or near Chebyshev nodes, ensuring controlled error growth in higher-order approximations and reinforcing the [stability](/page/Stability) of [interpolation](/page/Interpolation) at these points.[](https://www.damtp.cam.ac.uk/user/na/people/Alexei/papers/markov.pdf)

## Applications

### Polynomial Interpolation

Chebyshev nodes are particularly effective for [polynomial interpolation](/page/Polynomial_interpolation) because they mitigate the Runge phenomenon, in which high-degree polynomials interpolated at equidistant points exhibit large oscillations near the endpoints of the interval. By clustering points near the boundaries, Chebyshev nodes distribute the interpolation error more uniformly, leading to stable approximations of smooth functions on $[-1, 1]$. The interpolating [polynomial](/page/Polynomial) of degree at most $n-1$ through $n$ points $(x_k, f_k)$, where the $x_k$ are Chebyshev nodes, can be expressed in Lagrange form as

$$ p(x) = \sum_{k=1}^n f_k\, l_k(x), \qquad l_k(x) = \prod_{\substack{j=1 \\ j \neq k}}^n \frac{x - x_j}{x_k - x_j}. $$

Direct evaluation of this form is costly and can be numerically troublesome for large $n$ because of overflow, underflow, and cancellation in the products. A stable alternative is the barycentric form, which rewrites the interpolant as

$$ p(x) = \frac{\displaystyle\sum_{k=1}^n \frac{w_k f_k}{x - x_k}}{\displaystyle\sum_{k=1}^n \frac{w_k}{x - x_k}}, $$

where the barycentric weights are given by

$$ w_k = \frac{1}{\prod_{\substack{j=1 \\ j \neq k}}^n (x_k - x_j)}. $$

For Chebyshev nodes of the first kind, these weights can be taken explicitly as $w_k = (-1)^{k-1} \sin\left( \frac{(2k-1)\pi}{2n} \right)$, up to a common scaling factor that cancels in the barycentric formula; this removes the $O(n^2)$ weight computation required for general node sets and allows $O(n)$ evaluation per point. The approach avoids explicit computation of the Lagrange basis and maintains accuracy in [floating-point arithmetic](/page/Floating-point_arithmetic).[](https://people.maths.ox.ac.uk/trefethen/barycentric.pdf)

A classic illustration is the interpolation of Runge's function $f(x) = 1/(1 + 25x^2)$ on $[-1, 1]$. With [equidistant](/page/Equidistant) nodes the interpolants develop growing oscillations near $\pm 1$, and the maximum error increases without bound as the degree rises, whereas with Chebyshev nodes the maximum error decreases geometrically, giving uniform approximation without endpoint divergence.[](https://people.maths.ox.ac.uk/trefethen/barycentric.pdf)
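The explicit trigonometric form of the weights given above can be checked directly against the defining product formula. In the following sketch (the size $n = 6$ is an arbitrary illustrative choice), the two agree up to a single common factor, which cancels in the barycentric quotient:

```
% Illustrative sketch: for first-kind Chebyshev nodes, the explicit weights
% (-1)^(k-1) sin((2k-1)pi/(2n)) match the product-formula weights up to one
% common constant, which cancels in the barycentric formula.
n = 6;                                        % arbitrary illustrative size
k = 1:n;
x = cos((2*k - 1)*pi/(2*n));
w_prod = zeros(1, n);
for i = 1:n
  w_prod(i) = 1 / prod(x(i) - x([1:i-1, i+1:n]));   % w_k = 1/prod_{j~=k}(x_k - x_j)
end
w_trig = (-1).^(k - 1) .* sin((2*k - 1)*pi/(2*n));  % explicit trigonometric form
disp(w_prod ./ w_trig)                              % constant ratio (here 2^(n-1)/n)
```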
The stability of [interpolation](/page/Interpolation) at Chebyshev nodes is quantified by the Lebesgue constant $\Lambda_n = \max_{x \in [-1,1]} \sum_{k=1}^n |l_k(x)|$, which bounds the error as $\|f - p\|_\infty \leq (1 + \Lambda_n) \inf_{q \in \Pi_{n-1}} \|f - q\|_\infty$. For Chebyshev nodes, $\Lambda_n = O(\log n)$, so the interpolation [operator](/page/Operator) remains well conditioned even for large $n$, unlike the exponential growth seen for [equidistant](/page/Equidistant) nodes.[](https://epubs.siam.org/doi/10.1137/S0036144502417715) The following MATLAB-style pseudocode implements barycentric interpolation, assuming evaluation points $x$, distinct nodes $x_j$, precomputed weights $w_j$, and function values $f_j$:

```
function p = barycentric_interp(x, xj, fj, wj)
  % Evaluate the barycentric interpolant through (xj, fj), with weights wj,
  % at the points x.
  n = length(xj);
  p = zeros(size(x));
  for i = 1:length(x)
    if any(x(i) == xj)
      p(i) = fj(x(i) == xj);          % exact at a node; avoids division by zero
      continue;
    end
    num = 0;  den = 0;
    for k = 1:n
      tk  = wj(k) / (x(i) - xj(k));   % term shared by numerator and denominator
      num = num + tk * fj(k);
      den = den + tk;
    end
    p(i) = num / den;
  end
end
```

This routine evaluates the interpolant in $O(n)$ operations per point, and the second (true) form of the barycentric formula used here is numerically stable at Chebyshev nodes.[](https://epubs.siam.org/doi/10.1137/S0036144502417715) Compared with Legendre nodes (the zeros of the Legendre polynomial), Chebyshev nodes yield slightly smaller Lebesgue constants, making them preferable for uniform-norm minimization in [interpolation](/page/Interpolation): the Chebyshev constant grows as $(2/\pi) \log n + O(1)$, marginally below that of comparable Legendre-based grids, enhancing practical [stability](/page/Stability) for minimax error control.[](https://www.sciencedirect.com/science/article/pii/S0898122199002047)

### Numerical Integration

Clenshaw–Curtis quadrature is a [numerical integration](/page/Numerical_integration) method that employs the extrema of the Chebyshev [polynomial](/page/Polynomial) of the first kind, the points $x_k = \cos\left( \frac{k \pi}{n} \right)$ for $k = 0, 1, \dots, n$ on the interval $[-1, 1]$, to approximate the integral $\int_{-1}^{1} f(x) \, dx$. These points, known as Chebyshev-Lobatto points, cluster near the endpoints, providing [stability](/page/Stability) for high-degree approximations. The weights $w_k$ are obtained by integrating the Chebyshev expansion of the interpolant through the nodes and are computed in practice with a [discrete cosine transform](/page/Discrete_cosine_transform); they are all positive, and the resulting rule is exact for [polynomials](/page/Polynomial) of degree at most $n$. The method derives its efficiency from this connection to Chebyshev series expansions of the integrand: expressing $f(x)$ in terms of Chebyshev polynomials $T_j(x)$, the integral is approximated by integrating the truncated series term by term, using the unweighted moments $\int_{-1}^{1} T_j(x) \, dx$, which equal $2$ for $j = 0$, $0$ for odd $j$, and $\frac{2}{1-j^2}$ for even $j \geq 2$.
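This term-by-term integration of the Chebyshev expansion is essentially the whole algorithm. The sketch below (the integrand $f(x) = e^x$ and the size $n = 10$ are purely illustrative choices) forms the interpolant's coefficients at the extrema by one common form of the discrete cosine sums and then applies the moments just listed:

```
% Illustrative sketch of Clenshaw-Curtis quadrature as "integrate the
% Chebyshev expansion": coefficients of the interpolant at the extrema are
% formed by discrete cosine sums, then integrated with the moments above.
f = @(x) exp(x);                        % illustrative integrand
n = 10;                                 % illustrative size
k = 0:n;
x  = cos(k*pi/n);                       % Chebyshev-Lobatto points
fx = f(x);
c  = ones(1, n+1);  c([1, n+1]) = 2;    % halving factors for the end terms
I  = 0;
for j = 0:n
  aj = (2/(n*c(j+1))) * sum(fx .* cos(j*k*pi/n) ./ c);  % coefficient a_j of T_j
  if j == 0
    mu = 2;                             % integral of T_0 over [-1,1]
  elseif mod(j, 2) == 0
    mu = 2/(1 - j^2);                   % integral of T_j, j even
  else
    mu = 0;                             % odd-degree T_j integrate to zero
  end
  I = I + aj*mu;
end
disp(I - (exp(1) - exp(-1)))            % quadrature error (near machine precision here)
```

In practice the coefficient sums are evaluated with a fast cosine transform rather than the explicit loops shown here.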
Aliasing in the discrete cosine transform used to compute the coefficients tends to work in the method's favor, so Clenshaw–Curtis quadrature often achieves accuracy comparable to Gaussian quadrature despite using non-optimal nodes.[](https://people.maths.ox.ac.uk/trefethen/publication/PDF/2008_127.pdf) For analytic functions on $[-1, 1]$, Clenshaw–Curtis [quadrature](/page/Quadrature) exhibits geometric [convergence](/page/Convergence), with error bounds decaying as $O(\rho^{-n})$, where $\rho > 1$ depends on the distance from the interval to the nearest singularity in the [complex plane](/page/Complex_plane). For example, approximating $\int_{-1}^{1} e^x \, dx$ with $n = 10$ (11 points) yields an error below $10^{-6}$, demonstrating rapid [convergence](/page/Convergence) for entire functions.[](https://people.maths.ox.ac.uk/trefethen/publication/PDF/2008_127.pdf) Adaptations of Clenshaw–Curtis [quadrature](/page/Quadrature) extend to infinite or semi-infinite intervals through variable substitutions that map the domain to a bounded one, such as $x = \tan\theta$ for $(-\infty, \infty)$. This contrasts with Gauss–Chebyshev [quadrature](/page/Quadrature), which is tailored to the specific weight function $(1 - x^2)^{-1/2}$, uses the zeros of the Chebyshev polynomial, and requires the weight to be incorporated into the integrand, whereas Clenshaw–Curtis handles unweighted integrands directly after the substitution.[](https://www.sciencedirect.com/science/article/abs/pii/S0377042723003941)

## Variants

### Modified Even-Order Nodes

The concept of modified even-order Chebyshev nodes is not standardly defined in the primary literature on the topic. Standard Chebyshev nodes of the first kind exclude the endpoints for any $n$, while variants such as the Chebyshev-Lobatto points include them regardless of parity. For applications requiring endpoint inclusion with even $n$, Chebyshev-Lobatto points are typically used instead.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf)

### Chebyshev-Lobatto Nodes

Chebyshev-Lobatto nodes consist of $n+1$ points on the interval $[-1, 1]$ defined by the formula

$$ x_k = \cos\left( \frac{k \pi}{n} \right), \quad k = 0, 1, \dots, n. $$

These points are the extrema of the $n$th-degree Chebyshev polynomial of the first kind, $T_n(x)$, and are equivalently the roots of the polynomial equation $(1 - x^2)\, U_{n-1}(x) = 0$, where $U_{n-1}(x)$ denotes the $(n-1)$th-degree Chebyshev polynomial of the second kind.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf)[](https://www.math.purdue.edu/~shen7/sp_intro12/book.pdf) A primary advantage of Chebyshev-Lobatto nodes is their natural inclusion of the endpoints $x = \pm 1$, which simplifies the imposition of [boundary](/page/Boundary) conditions in numerical methods.
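A short numerical check (the degree $n = 6$ is an arbitrary illustrative choice) confirms the two characterizations just given: $T_n$ alternates between $\pm 1$ at these points, and the interior points are zeros of $U_{n-1}$:

```
% Illustrative sketch: Chebyshev-Lobatto points are the extrema of T_n and,
% together with +-1, the roots of (1 - x^2) U_{n-1}(x).
n = 6;                                     % arbitrary illustrative degree
k = 0:n;
x = cos(k*pi/n);                           % Lobatto points, including the endpoints
disp(max(abs(cos(n*acos(x)) - (-1).^k)))   % ~0: T_n(x_k) = (-1)^k
xi = x(2:n);                               % interior points (the 1 - x^2 factor covers +-1)
U  = sin(n*acos(xi)) ./ sin(acos(xi));     % U_{n-1}(x) = sin(n*theta)/sin(theta)
disp(max(abs(U)))                          % ~0: interior points are zeros of U_{n-1}
```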
This feature makes them particularly suitable for collocation-based [spectral](/page/Spectral) methods applied to [boundary](/page/Boundary) value problems in [ordinary](/page/Ordinary) [differential](/page/Differential) equations (ODEs) and partial differential equations (PDEs), where accurate resolution near the boundaries is essential.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf) In such contexts, the nodes enable efficient [polynomial interpolation](/page/Polynomial_interpolation) and derivative [approximation](/page/Approximation) while mitigating the Runge [phenomenon](/page/Phenomenon) through clustering near the endpoints.[](https://www.math.purdue.edu/~shen7/sp_intro12/book.pdf)

For numerical integration, Chebyshev-Lobatto nodes form the basis of the Clenshaw–Curtis [quadrature](/page/Quadrature) rule, which approximates $\int_{-1}^{1} f(x) \, dx$ by

$$ \int_{-1}^{1} f(x) \, dx \approx \sum_{k=0}^{n} w_k f(x_k), $$

where the weights admit the closed form

$$ w_k = \frac{\gamma_k}{n}\left( 1 - \sum_{j=1}^{\lfloor n/2 \rfloor} \frac{2\,\beta_j}{4j^2 - 1} \cos\left( \frac{2 j k \pi}{n} \right) \right), $$

with $\gamma_0 = \gamma_n = 1$ and $\gamma_k = 2$ otherwise, and $\beta_j = 1/2$ for $j = n/2$ (when $n$ is even) and $\beta_j = 1$ otherwise; in practice the weights are computed more efficiently via the [discrete cosine transform](/page/Discrete_cosine_transform). The rule is exact for polynomials of degree at most $n$ and offers near-optimal accuracy comparable to Gauss-Legendre quadrature for smooth integrands.[](https://epubs.siam.org/doi/10.1137/S0036144502415282) As an illustrative example, for $n=3$ (yielding 4 points) the nodes are $x_0 = 1$, $x_1 = \cos(\pi/3) = 0.5$, $x_2 = \cos(2\pi/3) = -0.5$, and $x_3 = -1$, with corresponding weights $w_0 = w_3 = 1/9 \approx 0.111$ and $w_1 = w_2 = 8/9 \approx 0.889$.[](https://epubs.siam.org/doi/10.1137/S0036144502415282)

In spectral element methods, Chebyshev-Lobatto nodes facilitate the construction of differentiation matrices for approximating spatial derivatives. The first-derivative matrix $D_n$ has off-diagonal entries

$$ (D_n)_{ij} = \frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j}, \quad i \neq j, $$

and diagonal entries

$$ (D_n)_{ii} = -\sum_{j \neq i} (D_n)_{ij}, $$

where $c_0 = c_n = 2$ and $c_i = 1$ for $i = 1, \dots, n-1$. For interior points the diagonal reduces to $(D_n)_{jj} = -\frac{x_j}{2(1 - x_j^2)}$, while the corner entries are $(D_n)_{00} = \frac{2n^2 + 1}{6}$ and $(D_n)_{nn} = -\frac{2n^2 + 1}{6}$. These matrices enable stable and efficient computation of derivatives in collocation schemes for PDEs, often leveraging fast Fourier transforms in their implementation.[](https://people.maths.ox.ac.uk/trefethen/8all.pdf)[](https://www.math.purdue.edu/~shen7/sp_intro12/book.pdf)
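As a sketch of how these entries are used in practice (the size $n = 8$ is an arbitrary illustrative choice), the matrix below is assembled from the off-diagonal formula, its diagonal is filled via the negative-sum identity, and it is then checked on a low-degree polynomial, which it must differentiate exactly:

```
% Illustrative sketch: assemble the first-derivative matrix on the
% Chebyshev-Lobatto points and verify it on a polynomial of low degree.
n = 8;                                    % arbitrary illustrative size
k = (0:n)';
x = cos(k*pi/n);                          % Lobatto points as a column vector
c = ones(n+1, 1);  c([1, n+1]) = 2;       % c_0 = c_n = 2, c_i = 1 otherwise
D = zeros(n+1);
for i = 1:n+1
  for j = 1:n+1
    if i ~= j
      % 1-based i,j keep the same parity as the 0-based formula above
      D(i,j) = (c(i)/c(j)) * (-1)^(i+j) / (x(i) - x(j));
    end
  end
end
D = D - diag(sum(D, 2));                  % diagonal from the negative-sum identity
disp(max(abs(D*(x.^3) - 3*x.^2)))         % ~0: exact for polynomials of degree <= n
```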

## References

  1. [1]
    [PDF] chebyshev interpolation - Math (Princeton)
    for k = 1,...,m are called Chebyshev nodes (of the first kind). They are roots of the degree m Chebyshev polynomial (of the first kind) defined by ...
  2. [2]
    Summary of Chebyshev Nodes from Barycentric Lagrange Interpolation
  3. [3]
    4. Chebfun and Approximation Theory
    The history of "Chebyshev technology" goes back to the 19th century ... Parker, Chebyshev Polynomials in Numerical Analysis, Oxford U. Press, 1968 ...
  4. [4]
    [PDF] The Chebyshev points of the first kind
    Jan 11, 2016 · Both kinds of points have been useful in many areas of numerical analysis and scientific computing, such as function approximation and spectral ...
  5. [5]
    Pafnuty Chebyshev (1821 - 1894) - Biography - MacTutor
    His work arose out of the theory of least squares approximation and probability; he applied his results to interpolation, approximate quadrature and other areas ...
  6. [6]
    [PDF] Pafnuty Chebyshev, Steam Engines, and Polynomials - OU Math
    In this paper Chebyshev describes a new method for approximating functions by polynomials, and gives a couple of results concerning Watt linkages that he says ...
  7. [7]
    AMS eBooks: Colloquium Publications
    The book by Szegő, originally published in 1939, is the first monograph devoted to the theory of orthogonal polynomials and its applications in many areas, ...
  8. [8]
    Chebyshev Polynomial of the First Kind -- from Wolfram MathWorld
    The Chebyshev polynomials of the first kind are a set of orthogonal polynomials defined as the solutions to the Chebyshev differential equation and denoted T_n ...
  9. [9]
    [PDF] Chapter 8. Chebyshev spectral methods - People
    Spectral methods on bounded domains typically employ grids consisting of zeros or extrema of Chebyshev polynomials (\Chebyshev points"), zeros or extrema of ...
  10. [10]
    [PDF] lecture 15: Chebyshev Polynomials for Optimal Interpolation
    a formula related to the three term recurrence used to construct orthogonal polynomials. In fact, Chebyshev polynomials are orthogonal polynomials on [1, 1] ...
  11. [11]
    [PDF] Chebyshev and Fourier Spectral Methods 2000
    Page 1. Chebyshev and Fourier Spectral Methods. Second Edition. John P. Boyd. University of Michigan. Ann Arbor, Michigan 48109-2143 email: jpboyd@engin.umich ...
  12. [12]
  13. [13]
    Evaluation of Lebesgue Constants - SIAM.org
    Asymptotic expressions of the form ( 2 / 𝜋 ) ⁢ l o g ⁡ 𝑛 + 𝑐 + 𝑟 𝑛 are investigated for the Lebesgue constants associated with interpolation at the ...
  14. [14]
    Lebesgue functions and Lebesgue constants in polynomial ...
    Mar 12, 2016 · where p n ∗ is the best polynomial approximation to f on [ − 1 , 1 ] , and therefore Λ n quantifies how much larger the interpolation error ‖ f ...
  15. [15]
    [PDF] interpolation.pdf - UMD MATH
    Tn+1 = n. Y k=0. (x − xk) of degree n + 1 has the smallest possible uniform (maximum) norm 2−n in [−1,1] among all polynomials of degree n + 1. I.e.,. 2. −n. = ...
  16. [16]
    [PDF] Twelve Proofs of the Markov Inequality 1 Introduction
    This is the story of the classical Markov inequality for the k-th deriva- tive of an algebraic polynomial, and of the remarkably many attempts to.
  17. [17]
    Barycentric Lagrange Interpolation | SIAM Review
    Barycentric interpolation is a variant of Lagrange polynomial interpolation ... A polynomial interpolation process at quasi-Chebyshev nodes with the FFT.
  18. [18]
    A numerical comparison of seven grids for polynomial interpolation ...
    Seven types of Chebyshev-like grids in one dimension are compared according to four different criteria for accuracy. The grid which minimizes the Lebesgue ...
  19. [19]
    [PDF] Is Gauss Quadrature Better than Clenshaw–Curtis? - People
    The idea of Clenshaw–Curtis quadrature is to use Chebyshev points instead of optimal nodes. There are three main variations on this theme (see [44] for others):
  20. [20]
    An automatic quadrature method for semi-infinite integrals of ...
    The method is an automatic quadrature scheme for semi-infinite integrals of exponentially decaying functions, using Clenshaw-Curtis rules, implemented in ...
  21. [21]
    [PDF] Spectral and High-Order Methods with Applications - Purdue Math
    Aug 1, 2012 · This book expands lecture notes by the authors for a course on Introduction of Spectral Methods taught in the past few years at Penn State ...
  22. [22]
    [PDF] Is Gauss Quadrature Better than Clenshaw–Curtis? - People
    Here we shall compare it with Clenshaw–Curtis quadrature, a family of formulas based on sampling the integrand at Chebyshev points that can be implemented in O( ...