Chebyshev polynomials
Chebyshev polynomials are a class of orthogonal polynomials introduced by the Russian mathematician Pafnuty Lvovich Chebyshev in his 1854 paper "Théorie des mécanismes connus sous le nom de parallélogrammes," where they arose in the context of studying mechanisms and approximation problems.[1] Defined on the interval [-1, 1], they consist of two main families: the polynomials of the first kind, denoted T_n(x), given by T_n(x) = \cos(n \arccos x) for x \in [-1, 1], and the polynomials of the second kind, U_n(x), defined similarly via \sin((n+1) \arccos x) / \sin(\arccos x).[2] These polynomials satisfy a three-term recurrence relation, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x) with T_0(x) = 1 and T_1(x) = x, and are orthogonal with respect to the weight function 1/\sqrt{1 - x^2} for T_n[3] and \sqrt{1 - x^2} for U_n.[4] A defining feature of Chebyshev polynomials is their minimax property: among all monic polynomials of degree n, T_n(x)/2^{n-1} has the smallest possible maximum norm on [-1, 1], equaling 1/2^{n-1}, which makes them ideal for uniform approximation and error minimization in numerical methods.[5] Their roots and extrema are explicitly known—the roots of T_n(x) are at x_k = \cos((2k-1)\pi / (2n)) for k = 1, \dots, n, all real and distinct in (-1, 1)—facilitating applications in interpolation, quadrature, and spectral analysis.[2] Chebyshev polynomials also appear in generating functions, such as \sum_{n=0}^\infty T_n(x) t^n = (1 - t x)/(1 - 2 t x + t^2), and have connections to trigonometric identities via de Moivre's theorem.[2] In applied mathematics, Chebyshev polynomials underpin techniques like Chebyshev spectral methods for solving differential equations, where they enable efficient discretization on non-uniform grids via the Chebyshev-Gauss quadrature, and in filter design for signal processing due to their oscillatory behavior.[5] Their role in approximation theory extends to modern computational tools, such as the Chebfun software system, which leverages them for numerical computations with functions represented as Chebyshev series.[6] Chebyshev's foundational work on these polynomials not only advanced mechanism theory but also laid the groundwork for the St. Petersburg school of mathematics, influencing probability, number theory, and numerical analysis.[1]
Definitions
Recurrence relations
The Chebyshev polynomials of the first kind T_n(x) are defined by the three-term linear recurrence relation T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), \quad n \geq 1, with initial conditions T_0(x) = 1 and T_1(x) = x.[7] Applying the recurrence yields, for instance, T_2(x) = 2x^2 - 1.[7] The Chebyshev polynomials of the second kind U_n(x) satisfy an analogous recurrence U_{n+1}(x) = 2x U_n(x) - U_{n-1}(x), \quad n \geq 1, with initial conditions U_0(x) = 1 and U_1(x) = 2x.[7] For n=2, this gives U_2(x) = 4x^2 - 1.[7] These relations generate the polynomials iteratively: starting from the degree-zero and degree-one terms, each subsequent polynomial is obtained via a simple linear combination of the two predecessors, requiring only a fixed number of arithmetic operations per step.[8] This approach is computationally efficient, achieving evaluation of T_n(x) or U_n(x) in O(n) time and exhibiting numerical stability when x \in [-1, 1].[9]
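As a concrete illustration of this iterative scheme, the following minimal Python sketch evaluates T_n(x) and U_n(x) in O(n) arithmetic operations; the function names chebyshev_T and chebyshev_U are illustrative and not taken from any library.

```python
from math import isclose

def chebyshev_T(n, x):
    """Evaluate T_n(x) by the recurrence T_{k+1} = 2x T_k - T_{k-1}, with T_0 = 1, T_1 = x."""
    if n == 0:
        return 1.0
    t_prev, t_curr = 1.0, x
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

def chebyshev_U(n, x):
    """Evaluate U_n(x) by the same recurrence with U_0 = 1, U_1 = 2x."""
    if n == 0:
        return 1.0
    u_prev, u_curr = 1.0, 2 * x
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, 2 * x * u_curr - u_prev
    return u_curr

# Spot checks against the degree-2 examples quoted above.
assert isclose(chebyshev_T(2, 0.3), 2 * 0.3**2 - 1)
assert isclose(chebyshev_U(2, 0.3), 4 * 0.3**2 - 1)
```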
Trigonometric identities
Chebyshev polynomials of the first kind, denoted T_n(x), are fundamentally defined through their trigonometric representation as T_n(x) = \cos(n \arccos x) for x \in [-1, 1].[3] This definition establishes a direct connection to multiple-angle formulas for the cosine function, originating from the work of Pafnuty Chebyshev in the 19th century on trigonometric series and approximation theory.[10] For instance, the double-angle formula \cos(2\theta) = 2\cos^2(\theta) - 1 yields T_2(x) = 2x^2 - 1 upon substituting x = \cos \theta.[11] Higher-degree polynomials follow similarly from iterative applications of multiple-angle identities, such as \cos(3\theta) = 4\cos^3(\theta) - 3\cos(\theta), giving T_3(x) = 4x^3 - 3x.[11] The trigonometric form extends beyond the interval [-1, 1] via analytic continuation; for real x > 1, T_n(x) = \cosh(n \arccosh x), and for x < -1, T_n(x) = (-1)^n \cosh(n \arccosh(-x)).[3] This hyperbolic representation maintains the polynomial's properties while allowing evaluation outside the real unit interval. Chebyshev polynomials of the second kind, denoted U_n(x), are similarly expressed trigonometrically as U_n(x) = \frac{\sin((n+1) \arccos x)}{\sin(\arccos x)} for x \in [-1, 1], excluding points where the denominator vanishes.[4] This form arises from sine multiple-angle formulas, paralleling the cosine basis for the first kind. At the endpoints of the interval, these polynomials take specific values: T_n(1) = 1 and T_n(-1) = (-1)^n for all n; likewise, U_n(1) = n+1 and U_n(-1) = (-1)^n (n+1).[3][4] These boundary behaviors underscore their oscillatory nature within [-1, 1], mimicking cosine and sine waves scaled to the interval.
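A minimal numerical check of the trigonometric definition and the endpoint values, using NumPy; the helper T below is ad hoc rather than a library routine.

```python
import numpy as np

def T(n, x):
    # Evaluate T_n on [-1, 1] directly from the trigonometric definition.
    return np.cos(n * np.arccos(x))

theta = np.linspace(0.0, np.pi, 7)
x = np.cos(theta)

# T_3(x) should coincide with the cubic 4x^3 - 3x everywhere on [-1, 1].
assert np.allclose(T(3, x), 4 * x**3 - 3 * x)

# Endpoint values: T_n(1) = 1 and T_n(-1) = (-1)^n.
for n in range(6):
    assert np.isclose(T(n, 1.0), 1.0)
    assert np.isclose(T(n, -1.0), (-1.0) ** n)
```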
Generating functions
The generating functions for Chebyshev polynomials offer a formal power series representation that encapsulates the entire sequence in a closed rational form, aiding in coefficient analysis and links to broader analytic structures. For the first kind, the generating function is \sum_{n=0}^{\infty} T_n(x) t^n = \frac{1 - t x}{1 - 2 t x + t^2}, which converges for |t| < 1 when |x| \leq 1, and for |t| < \frac{1}{|x| + \sqrt{x^2 - 1}} when |x| > 1 (assuming x real; the case x < -1 is analogous). For the second kind, it is \sum_{n=0}^{\infty} U_n(x) t^n = \frac{1}{1 - 2 t x + t^2}, with the same convergence conditions. These functions derive from the trigonometric characterizations of the polynomials. With x = \cos \theta and \theta \in [0, \pi], T_n(x) = \cos(n \theta), so the series becomes \sum_{n=0}^{\infty} T_n(x) t^n = \sum_{n=0}^{\infty} t^n \cos(n \theta). This sums via the formula for the real part of a geometric series: \sum_{n=0}^{\infty} r^n e^{i n \theta} = 1 / (1 - r e^{i \theta}) for r = t and |t| < 1, yielding the rational expression after taking the real component and simplifying.[8] A parallel derivation holds for U_n(x) = \sin((n+1) \theta) / \sin \theta, where the series becomes \sum_{n=0}^{\infty} t^n \sin((n+1) \theta) / \sin \theta; the numerator sums to \sin \theta / (1 - 2 t \cos \theta + t^2) via the imaginary part of the geometric series, and the overall form simplifies to the reciprocal of the denominator.[8] The common denominator 1 - 2 t x + t^2 factors as (1 - t e^{i \theta})(1 - t e^{-i \theta}) under x = \cos \theta, explicitly connecting it to products of geometric series terms in the complex plane.[3] Such generating functions allow extraction of individual coefficients through the Cauchy integral formula, for instance, T_n(x) = \frac{1}{2\pi i} \oint \frac{1 - t x}{(1 - 2 t x + t^2) t^{n+1}} \, dt over a contour encircling the origin within the radius of convergence. They further support series manipulations to derive identities and underpin proofs of orthogonality via integral representations.
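The coefficient extraction can be checked symbolically for small degrees. The sketch below assumes SymPy and its built-in chebyshevt, and compares the Taylor coefficients of the rational generating function with T_n(x).

```python
import sympy as sp

t, x = sp.symbols('t x')
N = 6

# Taylor-expand the closed-form generating function in t; the coefficient
# of t^n should be the polynomial T_n(x).
G = (1 - t * x) / (1 - 2 * t * x + t**2)
series = sp.expand(sp.series(G, t, 0, N).removeO())

for n in range(N):
    assert sp.expand(series.coeff(t, n) - sp.chebyshevt(n, x)) == 0, n
```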
Explicit expressions
Closed-form formulas
Chebyshev polynomials of the first kind, T_n(x), admit an explicit summation formula as a polynomial of degree n: T_n(x) = \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k \binom{n}{2k} (1 - x^2)^k x^{n - 2k}. This expression arises from the binomial expansion underlying the trigonometric identity T_n(\cos \theta) = \cos(n\theta).[3] Similarly, Chebyshev polynomials of the second kind, U_n(x), have the explicit form U_n(x) = \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k \binom{n - k}{k} (2x)^{n - 2k}, which is also a polynomial of degree n.[4] Both T_n(x) and U_n(x) can be expressed using the hypergeometric function {}_2F_1. Specifically, T_n(x) = {}_2F_1\left(-n, n; \frac{1}{2}; \frac{1 - x}{2}\right), and U_n(x) = (n + 1) {}_2F_1\left(-n, n + 2; \frac{3}{2}; \frac{1 - x}{2}\right). These representations highlight their connections to classical special functions.[3][4] The leading coefficient of T_n(x) is 2^{n-1} for n \geq 1 (with T_0(x) = 1), while for U_n(x) it is 2^n.[3][4]
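The two summation formulas can be verified directly against the familiar degree-5 closed forms; the helper functions below are illustrative, not library routines.

```python
from math import comb, isclose

def T_sum(n, x):
    # Explicit summation formula for T_n(x) quoted above.
    return sum((-1)**k * comb(n, 2 * k) * (1 - x * x)**k * x**(n - 2 * k)
               for k in range(n // 2 + 1))

def U_sum(n, x):
    # Explicit summation formula for U_n(x) quoted above.
    return sum((-1)**k * comb(n - k, k) * (2 * x)**(n - 2 * k)
               for k in range(n // 2 + 1))

x = 0.7
assert isclose(T_sum(5, x), 16 * x**5 - 20 * x**3 + 5 * x)
assert isclose(U_sum(5, x), 32 * x**5 - 32 * x**3 + 6 * x)
```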
Rodrigues-type formulas
The Rodrigues-type formulas provide differential representations for Chebyshev polynomials, analogous to Rodrigues' formula for other classical orthogonal polynomials. These expressions are particularly valuable in theoretical contexts, such as deriving properties through repeated differentiation and integration by parts, and they arise naturally from the Sturm-Liouville theory associated with the Chebyshev differential equation. In this framework, the polynomials are generated as solutions to a self-adjoint boundary value problem on the interval [-1, 1], where the weight function and the form of the operator lead to these explicit derivative-based definitions.[12] For the Chebyshev polynomials of the first kind T_n(x), the Rodrigues-type formula is T_n(x) = (-1)^n \sqrt{1 - x^2} \frac{1}{(2n-1)!!} \frac{d^n}{dx^n} (1 - x^2)^{n - 1/2}, where (2n-1)!! = 1 \cdot 3 \cdot 5 \cdots (2n-1) denotes the double factorial, equivalent to \frac{(2n)!}{2^n n!}. This representation highlights the role of the weight function w(x) = (1 - x^2)^{-1/2} in the orthogonality integral, as the formula can be viewed as T_n(x) = c_n w(x)^{-1} \frac{d^n}{dx^n} \left[ w(x) (1 - x^2)^n \right] with an appropriate normalization constant c_n.[13] For the Chebyshev polynomials of the second kind U_n(x), the corresponding formula is U_n(x) = (-1)^n \frac{2^n (n+1)!}{(2n+1)!} (1 - x^2)^{-1/2} \frac{d^n}{dx^n} (1 - x^2)^{n + 1/2}. Here, the inverse weight (1 - x^2)^{-1/2} appears explicitly, reflecting the orthogonality with respect to w(x) = \sqrt{1 - x^2}. This form facilitates generalizations and connections to other polynomial families, as it stems from the special case \alpha = \beta = 1/2 of the Jacobi polynomials via the relation U_n(x) = \frac{2^{2n}}{\binom{2n+1}{n}} P_n^{(1/2, 1/2)}(x).[13] These formulas are instrumental in Sturm-Liouville theory, where the Chebyshev polynomials emerge as eigenfunctions of singular differential operators of the form \frac{d}{dx} \left[ p(x) \frac{d}{dx} \right] + q(x) + \lambda w(x), with p(x) = (1 - x^2)^{1/2} for the first kind and p(x) = (1 - x^2)^{3/2} for the second; the Rodrigues representation allows direct construction of the eigenfunctions without solving the differential equation explicitly.[12] Furthermore, the Chebyshev Rodrigues formulas are specializations of the Jacobi Rodrigues formula at the parameter values \alpha = \beta = \mp 1/2, just as the Legendre case corresponds to \alpha = \beta = 0, providing a unified pathway for asymptotic analysis and uniform approximations across the classical families.
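As a sanity check of the first-kind Rodrigues-type formula, the following SymPy sketch (assuming SymPy's chebyshevt and factorial2) carries out the n-fold differentiation symbolically for small n.

```python
import sympy as sp

x = sp.symbols('x')

def T_rodrigues(n):
    # Rodrigues-type formula quoted above:
    # T_n(x) = (-1)^n * sqrt(1 - x^2) / (2n - 1)!! * d^n/dx^n (1 - x^2)^(n - 1/2)
    expr = sp.diff((1 - x**2)**(sp.Rational(2 * n - 1, 2)), x, n)
    return sp.simplify((-1)**n * sp.sqrt(1 - x**2) / sp.factorial2(2 * n - 1) * expr)

for n in range(1, 5):
    assert sp.simplify(T_rodrigues(n) - sp.chebyshevt(n, x)) == 0, n
```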
Relations between kinds
Connections via identities
Chebyshev polynomials of the first kind, T_n(x), and the second kind, U_n(x), are interconnected through several fundamental identities that highlight their unified role in approximation theory and orthogonal polynomial systems. Pafnuty Chebyshev originally developed these polynomials in the mid-19th century as part of his investigations into the best uniform approximation of functions by polynomials, particularly in the context of minimizing maximum deviations on the interval [-1, 1]. His work, such as the 1854 paper "Théorie des mécanismes connus sous le nom de parallélogrammes," demonstrated how these polynomials arise naturally from trigonometric substitutions and linkages in mechanisms, providing a basis for their mutual relations that facilitate efficient computational methods in numerical analysis.[5] A key linking identity is the product relation (1 - x^2) U_{n-1}(x) = x T_n(x) - T_{n+1}(x), which expresses the second-kind polynomial in terms of consecutive first-kind polynomials and underscores their shared oscillatory behavior on [-1, 1]. This formula can be derived from the trigonometric definitions T_n(x) = \cos(n \theta) and U_n(x) = \frac{\sin((n+1) \theta)}{\sin \theta} where x = \cos \theta, by manipulating sine and cosine multiple-angle expressions. The derivative relation T_n'(x) = n U_{n-1}(x) provides another direct connection, showing that the derivative of a first-kind polynomial is a scalar multiple of a second-kind polynomial of one lower degree; this is particularly useful in spectral methods and quadrature rules, as it preserves orthogonality properties under differentiation. A reciprocal form is U_n(x) = \frac{1}{n+1} T_{n+1}'(x), which inverts the derivative relation.
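These linking identities can be confirmed symbolically for small degrees; the sketch below assumes SymPy's chebyshevt and chebyshevu helpers.

```python
import sympy as sp

x = sp.symbols('x')

for n in range(1, 6):
    Tn   = sp.chebyshevt(n, x)
    Tnp1 = sp.chebyshevt(n + 1, x)
    Unm1 = sp.chebyshevu(n - 1, x)

    # (1 - x^2) U_{n-1}(x) = x T_n(x) - T_{n+1}(x)
    assert sp.expand((1 - x**2) * Unm1 - (x * Tn - Tnp1)) == 0
    # T_n'(x) = n U_{n-1}(x)
    assert sp.expand(sp.diff(Tn, x) - n * Unm1) == 0
    # U_n(x) = T_{n+1}'(x) / (n + 1)
    assert sp.expand(sp.chebyshevu(n, x) - sp.diff(Tnp1, x) / (n + 1)) == 0
```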
Differences in degree and leading coefficients
The Chebyshev polynomials of the first kind T_n(x) and second kind U_n(x) are both polynomials of exact degree n for each nonnegative integer n.[Rivlin, T.J. (1974). The Chebyshev Polynomials: From Approximation Theory to Algebra and Number Theory. John Wiley & Sons.] For n \geq 1, the leading coefficient of T_n(x) is 2^{n-1}, whereas that of U_n(x) is 2^n.[Rivlin 1974] These differences in leading coefficients imply that the monic version of U_n(x) is simply U_n(x)/2^n, while the monic version of T_n(x) requires division by 2^{n-1} for n \geq 1.[Rivlin 1974] A key distinction in normalization arises from their behavior on the interval [-1, 1]. The polynomials T_n(x) are bounded, with |T_n(x)| \leq 1 for all x \in [-1, 1], equioscillating between -1 and 1.[Rivlin 1974] In contrast, U_n(x) grows linearly with n near the endpoints, attaining the values U_n(1) = n+1 and U_n(-1) = (-1)^n (n+1).[Rivlin 1974] For T_n(x), the endpoint values are T_n(1) = 1 and T_n(-1) = (-1)^n.[Rivlin 1974] These endpoint behaviors are illustrated in the following table for small values of n:
| n | T_n(1) | T_n(-1) | U_n(1) | U_n(-1) |
|---|---|---|---|---|
| 0 | 1 | 1 | 1 | 1 |
| 1 | 1 | -1 | 2 | -2 |
| 2 | 1 | 1 | 3 | 3 |
| 3 | 1 | -1 | 4 | -4 |
| 4 | 1 | 1 | 5 | 5 |
Fundamental properties
Symmetry and parity
Chebyshev polynomials of the first kind T_n(x) exhibit definite parity under the reflection x \to -x, satisfying T_n(-x) = (-1)^n T_n(x). Thus, for even n, T_n(x) is an even function, while for odd n, it is an odd function. The same parity relation holds for the Chebyshev polynomials of the second kind, U_n(-x) = (-1)^n U_n(x), making U_n(x) even when n is even and odd when n is odd.[14][15] This reflection symmetry implies that polynomials of even degree T_{2n}(x) and U_{2n}(x) are even functions, whereas those of odd degree T_{2n+1}(x) and U_{2n+1}(x) are odd functions. Reduction formulas exploit this structure to express higher-degree polynomials in terms of lower-degree ones. For instance, the even-degree case yields T_{2n}(x) = 2 T_n^2(x) - 1, derived from the double-angle identity \cos(2\theta) = 2\cos^2(\theta) - 1 with x = \cos \theta. An analogous relation for the second kind is U_{2n+1}(x) = 2 T_{n+1}(x) U_n(x), which follows from the sine double-angle identity; such formulas facilitate computations and decompositions into even and odd components.[15][16] The parity properties have practical implications for integration over symmetric intervals such as [-1, 1]. For odd n, since T_n(x) and U_n(x) are odd functions, their integrals over this interval vanish: \int_{-1}^{1} T_n(x) \, dx = 0 and \int_{-1}^{1} U_n(x) \, dx = 0. This simplifies evaluations involving even weight functions or symmetric integrands, as the odd parts contribute nothing.[17] Graphically, these polynomials display symmetric oscillations around the x-axis within [-1, 1], with even-degree instances mirroring across the y-axis and odd-degree ones antisymmetric, underscoring their utility in approximation theory for symmetric domains.[18]
Roots and extrema
The Chebyshev polynomials of the first kind, T_n(x), possess n distinct real roots within the open interval (-1, 1), explicitly given by x_k = \cos\left( \frac{(2k-1)\pi}{2n} \right), \quad k = 1, 2, \dots, n. These roots stem from the trigonometric representation T_n(\cos \theta) = \cos(n\theta), where the zeros correspond to odd multiples of \pi/(2n) in the argument \theta. In addition to these roots, T_n(x) exhibits n-1 interior extrema on (-1, 1), with the full set of n+1 extremal points (including endpoints) located at x_k = \cos\left( \frac{k\pi}{n} \right), \quad k = 0, 1, \dots, n. At these points, T_n(x) alternates between maximum and minimum values of +1 and -1, demonstrating equioscillation across the interval [-1, 1].[3] The Chebyshev polynomials of the second kind, U_n(x), similarly have n distinct real roots in (-1, 1), located at x_k = \cos\left( \frac{k\pi}{n+1} \right), \quad k = 1, 2, \dots, n. These polynomials have n-1 interior critical points on (-1, 1), one between each pair of consecutive roots; unlike T_n(x), however, U_n(x) does not equioscillate, and its magnitude grows toward the endpoints, reaching |U_n(\pm 1)| = n+1. A notable interlacing property holds between the two kinds: the n-1 roots of U_{n-1}(x) strictly interlace the n roots of T_n(x) within (-1, 1), meaning each root of U_{n-1} lies between consecutive roots of T_n. This follows from the three-term recurrence relations satisfied by both families.
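A short NumPy check of the root and extremum formulas and of the interlacing property, using numpy.polynomial.chebyshev (which provides T_n directly but not U_n):

```python
import numpy as np

n = 6

# Roots of T_n from the closed-form cosine expression...
k = np.arange(1, n + 1)
t_roots = np.sort(np.cos((2 * k - 1) * np.pi / (2 * n)))

# ...compared against the roots computed numerically from T_n itself.
Tn = np.polynomial.chebyshev.Chebyshev.basis(n)   # T_n in the Chebyshev basis
assert np.allclose(t_roots, np.sort(Tn.roots()))

# Extrema of T_n: T_n(cos(k*pi/n)) alternates between +1 and -1.
j = np.arange(n + 1)
extrema = np.cos(j * np.pi / n)
assert np.allclose(Tn(extrema), (-1.0) ** j)

# Interlacing: each root of U_{n-1}, i.e. cos(k*pi/n) for k = 1..n-1,
# lies strictly between consecutive roots of T_n.
u_roots = np.sort(np.cos(np.arange(1, n) * np.pi / n))
assert np.all((u_roots > t_roots[:-1]) & (u_roots < t_roots[1:]))
```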
Orthogonality relations
The Chebyshev polynomials of the first kind T_n(x) satisfy the orthogonality relation \int_{-1}^{1} \frac{T_m(x) T_n(x)}{\sqrt{1 - x^2}} \, dx = \begin{cases} 0 & m \neq n, \\ \pi & m = n = 0, \\ \pi/2 & m = n > 0, \end{cases} with respect to the weight function w(x) = (1 - x^2)^{-1/2} on the interval [-1, 1].[12] The corresponding squared L^2-norms are \|T_0\|^2 = \pi and \|T_n\|^2 = \pi/2 for n > 0.[12] Similarly, the Chebyshev polynomials of the second kind U_n(x) are orthogonal with respect to the weight function w(x) = \sqrt{1 - x^2}, satisfying \int_{-1}^{1} U_m(x) U_n(x) \sqrt{1 - x^2} \, dx = \begin{cases} 0 & m \neq n, \\ \pi/2 & m = n. \end{cases} The squared norms are \|U_n\|^2 = \pi/2 for all n \geq 0.[12] These relations can be proved using the trigonometric substitution x = \cos \theta, where \theta \in [0, \pi]. For T_n(x), this yields T_n(\cos \theta) = \cos(n\theta) and dx / \sqrt{1 - x^2} = -d\theta, transforming the integral into \int_0^\pi \cos(m\theta) \cos(n\theta) \, d\theta, which evaluates to the stated values via standard trigonometric orthogonality.[12] For U_n(x), U_n(\cos \theta) = \sin((n+1)\theta) / \sin \theta and \sqrt{1 - x^2} \, dx = \sin \theta \, d\theta, leading to \int_0^\pi \sin((m+1)\theta) \sin((n+1)\theta) \, d\theta = (\pi/2) \delta_{mn}.[12] Discrete variants of these orthogonality relations exist, where the polynomials are orthogonal with respect to summation over Chebyshev-Gauss or Chebyshev-Lobatto points, providing the foundation for efficient numerical quadrature rules such as Gauss-Chebyshev quadrature.[19]
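The discrete variant can be illustrated with Gauss-Chebyshev quadrature, which integrates a polynomial of degree up to 2N-1 against the weight (1 - x^2)^{-1/2} exactly from N Chebyshev nodes. The sketch below uses NumPy and an ad hoc helper T.

```python
import numpy as np

# Gauss-Chebyshev quadrature: the integral of f(x)/sqrt(1-x^2) over [-1,1]
# equals (pi/N) * sum of f at the N Chebyshev nodes, exactly for polynomial f
# of degree at most 2N - 1.
N = 32
k = np.arange(1, N + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * N))

def T(n, x):
    return np.cos(n * np.arccos(x))

def inner_product(m, n):
    return (np.pi / N) * np.sum(T(m, nodes) * T(n, nodes))

assert np.isclose(inner_product(0, 0), np.pi)            # ||T_0||^2 = pi
assert np.isclose(inner_product(4, 4), np.pi / 2)        # ||T_n||^2 = pi/2 for n > 0
assert np.isclose(inner_product(3, 5), 0.0, atol=1e-12)  # orthogonality for m != n
```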
Calculus and algebraic operations
Differentiation rules
The differentiation of Chebyshev polynomials follows directly from their trigonometric definitions. For the polynomials of the first kind, let x = \cos \theta, so T_n(x) = \cos (n \theta). Differentiating with respect to x gives T_n'(x) = n \frac{\sin (n \theta )}{\sin \theta}, where the chain rule has been applied since \frac{d\theta}{dx} = -\frac{1}{\sin \theta}.[20] The expression \frac{\sin (n \theta )}{\sin \theta} is precisely the definition of the Chebyshev polynomial of the second kind U_{n-1}(x), yielding the fundamental relation T_n'(x) = n U_{n-1}(x). For the polynomials of the second kind, defined by U_n(x) = \frac{\sin ((n+1) \theta )}{\sin \theta}, differentiation again uses the chain rule and trigonometric identities, but the resulting expression is more conveniently handled via the recurrence relation satisfied by U_n: U_n(x) = 2x U_{n-1}(x) - U_{n-2}(x), with U_0(x) = 1 and U_1(x) = 2x. Differentiating this recurrence term by term produces a recurrence for the derivatives: U_n'(x) = 2 U_{n-1}(x) + 2x U_{n-1}'(x) - U_{n-2}'(x), with base cases U_0'(x) = 0 and U_1'(x) = 2. This allows computation of U_n'(x) iteratively for any n. Higher-order derivatives of T_n(x) can be obtained by repeated application of the basic rule T_n'(x) = n U_{n-1}(x) together with the recurrences for U_m; a closed form for the k-th derivative, valid for 1 \leq k \leq n, is expressed through the Gegenbauer (ultraspherical) polynomials C_m^{(\lambda)}: T_n^{(k)}(x) = 2^{k-1} n (k-1)! \, C_{n-k}^{(k)}(x). For k = 1 this reduces to T_n'(x) = n U_{n-1}(x), since C_m^{(1)}(x) = U_m(x). These rules facilitate integration by parts in proofs of the orthogonality relations for both kinds of Chebyshev polynomials over [-1, 1].
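The derivative relations, including the k-th derivative formula above, can be verified symbolically; the sketch assumes SymPy's chebyshevt, chebyshevu, and gegenbauer helpers.

```python
import sympy as sp

x = sp.symbols('x')

# First derivative: T_n'(x) = n U_{n-1}(x).
for n in range(1, 6):
    assert sp.expand(sp.diff(sp.chebyshevt(n, x), x) - n * sp.chebyshevu(n - 1, x)) == 0

# k-th derivative via Gegenbauer polynomials:
# T_n^{(k)}(x) = 2^{k-1} * n * (k-1)! * C_{n-k}^{(k)}(x), for 1 <= k <= n.
for n in range(1, 7):
    for k in range(1, n + 1):
        lhs = sp.diff(sp.chebyshevt(n, x), x, k)
        rhs = 2**(k - 1) * n * sp.factorial(k - 1) * sp.gegenbauer(n - k, k, x)
        assert sp.expand(lhs - rhs) == 0, (n, k)
```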
Integration formulas
The indefinite integral of the Chebyshev polynomial of the first kind T_n(x) for n \geq 2 is \int T_n(x) \, dx = \frac{1}{2} \left[ \frac{T_{n+1}(x)}{n+1} - \frac{T_{n-1}(x)}{n-1} \right] + C. For n = 0, where T_0(x) = 1, the integral simplifies to \int T_0(x) \, dx = x + C = T_1(x) + C, and for n = 1 it is \int T_1(x) \, dx = \frac{1}{4} T_2(x) + C.[21] The indefinite integral of the Chebyshev polynomial of the second kind U_n(x) follows directly from the derivative relation T_{n+1}'(x) = (n+1) U_n(x): \int U_n(x) \, dx = \frac{T_{n+1}(x)}{n+1} + C. These expressions follow from the recurrence relations and trigonometric representations of the polynomials. A key definite integral involving T_n(x) is \int_{-1}^{1} \frac{T_n(x)}{\sqrt{1 - x^2}} \, dx = \begin{cases} \pi & \text{if } n = 0, \\ 0 & \text{if } n > 0. \end{cases} This result arises from the substitution x = \cos \theta, which transforms the integral into \int_0^\pi \cos(n \theta) \, d\theta, evaluating to zero for positive integers n and to \pi for n = 0. The substitution also connects the integral to the inverse sine function, since \int \frac{1}{\sqrt{1 - x^2}} \, dx = \arcsin x + C. For products of Chebyshev polynomials, reduction formulas can be derived using integration by parts on the weighted inner product \int_{-1}^1 T_m(x) T_n(x) (1 - x^2)^{-1/2} \, dx. Specifically, leveraging the derivative relation T_n'(x) = n U_{n-1}(x) and the orthogonality weight, integration by parts yields recurrences that reduce the degrees m and n, facilitating computation of non-orthogonal cases.
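A quick SymPy check that differentiating the stated antiderivatives recovers T_n and U_n:

```python
import sympy as sp

x = sp.symbols('x')

for n in range(2, 7):
    # d/dx of (1/2)[T_{n+1}/(n+1) - T_{n-1}/(n-1)] should give back T_n.
    antideriv = sp.Rational(1, 2) * (sp.chebyshevt(n + 1, x) / (n + 1)
                                     - sp.chebyshevt(n - 1, x) / (n - 1))
    assert sp.expand(sp.diff(antideriv, x) - sp.chebyshevt(n, x)) == 0

for n in range(6):
    # d/dx of T_{n+1}/(n+1) should give back U_n.
    assert sp.expand(sp.diff(sp.chebyshevt(n + 1, x) / (n + 1), x)
                     - sp.chebyshevu(n, x)) == 0
```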
Product identities
The product of two Chebyshev polynomials of the first kind admits a simple expression as a linear combination of other such polynomials: T_m(x) T_n(x) = \frac{1}{2} \left[ T_{m+n}(x) + T_{|m-n|}(x) \right], valid for nonnegative integers m and n, where T_0(x) = 1. This identity follows directly from the trigonometric representation T_k(\cos \theta) = \cos(k \theta) and the product-to-sum formula for cosines. As a consequence, the addition formula for the first kind is obtained by rearranging: T_{m+n}(x) = 2 T_m(x) T_n(x) - T_{|m-n|}(x). For example, taking m = n = 2, T_2(x) T_2(x) = \frac{1}{2} \left[ T_4(x) + T_0(x) \right] = 4x^4 - 4x^2 + 1, which matches the direct expansion of (2x^2 - 1)^2. This product identity facilitates the multiplication of functions via their Chebyshev series expansions, as the product corresponds to a discrete convolution that can be efficiently computed. For Chebyshev polynomials of the second kind, the corresponding addition formula is U_{m+n}(x) = U_m(x) U_n(x) - U_{m-1}(x) U_{n-1}(x), assuming m, n \geq 1, with U_{-1}(x) = 0. This relation arises from the trigonometric definition U_k(\cos \theta) = \sin((k+1) \theta) / \sin \theta and the angle addition formula for sines. A direct product-to-sum identity for U_m U_n is more involved but can be derived similarly using sine product formulas, often resulting in expressions involving both U_k and terms adjusted by powers of (1 - x^2). These identities underpin applications in spectral methods and orthogonal expansions where products appear.[22]
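A symbolic verification of both product/addition formulas for small degrees, assuming SymPy's chebyshevt and chebyshevu:

```python
import sympy as sp

x = sp.symbols('x')

for m in range(1, 5):
    for n in range(1, 5):
        # T_m T_n = (T_{m+n} + T_{|m-n|}) / 2
        lhs = sp.chebyshevt(m, x) * sp.chebyshevt(n, x)
        rhs = sp.Rational(1, 2) * (sp.chebyshevt(m + n, x) + sp.chebyshevt(abs(m - n), x))
        assert sp.expand(lhs - rhs) == 0

        # U_{m+n} = U_m U_n - U_{m-1} U_{n-1}
        assert sp.expand(sp.chebyshevu(m + n, x)
                         - (sp.chebyshevu(m, x) * sp.chebyshevu(n, x)
                            - sp.chebyshevu(m - 1, x) * sp.chebyshevu(n - 1, x))) == 0
```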
Composition properties
Chebyshev polynomials of the first kind satisfy a simple and elegant composition rule: for nonnegative integers m and n, T_m(T_n(x)) = T_{mn}(x). This identity holds because both sides equal \cos(m n \arccos x) under the standard trigonometric definition T_k(x) = \cos(k \arccos x). The operation is commutative, so T_n(T_m(x)) = T_{mn}(x) as well. As a consequence of this substitution property, the polynomial T_n(x) divides T_{kn}(x) in the ring of polynomials with rational coefficients for any odd positive integer k: for odd k, the polynomial T_k(y) has zero constant term, so T_{kn}(x) = T_k(T_n(x)) equals T_n(x) times a polynomial in T_n(x). For Chebyshev polynomials of the second kind, the composition U_n(T_m(x)) does not yield a single U_k(x), but a related product identity provides a useful functional relation: for positive integers m and n, U_{mn-1}(x) = U_{m-1}(T_n(x)) \, U_{n-1}(x). This follows from the trigonometric forms U_k(\cos \theta) = \sin((k+1)\theta)/\sin \theta and \sin((mn)\theta) = \sin(m (n \theta)), combined with angle addition formulas. Chebyshev polynomials also connect to cyclotomic polynomials through their roots and algebraic structure; specifically, the discriminants and resultants of T_n(x) can be expressed using cyclotomic polynomials, reflecting the relation between the roots \cos((2j-1)\pi/(2n)) and roots of unity via the substitution x = (z + z^{-1})/2.[23]
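Both composition relations can be checked by substituting polynomials into polynomials, as in the following SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')

for m in range(1, 4):
    for n in range(1, 4):
        # Composition rule: T_m(T_n(x)) = T_{mn}(x).
        composed = sp.chebyshevt(m, sp.chebyshevt(n, x))
        assert sp.expand(composed - sp.chebyshevt(m * n, x)) == 0

        # Second-kind relation: U_{mn-1}(x) = U_{m-1}(T_n(x)) * U_{n-1}(x).
        lhs = sp.chebyshevu(m * n - 1, x)
        rhs = sp.chebyshevu(m - 1, sp.chebyshevt(n, x)) * sp.chebyshevu(n - 1, x)
        assert sp.expand(lhs - rhs) == 0
```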
Approximation and minimax aspects
Minimal infinity norm
Among all monic polynomials of degree n on the interval [-1, 1], the scaled Chebyshev polynomial of the first kind \hat{T}_n(x) = 2^{1-n} T_n(x) achieves the minimal infinity norm, with \|\hat{T}_n\|_\infty = 2^{1-n}.[24] This property establishes the Chebyshev polynomials as the unique minimizers in the uniform norm for this class of polynomials.[25] The minimal norm arises from the equioscillation of \hat{T}_n(x), which attains the values \pm 2^{1-n} at exactly n+1 points in [-1, 1], alternating in sign.[26] These points correspond to the extrema of T_n(x), located at x_k = \cos\left( \frac{k \pi}{n} \right) for k = 0, 1, \dots, n.[3] A proof sketch proceeds by contradiction using the alternation argument: suppose there exists another monic polynomial p(x) of degree n with \|p\|_\infty < 2^{1-n}. At the n+1 equioscillation points \hat{T}_n(x) equals \pm 2^{1-n} with alternating signs, so the difference q(x) = \hat{T}_n(x) - p(x) alternates in sign there and must have at least n roots; but q has degree at most n-1 because the leading terms cancel, forcing q \equiv 0 and contradicting the strict inequality.[26] This uniqueness holds for the continuous case on compact intervals.[27] In comparison, other orthogonal polynomial families, such as the Legendre polynomials, yield monic versions with strictly larger infinity norms on [-1, 1]; for instance, the monic Legendre polynomial of degree 2, x^2 - 1/3, has norm 2/3 > 1/2.[28] The leading coefficient of the standard Legendre polynomial P_n(x) is \frac{(2n)!}{2^n (n!)^2}, so the monic form \tilde{P}_n(x) = P_n(x) \big/ \frac{(2n)!}{2^n (n!)^2} has infinity norm \frac{2^n (n!)^2}{(2n)!}, attained at x = \pm 1, which exceeds the Chebyshev bound 2^{1-n} asymptotically by a factor proportional to \sqrt{n}.[25] This scaling of Chebyshev polynomials is fundamental in minimax approximation, where \hat{T}_n(x) provides the tightest bound on the deviation from zero among monic polynomials, enabling optimal error estimates in polynomial interpolation and expansion.
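A rough numerical comparison of the monic Chebyshev and monic Legendre polynomials of degree 8 on a fine grid, using NumPy's polynomial utilities (the grid maximum only approximates the true supremum, though here both maxima occur at the endpoints):

```python
import numpy as np
from math import comb

n = 8
grid = np.linspace(-1.0, 1.0, 20001)

# Monic Chebyshev polynomial 2^{1-n} T_n: its max-norm on [-1, 1] is 2^{1-n} ~ 0.0078.
monic_cheb = 2.0**(1 - n) * np.cos(n * np.arccos(grid))
print(np.max(np.abs(monic_cheb)), 2.0**(1 - n))

# Monic Legendre polynomial: divide P_n by its leading coefficient (2n)!/(2^n (n!)^2).
Pn = np.polynomial.legendre.Legendre.basis(n)
leading = comb(2 * n, n) / 2.0**n
monic_leg = Pn(grid) / leading
print(np.max(np.abs(monic_leg)))   # ~0.0199, noticeably larger than the Chebyshev bound
```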
Equioscillation theorem
The equioscillation theorem, also known as Chebyshev's alternation theorem, characterizes the unique best uniform approximation to a continuous function by polynomials of a given degree. For a continuous function f on a compact interval [a, b], let p be the polynomial of degree at most n that minimizes the uniform norm \|f - p\|_\infty. Then the error function f - p attains its maximum absolute value \|f - p\|_\infty, with alternating signs, at no fewer than n+2 points x_0 < x_1 < \cdots < x_{n+1} in [a, b].[29] This alternation property characterizes the best approximation, ensures its uniqueness, and provides a practical criterion for verifying optimality. The Chebyshev polynomial T_n(x) of the first kind exemplifies this theorem in the problem of approximating the monomial x^n by polynomials of degree at most n-1, equivalently finding the monic polynomial of degree n with least deviation from zero. That monic polynomial of minimal uniform norm on [-1, 1] is \hat{T}_n(x) = 2^{-(n-1)} T_n(x), and its values equioscillate—reaching \pm 2^{-(n-1)} with alternating signs—precisely at the n+1 extrema of T_n(x) in [-1, 1]. This minimal deviation follows directly from the equioscillation property, as any other monic polynomial of degree n would have a larger maximum norm by the theorem's characterization.[29] The theorem extends to arbitrary compact intervals via an affine transformation that maps [a, b] to [-1, 1]. Specifically, define the linear map x = \frac{b-a}{2} t + \frac{a+b}{2} for t \in [-1, 1]; then the best approximation on [a, b] corresponds to the best approximation on [-1, 1] under this change of variables, preserving the equioscillation structure after rescaling. Pafnuty Chebyshev first developed these ideas in his 1854 study of mechanical linkages, where minimizing deviations in approximate representations required optimal polynomial approximations; he applied the equioscillation principle to ensure the smallest possible maximum error in such systems.[30] A concrete illustration arises in approximating the monomial x^{n+1} on [-1, 1] by polynomials of degree at most n. The error in the best uniform approximation is proportional to T_{n+1}(x), which equioscillates at n+2 points, namely \cos\left(\frac{k\pi}{n+1}\right) for k = 0, 1, \dots, n+1, attaining \pm 1 alternately. This confirms the optimality via the theorem, with the leading coefficient determining the exact constant of proportionality.[29]
Special representations
As determinants
Chebyshev polynomials of the first kind T_n(x) can be represented as the determinant of an n \times n tridiagonal matrix derived from their three-term recurrence relation T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x), with initial conditions T_0(x) = 1 and T_1(x) = x. The matrix has x in the (1,1) entry, 2x in all other diagonal entries, and -1 in the off-diagonal entries (sub- and super-diagonals). This structure accommodates the initial condition at degree 1 while enforcing the recurrence for higher degrees through the linear algebra of the tridiagonal form.[31] For example, when n=3, the matrix is \begin{pmatrix} x & -1 & 0 \\ -1 & 2x & -1 \\ 0 & -1 & 2x \end{pmatrix}, and its determinant is x(4x^2 - 1) - 2x = 4x^3 - 3x = T_3(x). This explicit computation illustrates how the modified (1,1) entry shifts the scaling to match the leading coefficient 2^{n-1} of T_n(x).[31] Chebyshev polynomials of the second kind U_n(x) follow a similar tridiagonal structure from their recurrence U_n(x) = 2x U_{n-1}(x) - U_{n-2}(x), with U_0(x) = 1 and U_1(x) = 2x; in that case every diagonal entry equals 2x. These determinant representations stem from the broader theory of orthogonal polynomials, where the polynomials are the characteristic polynomials of Jacobi (tridiagonal) matrices encoding the recurrence coefficients. For Chebyshev polynomials, the diagonal recurrence coefficients of the underlying Jacobi matrix vanish, reflecting their symmetry with respect to the origin. Additionally, the recurrences link to continued fraction expansions of the orthogonalizing measures—such as (1 - x^2)^{-1/2} \, dx for T_n(x) and \sqrt{1 - x^2} \, dx for U_n(x)—where the convergents' denominators are these determinants, providing a bridge to moment problems and spectral theory.[32][33]
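The determinant representation is easy to test numerically; the helper below (an illustrative name, not a library function) builds the tridiagonal matrix described above and compares its determinant with the low-degree closed forms.

```python
import numpy as np

def cheb_T_det(n, x):
    """Tridiagonal determinant representation of T_n(x): x in the (1,1) entry,
    2x elsewhere on the diagonal, and -1 on the sub- and super-diagonals."""
    M = (np.diag([x] + [2 * x] * (n - 1))
         + np.diag([-1.0] * (n - 1), k=1)
         + np.diag([-1.0] * (n - 1), k=-1))
    return np.linalg.det(M)

x = 0.4
# Compare with T_3(x) = 4x^3 - 3x and T_4(x) = 8x^4 - 8x^2 + 1.
assert np.isclose(cheb_T_det(3, x), 4 * x**3 - 3 * x)
assert np.isclose(cheb_T_det(4, x), 8 * x**4 - 8 * x**2 + 1)
```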
Links to other polynomial families
Chebyshev polynomials of the first kind T_n(x) arise as a special case of the Jacobi polynomials P_n^{(\alpha, \beta)}(x) when the parameters are set to \alpha = \beta = -1/2. Specifically, P_n^{(-1/2, -1/2)}(x) = \frac{(2n)!}{2^{2n} (n!)^2} T_n(x), establishing a direct proportional relationship up to a scaling factor that depends on n.[34] Similarly, the Chebyshev polynomials of the second kind U_n(x) correspond to Jacobi polynomials with \alpha = \beta = 1/2, given by U_n(x) = (n+1) \frac{2^{2n} (n!)^2}{(2n+1)!} P_n^{(1/2, 1/2)}(x).[4] The Legendre polynomials, another classical family, are likewise special cases of Jacobi polynomials, with parameters \alpha = \beta = 0, so that P_n(x) = P_n^{(0,0)}(x). Chebyshev and Legendre polynomials therefore sit side by side within the equal-parameter (ultraspherical) subfamily of the Jacobi polynomials, at the values \alpha = \beta = -1/2, 0, and 1/2 for the first kind, Legendre, and second kind respectively. This positioning highlights their shared role within the hierarchy of classical orthogonal polynomials.[12] Both Chebyshev and Jacobi polynomials admit representations in terms of the Gauss hypergeometric function {}_2F_1. The explicit form for the Chebyshev polynomial of the first kind is T_n(x) = {}_2F_1\left( -n, n; \frac{1}{2}; \frac{1 - x}{2} \right), while the general Jacobi polynomial is P_n^{(\alpha, \beta)}(x) = \frac{(\alpha + 1)_n}{n!} {}_2F_1\left( -n, n + \alpha + \beta + 1; \alpha + 1; \frac{1 - x}{2} \right). Substituting \alpha = \beta = -1/2 into the Jacobi expression recovers the Chebyshev form up to the stated scaling, underscoring their unified hypergeometric structure.[3] In the discrete domain, Hahn polynomials serve as a generalization: the discrete Chebyshev (Gram) polynomials t_n(x, N), orthogonal with respect to the uniform weight on the points 0, 1, \dots, N-1, arise as Hahn polynomials with parameters \alpha = \beta = 0. This connection embeds Chebyshev-type polynomials within the discrete branch of the Askey scheme.[35] At the apex of the Askey scheme, the Wilson and Askey-Wilson polynomials encompass the continuous classical families through limiting processes: taking the limit q \to 1 in the Askey-Wilson polynomials yields the Wilson polynomials, and further parameter limits recover the Jacobi polynomials and hence the Chebyshev polynomials as special cases. This hierarchical limit relation positions Chebyshev polynomials as foundational within the full spectrum of hypergeometric orthogonal polynomials.[36]
Examples and computations
Low-degree polynomials of the first kind
The Chebyshev polynomials of the first kind, denoted T_n(x), are given explicitly for low degrees as follows.[3]
For degree 0: T_0(x) = 1.[3][37]
For degree 1: T_1(x) = x.[3][37]
For degree 2: T_2(x) = 2x^2 - 1.[3][37]
For degree 3: T_3(x) = 4x^3 - 3x.[3][38]
For degree 4: T_4(x) = 8x^4 - 8x^2 + 1.[3][38]
For degree 5: T_5(x) = 16x^5 - 20x^3 + 5x.[3][38]
On the interval [-1, 1], these polynomials satisfy |T_n(x)| \leq 1, exhibiting n half-oscillations between -1 and 1; the frequency of oscillation increases with n while the amplitude remains bounded by 1.[3][39]
Low-degree polynomials of the second kind
The Chebyshev polynomials of the second kind U_n(x) are defined for low degrees by the following explicit expressions, derived from their generating function or recurrence relation.[4]
| Degree n | Polynomial U_n(x) |
|---|---|
| 0 | 1 |
| 1 | 2x |
| 2 | 4x^2 - 1 |
| 3 | 8x^3 - 4x |
| 4 | 16x^4 - 12x^2 + 1 |
| 5 | 32x^5 - 32x^3 + 6x |