
Chebyshev polynomials

Chebyshev polynomials are a class of orthogonal polynomials introduced by the Russian mathematician Pafnuty Lvovich Chebyshev in his 1854 paper "Théorie des mécanismes connus sous le nom de parallélogrammes," where they arose in the context of studying mechanisms and approximation problems. Defined on the interval [-1, 1], they consist of two main families: the polynomials of the first kind, denoted T_n(x), given by T_n(x) = \cos(n \arccos x) for x \in [-1, 1], and the polynomials of the second kind, U_n(x), defined similarly via \sin((n+1) \arccos x) / \sin(\arccos x). These polynomials satisfy a three-term recurrence relation, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x) with T_0(x) = 1 and T_1(x) = x, and are orthogonal with respect to the weight function 1/\sqrt{1 - x^2} for T_n and \sqrt{1 - x^2} for U_n. A defining feature of Chebyshev polynomials is their minimax property: among all monic polynomials of degree n, T_n(x)/2^{n-1} has the smallest possible maximum on [-1, 1], equaling 1/2^{n-1}, which makes them ideal for uniform approximation and error minimization in numerical methods. Their roots and extrema are explicitly known; the roots of T_n(x) are at x_k = \cos((2k-1)\pi / (2n)) for k = 1, \dots, n, all real and distinct in (-1, 1), facilitating applications in interpolation, numerical quadrature, and spectral methods. Chebyshev polynomials also appear in generating functions, such as \sum_{n=0}^\infty T_n(x) t^n = (1 - t x)/(1 - 2 t x + t^2), and have connections to trigonometric identities via de Moivre's theorem. In numerical analysis, Chebyshev polynomials underpin techniques like Chebyshev spectral methods for solving differential equations, where they enable efficient integration on non-uniform grids via Chebyshev-Gauss quadrature, and they appear in filter design for signal processing due to their equioscillatory behavior. Their role in approximation theory extends to modern computational tools, such as the Chebfun software system, which leverages them for numerical computations with functions represented as Chebyshev series. Chebyshev's foundational work on these polynomials not only advanced mechanism theory but also laid the groundwork for the St. Petersburg school of mathematics, influencing probability theory, number theory, and approximation theory.

Definitions

Recurrence relations

The Chebyshev polynomials of the first kind T_n(x) are defined by the three-term linear recurrence relation T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), \quad n \geq 1, with initial conditions T_0(x) = 1 and T_1(x) = x. Applying the recurrence yields, for instance, T_2(x) = 2x^2 - 1. The Chebyshev polynomials of the second kind U_n(x) satisfy an analogous recurrence U_{n+1}(x) = 2x U_n(x) - U_{n-1}(x), \quad n \geq 1, with initial conditions U_0(x) = 1 and U_1(x) = 2x. For n=2, this gives U_2(x) = 4x^2 - 1. These relations generate the polynomials iteratively: starting from the degree-zero and degree-one terms, each subsequent polynomial is obtained via a simple linear combination of the two predecessors, requiring only a fixed number of arithmetic operations per step. This approach is computationally efficient, achieving evaluation of T_n(x) or U_n(x) in O(n) time and exhibiting numerical stability when x \in [-1, 1].
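
As a minimal illustration of this iterative scheme (a sketch in Python; the helper name chebyshev_T_U is hypothetical), the recurrence can be advanced for both kinds simultaneously:

```python
def chebyshev_T_U(n, x):
    """Evaluate T_n(x) and U_n(x) via the three-term recurrence in O(n) steps."""
    if n == 0:
        return 1.0, 1.0
    T_prev, T_curr = 1.0, x          # T_0, T_1
    U_prev, U_curr = 1.0, 2.0 * x    # U_0, U_1
    for _ in range(n - 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
        U_prev, U_curr = U_curr, 2.0 * x * U_curr - U_prev
    return T_curr, U_curr

print(chebyshev_T_U(2, 0.5))   # (-0.5, 0.0): T_2(0.5) = 2(0.25) - 1, U_2(0.5) = 4(0.25) - 1
```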

Trigonometric identities

Chebyshev polynomials of the first kind, denoted T_n(x), are fundamentally defined through their trigonometric representation as T_n(x) = \cos(n \arccos x) for x \in [-1, 1]. This definition establishes a direct connection to multiple-angle formulas for the cosine function, originating from Chebyshev's nineteenth-century work on trigonometric series and approximation theory. For instance, the double-angle formula \cos(2\theta) = 2\cos^2(\theta) - 1 yields T_2(x) = 2x^2 - 1 upon substituting x = \cos \theta. Higher-degree polynomials follow similarly from iterative applications of multiple-angle identities, such as \cos(3\theta) = 4\cos^3(\theta) - 3\cos(\theta), giving T_3(x) = 4x^3 - 3x. The trigonometric form extends beyond the interval [-1, 1] via hyperbolic functions: for x > 1, T_n(x) = \cosh(n \operatorname{arccosh} x), and for x < -1, T_n(x) = (-1)^n \cosh(n \operatorname{arccosh}(-x)). This hyperbolic representation maintains the polynomial's properties while allowing evaluation outside the interval [-1, 1]. Chebyshev polynomials of the second kind, denoted U_n(x), are similarly expressed trigonometrically as U_n(x) = \frac{\sin((n+1) \arccos x)}{\sin(\arccos x)} for x \in [-1, 1], excluding points where the denominator vanishes. This form arises from sine multiple-angle formulas, paralleling the cosine basis for the first kind. At the endpoints of the interval, these polynomials exhibit specific values: T_n(1) = 1 and T_n(-1) = (-1)^n for all n; likewise, U_n(1) = n+1 and U_n(-1) = (-1)^n (n+1). These boundary behaviors underscore their oscillatory nature within [-1, 1], mimicking cosine and sine waves scaled to the interval.
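
A quick numerical check of the trigonometric and hyperbolic forms against the explicit polynomial T_4(x) = 8x^4 - 8x^2 + 1 (a sketch assuming NumPy):

```python
import numpy as np

n = 4
x = 0.6
print(np.cos(n * np.arccos(x)), 8 * x**4 - 8 * x**2 + 1)     # agree inside [-1, 1]

x = 1.5
print(np.cosh(n * np.arccosh(x)), 8 * x**4 - 8 * x**2 + 1)   # agree for x > 1
```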

Generating functions

The generating functions for Chebyshev polynomials offer a formal power series representation that encapsulates the entire sequence in a closed rational form, aiding in coefficient analysis and links to broader analytic structures. For the first kind, the generating function is \sum_{n=0}^{\infty} T_n(x) t^n = \frac{1 - t x}{1 - 2 t x + t^2}, which converges for |t| < 1 when |x| \leq 1, and for |t| < \frac{1}{|x| + \sqrt{x^2 - 1}} when |x| > 1 (assuming x real and greater than 1; analogous for x < -1). For the second kind, it is \sum_{n=0}^{\infty} U_n(x) t^n = \frac{1}{1 - 2 t x + t^2}, with the same convergence conditions. These functions derive from the trigonometric characterizations of the polynomials. With x = \cos \theta and \theta \in [0, \pi], T_n(x) = \cos(n \theta), so the series \sum_{n=0}^{\infty} T_n(x) t^n = \sum_{n=0}^{\infty} t^n \cos(n \theta). This sums via the formula for the real part of a geometric series: \sum_{n=0}^{\infty} r^n e^{i n \theta} = 1 / (1 - r e^{i \theta}) for r = t and |t| < 1, yielding the rational expression after taking the real component and simplifying. A parallel derivation holds for U_n(x) = \sin((n+1) \theta) / \sin \theta, where the series becomes \sum_{n=0}^{\infty} t^n \sin((n+1) \theta) / \sin \theta; the numerator sums to \sin \theta / (1 - 2 t \cos \theta + t^2) via the imaginary part of the geometric series, and the overall form simplifies to the reciprocal of the denominator. The common denominator 1 - 2 t x + t^2 factors as (1 - t e^{i \theta})(1 - t e^{-i \theta}) under x = \cos \theta, explicitly connecting it to products of geometric series terms in the complex plane. Such generating functions allow extraction of individual coefficients through the Cauchy integral formula, for instance, T_n(x) = \frac{1}{2\pi i} \oint \frac{1 - t x}{(1 - 2 t x + t^2) t^{n+1}} \, dt over a contour encircling the origin within the radius of convergence. They further support series manipulations to derive identities and underpin proofs of orthogonality by generating integral representations.
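
The coefficient-by-coefficient claim can be verified symbolically by expanding the generating function as a power series in t (a sketch assuming SymPy):

```python
from sympy import symbols, series, chebyshevt, expand

x, t = symbols('x t')
G = (1 - t * x) / (1 - 2 * t * x + t**2)
expansion = series(G, t, 0, 6).removeO()
for n in range(6):
    print(n, expand(expansion.coeff(t, n) - chebyshevt(n, x)))   # each difference is 0
```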

Explicit expressions

Closed-form formulas

Chebyshev polynomials of the first kind, T_n(x), admit an explicit summation formula as a polynomial of degree n: T_n(x) = \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k \binom{n}{2k} (1 - x^2)^k x^{n - 2k}. This expression arises from the binomial expansion underlying the trigonometric identity T_n(\cos \theta) = \cos(n\theta). Similarly, Chebyshev polynomials of the second kind, U_n(x), have the explicit form U_n(x) = \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k \binom{n - k}{k} (2x)^{n - 2k}, which is also a polynomial of degree n. Both T_n(x) and U_n(x) can be expressed using the hypergeometric function {}_2F_1. Specifically, T_n(x) = {}_2F_1\left(-n, n; \frac{1}{2}; \frac{1 - x}{2}\right), and U_n(x) = (n + 1) {}_2F_1\left(-n, n + 2; \frac{3}{2}; \frac{1 - x}{2}\right). These representations highlight their connections to classical special functions. The leading coefficient of T_n(x) is 2^{n-1} for n \geq 1 (with T_0(x) = 1), while for U_n(x) it is 2^n.
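
Both summation formulas can be checked directly against the trigonometric definitions (a sketch in Python; T_explicit and U_explicit are hypothetical helper names):

```python
import numpy as np
from math import comb

def T_explicit(n, x):
    """Summation formula: sum_k (-1)^k C(n, 2k) (1 - x^2)^k x^(n-2k)."""
    return sum((-1)**k * comb(n, 2 * k) * (1 - x**2)**k * x**(n - 2 * k)
               for k in range(n // 2 + 1))

def U_explicit(n, x):
    """Summation formula: sum_k (-1)^k C(n-k, k) (2x)^(n-2k)."""
    return sum((-1)**k * comb(n - k, k) * (2 * x)**(n - 2 * k)
               for k in range(n // 2 + 1))

x = 0.3
print(T_explicit(4, x), np.cos(4 * np.arccos(x)))                          # equal values
print(U_explicit(4, x), np.sin(5 * np.arccos(x)) / np.sin(np.arccos(x)))   # equal values
```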

Rodrigues-type formulas

The Rodrigues-type formulas provide differential representations for Chebyshev polynomials, analogous to Rodrigues' formula for other classical orthogonal polynomials. These expressions are particularly valuable in theoretical contexts, such as deriving properties through repeated differentiation and integration by parts, and they arise naturally from the Sturm-Liouville theory associated with the Chebyshev differential equation. In this framework, the polynomials are generated as solutions to a self-adjoint boundary value problem on the interval [-1, 1], where the weight function and the form of the operator lead to these explicit derivative-based definitions. For the Chebyshev polynomials of the first kind T_n(x), the Rodrigues-type formula is T_n(x) = (-1)^n \sqrt{1 - x^2} \frac{1}{(2n-1)!!} \frac{d^n}{dx^n} (1 - x^2)^{n - 1/2}, where (2n-1)!! = 1 \cdot 3 \cdot 5 \cdots (2n-1) denotes the double factorial, equivalent to \frac{(2n)!}{2^n n!}. This representation highlights the role of the weight function w(x) = (1 - x^2)^{-1/2} in the orthogonality integral, as the formula can be viewed as T_n(x) = c_n w(x)^{-1} \frac{d^n}{dx^n} \left[ w(x) (1 - x^2)^n \right] with an appropriate normalization constant c_n. For the Chebyshev polynomials of the second kind U_n(x), the corresponding formula is U_n(x) = (-1)^n \frac{2^n (n+1)!}{(2n+1)!} (1 - x^2)^{-1/2} \frac{d^n}{dx^n} (1 - x^2)^{n + 1/2}. Here, the inverse weight (1 - x^2)^{-1/2} appears explicitly, reflecting the orthogonality with respect to w(x) = \sqrt{1 - x^2}. This form facilitates generalizations and connections to other polynomial families, as it stems from the special case \alpha = \beta = 1/2 of the Jacobi polynomials via the relation U_n(x) = \frac{2^{2n}}{\binom{2n+1}{n}} P_n^{(1/2, 1/2)}(x). These formulas are instrumental in Sturm-Liouville theory, where the Chebyshev polynomials emerge as eigenfunctions of singular differential operators of the form \frac{d}{dx} \left[ p(x) \frac{d}{dx} \right] + q(x) + \lambda w(x), with p(x) = (1 - x^2)^{1/2} for the first kind and p(x) = (1 - x^2)^{3/2} for the second kind; the Rodrigues representation allows direct construction of the eigenfunctions without solving the differential equation explicitly. Furthermore, by varying the parameters of the underlying Jacobi polynomials (the Chebyshev cases \alpha = \beta = \mp 1/2 pass to the Legendre case \alpha = \beta = 0), the Chebyshev Rodrigues formulas connect to those of the Legendre polynomials, providing a pathway for asymptotic analysis and uniform approximations across classical families.
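
The first-kind Rodrigues formula can be spot-checked symbolically at a rational point inside (-1, 1) (a sketch assuming SymPy):

```python
from sympy import symbols, sqrt, diff, factorial2, chebyshevt, Rational, simplify

x = symbols('x')
n = 4
rodrigues = ((-1)**n * sqrt(1 - x**2) / factorial2(2 * n - 1)
             * diff((1 - x**2)**(n - Rational(1, 2)), x, n))
# evaluate the difference at a rational point inside (-1, 1); it simplifies to 0
print(simplify((rodrigues - chebyshevt(n, x)).subs(x, Rational(1, 3))))
```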

Relations between kinds

Connections via identities

Chebyshev polynomials of the first kind, T_n(x), and the second kind, U_n(x), are interconnected through several fundamental identities that highlight their unified role in approximation theory and orthogonal polynomial systems. Pafnuty Chebyshev originally developed these polynomials in the mid-19th century as part of his investigations into the best uniform approximation of functions by polynomials, particularly in the context of minimizing maximum deviations on the interval [-1, 1]. His work, such as the 1854 paper "Théorie des mécanismes connus sous le nom de parallélogrammes," demonstrated how these polynomials arise naturally from trigonometric substitutions and linkages in mechanisms, providing a basis for their mutual relations that facilitate efficient computational methods in numerical analysis. A key linking identity is the product relation (1 - x^2) U_{n-1}(x) = x T_n(x) - T_{n+1}(x), which expresses the second-kind polynomial in terms of consecutive first-kind polynomials and underscores their shared oscillatory behavior on [-1, 1]. This formula can be derived from the trigonometric definitions T_n(x) = \cos(n \theta) and U_n(x) = \frac{\sin((n+1) \theta)}{\sin \theta} where x = \cos \theta, by manipulating sine and cosine multiple-angle expressions. The derivative relation T_n'(x) = n U_{n-1}(x) provides another direct connection, showing that the derivative of a first-kind polynomial is a scalar multiple of a second-kind polynomial of one lower degree; this is particularly useful in spectral methods and quadrature rules, as it preserves orthogonality properties under differentiation. A reciprocal form is U_n(x) = \frac{1}{n+1} T_{n+1}'(x), which inverts the derivative relation.
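
Both linking identities are easy to confirm symbolically for a particular degree (a sketch assuming SymPy):

```python
from sympy import symbols, chebyshevt, chebyshevu, diff, expand

x = symbols('x')
n = 5
print(expand(diff(chebyshevt(n, x), x) - n * chebyshevu(n - 1, x)))        # 0
print(expand((1 - x**2) * chebyshevu(n - 1, x)
             - (x * chebyshevt(n, x) - chebyshevt(n + 1, x))))             # 0
```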

Differences in degree and leading coefficients

The Chebyshev polynomials of the first kind T_n(x) and second kind U_n(x) are both polynomials of exact degree n for each nonnegative integer n [Rivlin, T.J. (1974). The Chebyshev Polynomials: From Approximation Theory to Algebra and Number Theory. John Wiley & Sons]. For n \geq 1, the leading coefficient of T_n(x) is 2^{n-1}, whereas that of U_n(x) is 2^n [Rivlin 1974]. These differences in leading coefficients imply that the monic version of U_n(x) is simply U_n(x)/2^n, while the monic T_n(x) requires division by 2^{n-1} for n \geq 1 [Rivlin 1974]. A key distinction in normalization arises from their behaviors on the interval [-1,1]. The polynomials T_n(x) are bounded such that |T_n(x)| \leq 1 for all x \in [-1,1], equioscillating between -1 and 1 [Rivlin 1974]. In contrast, U_n(x) grows linearly with n near the endpoints, attaining the values U_n(1) = n+1 and U_n(-1) = (-1)^n (n+1) [Rivlin 1974]. For T_n(x), the endpoint values are T_n(1) = 1 and T_n(-1) = (-1)^n [Rivlin 1974]. These endpoint behaviors are illustrated in the following table for small values of n:
n    T_n(1)    T_n(-1)    U_n(1)    U_n(-1)
0    1         1          1         1
1    1         -1         2         -2
2    1         1          3         3
3    1         -1         4         -4
4    1         1          5         5
[Rivlin 1974] The structural differences also extend to their orthogonality properties, where T_n(x) are orthogonal with respect to the weight function 1/\sqrt{1-x^2} on [-1,1], while U_n(x) are orthogonal with respect to \sqrt{1-x^2} on the same interval [Rivlin 1974].

Fundamental properties

Symmetry and parity

Chebyshev polynomials of the first kind T_n(x) exhibit definite parity under the reflection x \to -x, satisfying T_n(-x) = (-1)^n T_n(x). Thus, for even n, T_n(x) is an even function, while for odd n, it is an odd function. The same parity relation holds for the Chebyshev polynomials of the second kind, U_n(-x) = (-1)^n U_n(x), making U_n(x) even when n is even and odd when n is odd. This reflection symmetry implies that polynomials of even degree T_{2n}(x) and U_{2n}(x) are even functions, whereas those of odd degree T_{2n+1}(x) and U_{2n+1}(x) are odd functions. Reduction formulas exploit this structure to express higher-degree polynomials in terms of lower-degree ones. For instance, the even-degree case yields T_{2n}(x) = 2 T_n^2(x) - 1, derived from the double-angle identity \cos(2\theta) = 2\cos^2(\theta) - 1 with x = \cos \theta. A similar relation holds for U_n, facilitating computations and decompositions into even and odd components. The parity properties have practical implications for integration over symmetric intervals such as [-1, 1]. For odd n, since T_n(x) and U_n(x) are odd functions, their integrals over this interval vanish: \int_{-1}^{1} T_n(x) \, dx = 0 and \int_{-1}^{1} U_n(x) \, dx = 0. This simplifies evaluations involving even weight functions or symmetric integrands, as the odd parts contribute nothing. Graphically, these polynomials display symmetric oscillations around the x-axis within [-1, 1], with even-degree instances mirroring across the y-axis and odd-degree ones antisymmetric, underscoring their utility in approximation theory for symmetric domains.
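
These parity and reduction properties can be verified symbolically (a sketch assuming SymPy, using n = 7 and the even-degree identity T_6 = 2T_3^2 - 1):

```python
from sympy import symbols, chebyshevt, integrate, expand

x = symbols('x')
print(expand(chebyshevt(7, -x) + chebyshevt(7, x)))                 # 0: T_7 is odd
print(integrate(chebyshevt(7, x), (x, -1, 1)))                      # 0 on the symmetric interval
print(expand(chebyshevt(6, x) - (2 * chebyshevt(3, x)**2 - 1)))     # 0: T_6 = 2*T_3^2 - 1
```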

Roots and extrema

The Chebyshev polynomials of the first kind, T_n(x), possess n distinct real roots within the open interval (-1, 1), explicitly given by x_k = \cos\left( \frac{(2k-1)\pi}{2n} \right), \quad k = 1, 2, \dots, n. These roots stem from the trigonometric representation T_n(\cos \theta) = \cos(n\theta), where the zeros correspond to odd multiples of \pi/(2n) in the argument \theta. In addition to these roots, T_n(x) exhibits n-1 interior extrema on (-1, 1), with the full set of n+1 extremal points (including endpoints) located at x_k = \cos\left( \frac{k\pi}{n} \right), \quad k = 0, 1, \dots, n. At these points, T_n(x) alternates between maximum and minimum values of +1 and -1, demonstrating equioscillation across the interval [-1, 1]. The Chebyshev polynomials of the second kind, U_n(x), similarly have n distinct real roots in (-1, 1), located at x_k = \cos\left( \frac{k\pi}{n+1} \right), \quad k = 1, 2, \dots, n. These polynomials feature n-1 local extrema on (-1, 1), one between each pair of consecutive roots. A notable interlacing property holds between the two kinds: the n-1 roots of U_{n-1}(x) strictly interlace the n roots of T_n(x) within (-1, 1), meaning each root of U_{n-1} lies between consecutive roots of T_n. This follows from the derivative relation T_n'(x) = n U_{n-1}(x), which shows that the roots of U_{n-1} are exactly the interior extrema of T_n and therefore separate its zeros, and it is consistent with the three-term recurrence relations satisfied by both families.
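
The interlacing of the zeros of U_{n-1} with those of T_n can be checked numerically from the explicit root formulas (a sketch assuming NumPy):

```python
import numpy as np

n = 6
roots_T = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))   # zeros of T_n
roots_U = np.cos(np.arange(1, n) * np.pi / n)                       # zeros of U_{n-1}

rT, rU = np.sort(roots_T), np.sort(roots_U)
# strict interlacing: each zero of U_{n-1} lies between consecutive zeros of T_n
print(np.all((rT[:-1] < rU) & (rU < rT[1:])))                       # True
```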

Orthogonality relations

The Chebyshev polynomials of the first kind T_n(x) satisfy the orthogonality relation \int_{-1}^{1} \frac{T_m(x) T_n(x)}{\sqrt{1 - x^2}} \, dx = \begin{cases} 0 & m \neq n, \\ \pi & m = n = 0, \\ \pi/2 & m = n > 0, \end{cases} with respect to the weight function w(x) = (1 - x^2)^{-1/2} on the interval [-1, 1]. The corresponding squared L^2-norms are \|T_0\|^2 = \pi and \|T_n\|^2 = \pi/2 for n > 0. Similarly, the Chebyshev polynomials of the second kind U_n(x) are orthogonal with respect to the weight function w(x) = \sqrt{1 - x^2}, satisfying \int_{-1}^{1} U_m(x) U_n(x) \sqrt{1 - x^2} \, dx = \begin{cases} 0 & m \neq n, \\ \pi/2 & m = n. \end{cases} The squared norms are \|U_n\|^2 = \pi/2 for all n \geq 0. These relations can be proved using the substitution x = \cos \theta, where \theta \in [0, \pi]. For T_n(x), this yields T_n(\cos \theta) = \cos(n\theta) and dx / \sqrt{1 - x^2} = -d\theta, transforming the integral into \int_0^\pi \cos(m\theta) \cos(n\theta) \, d\theta, which evaluates to the stated values via the standard orthogonality integrals for cosines. For U_n(x), U_n(\cos \theta) = \sin((n+1)\theta) / \sin \theta and \sqrt{1 - x^2} \, dx = \sin \theta \, d\theta, leading to \int_0^\pi \sin((m+1)\theta) \sin((n+1)\theta) \, d\theta = (\pi/2) \delta_{mn}. Discrete variants of these relations exist, where the polynomials are orthogonal with respect to weighted sums over Chebyshev-Gauss or Chebyshev-Lobatto points, providing the foundation for efficient numerical quadrature rules such as Gauss-Chebyshev quadrature.
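
Because T_m(\cos\theta) T_n(\cos\theta) is a polynomial of degree m+n, the weighted integrals above are reproduced exactly (up to rounding) by an N-point Gauss-Chebyshev rule with 2N > m+n (a sketch in Python; gauss_chebyshev_inner is a hypothetical helper name):

```python
import numpy as np

def gauss_chebyshev_inner(m, n, N=64):
    """N-point Gauss-Chebyshev rule for the weighted inner product of T_m and T_n;
    exact (up to rounding) whenever m + n < 2N."""
    theta = (np.arange(N) + 0.5) * np.pi / N      # nodes are x_j = cos(theta_j)
    return np.pi / N * np.sum(np.cos(m * theta) * np.cos(n * theta))

print(gauss_chebyshev_inner(3, 5))   # ~0
print(gauss_chebyshev_inner(4, 4))   # ~pi/2
print(gauss_chebyshev_inner(0, 0))   # ~pi
```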

Calculus and algebraic operations

Differentiation rules

The differentiation of Chebyshev polynomials follows directly from their trigonometric definitions. For the polynomials of the first kind, let x = \cos \theta, so T_n(x) = \cos (n \theta). Differentiating with respect to x gives T_n'(x) = n \frac{\sin (n \theta )}{\sin \theta}, where the chain rule has been applied since \frac{d\theta}{dx} = -\frac{1}{\sin \theta}. The expression \frac{\sin (n \theta )}{\sin \theta} is precisely the definition of the Chebyshev polynomial of the second kind U_{n-1}(x), yielding the fundamental relation T_n'(x) = n U_{n-1}(x). For the polynomials of the second kind, defined by U_n(x) = \frac{\sin ((n+1) \theta )}{\sin \theta}, differentiation again uses the chain rule and trigonometric identities, but the resulting expression is more conveniently handled via the recurrence relation satisfied by U_n: U_n(x) = 2x U_{n-1}(x) - U_{n-2}(x), with U_0(x) = 1 and U_1(x) = 2x. Differentiating this recurrence term by term produces a recurrence for the derivatives: U_n'(x) = 2 U_{n-1}(x) + 2x U_{n-1}'(x) - U_{n-2}'(x), with base cases U_0'(x) = 0 and U_1'(x) = 2. This allows computation of U_n'(x) iteratively for any n. Higher-order derivatives of T_n(x) can be obtained by repeated application of the basic rule T_n'(x) = n U_{n-1}(x), combined with the recurrences for U_m. A closed-form expression for the k-th derivative, with k \leq n, is available in terms of the ultraspherical (Gegenbauer) polynomials C_m^{(\lambda)}(x): T_n^{(k)}(x) = 2^{k-1} n (k-1)! \, C_{n-k}^{(k)}(x), which reduces to T_n'(x) = n U_{n-1}(x) for k = 1 because C_m^{(1)}(x) = U_m(x), and can be checked directly for small n and k. These rules facilitate integration by parts in proofs of the orthogonality relations for both kinds of Chebyshev polynomials over [-1, 1].
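
The k-th derivative formula can be confirmed symbolically for particular n and k (a sketch assuming SymPy, whose gegenbauer function provides C_m^{(\lambda)}):

```python
from sympy import symbols, chebyshevt, gegenbauer, diff, factorial, expand

x = symbols('x')
n, k = 6, 3
lhs = diff(chebyshevt(n, x), x, k)
rhs = 2**(k - 1) * n * factorial(k - 1) * gegenbauer(n - k, k, x)
print(expand(lhs - rhs))   # 0
```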

Integration formulas

The indefinite integral of the Chebyshev polynomial of the first kind T_n(x) for n \geq 2 is \int T_n(x) \, dx = \frac{1}{2} \left[ \frac{T_{n+1}(x)}{n+1} - \frac{T_{n-1}(x)}{n-1} \right] + C. For n = 1, \int T_1(x) \, dx = \frac{1}{4} T_2(x) + C, and for n = 0, where T_0(x) = 1, the integral simplifies to \int T_0(x) \, dx = T_1(x) + C = x + C. Similarly, the indefinite integral of the Chebyshev polynomial of the second kind U_n(x) is \int U_n(x) \, dx = \frac{T_{n+1}(x)}{n+1} + C, a direct consequence of the derivative relation T_{n+1}'(x) = (n+1) U_n(x). These expressions follow from the recurrence relations and trigonometric representations of the polynomials. A key definite integral involving T_n(x) is \int_{-1}^{1} \frac{T_n(x)}{\sqrt{1 - x^2}} \, dx = \begin{cases} \pi & \text{if } n = 0, \\ 0 & \text{if } n > 0. \end{cases} This result arises from the substitution x = \cos \theta, which transforms the integral into \int_0^\pi \cos(n \theta) \, d\theta, evaluating to zero for positive integers n and to \pi for n = 0. The substitution also connects the integral to the inverse sine function, as \int \frac{1}{\sqrt{1 - x^2}} \, dx = \arcsin x + C. For products of Chebyshev polynomials, reduction formulas can be derived using integration by parts on the weighted inner product \int_{-1}^1 T_m(x) T_n(x) (1 - x^2)^{-1/2} \, dx. Specifically, leveraging the relation T_n'(x) = n U_{n-1}(x) together with the weight yields recurrences that reduce the degrees m and n, facilitating computation of related non-orthogonal integrals.
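
Both antiderivative formulas can be verified by differentiating them back (a sketch assuming SymPy):

```python
from sympy import symbols, chebyshevt, chebyshevu, diff, expand

x = symbols('x')
n = 5
F = (chebyshevt(n + 1, x) / (n + 1) - chebyshevt(n - 1, x) / (n - 1)) / 2
print(expand(diff(F, x) - chebyshevt(n, x)))                          # 0
print(expand(diff(chebyshevt(n + 1, x) / (n + 1), x)
             - chebyshevu(n, x)))                                     # 0
```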

Product identities

The product of two Chebyshev polynomials of the first kind admits a simple expression in terms of a sum of other such polynomials: T_m(x) T_n(x) = \frac{1}{2} \left[ T_{m+n}(x) + T_{|m-n|}(x) \right], valid for nonnegative integers m and n, where T_0(x) = 1. This identity follows directly from the trigonometric representation T_k(\cos \theta) = \cos(k \theta) and the product-to-sum formula for cosines. As a consequence, the addition formula for the first kind is obtained by rearranging: T_{m+n}(x) = 2 T_m(x) T_n(x) - T_{|m-n|}(x). For example, taking m = n = 2, T_2(x) T_2(x) = \frac{1}{2} \left[ T_4(x) + T_0(x) \right] = 4x^4 - 4x^2 + 1, which matches the direct expansion of (2x^2 - 1)^2. This product identity facilitates the multiplication of functions represented by their Chebyshev series coefficients, as the product corresponds to a discrete convolution that can be efficiently computed. For Chebyshev polynomials of the second kind, the corresponding addition formula is U_{m+n}(x) = U_m(x) U_n(x) - U_{m-1}(x) U_{n-1}(x), assuming m, n \geq 1, with U_{-1}(x) = 0. This relation arises from the trigonometric definition U_k(\cos \theta) = \sin((k+1) \theta) / \sin \theta and the angle addition formula for sines. A direct product-to-sum formula for U_m U_n is more involved but can be derived similarly using sine product formulas, often resulting in expressions involving both U_k and terms adjusted by powers of (1 - x^2). These identities underpin applications in spectral methods and orthogonal expansions where products of series appear.
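
Both product and addition identities can be confirmed symbolically for sample degrees (a sketch assuming SymPy):

```python
from sympy import symbols, chebyshevt, chebyshevu, expand

x = symbols('x')
m, n = 4, 7
print(expand(chebyshevt(m, x) * chebyshevt(n, x)
             - (chebyshevt(m + n, x) + chebyshevt(abs(m - n), x)) / 2))      # 0
print(expand(chebyshevu(m + n, x)
             - (chebyshevu(m, x) * chebyshevu(n, x)
                - chebyshevu(m - 1, x) * chebyshevu(n - 1, x))))             # 0
```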

Composition properties

Chebyshev polynomials of the first kind satisfy a simple and elegant composition rule: for nonnegative integers m and n, T_m(T_n(x)) = T_{mn}(x). This identity holds because both sides equal \cos(m n \arccos x) under the standard trigonometric definition T_k(x) = \cos(k \arccos x). The operation is commutative, so T_n(T_m(x)) = T_{mn}(x) as well. As a direct consequence of this substitution property, the polynomial T_n(x) divides T_{kn}(x) in the ring of polynomials with rational coefficients for any odd positive integer k, since T_{kn}(x) = T_k(T_n(x)) and, for odd k, T_k(y) is an odd polynomial divisible by y, so the composition is divisible by T_n(x). For Chebyshev polynomials of the second kind, the composition U_n(T_m(x)) does not yield a single U_k(x), but a related product identity provides a useful functional relation: for nonnegative integers m and n, U_{mn-1}(x) = U_{m-1}(T_n(x)) \, U_{n-1}(x). This follows from the trigonometric forms U_k(\cos \theta) = \sin((k+1)\theta)/\sin \theta and \sin((mn)\theta) = \sin(m (n \theta)), combined with angle addition formulas. Chebyshev polynomials also connect to cyclotomic polynomials through their roots and algebraic structure; specifically, the discriminants and resultants of T_n(x) can be expressed using cyclotomic polynomials, reflecting the relation between the roots \cos((2j-1)\pi/(2n)) and roots of unity via the substitution x = (z + z^{-1})/2.
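
The composition rule and the second-kind product relation can be checked symbolically (a sketch assuming SymPy, with m = 3 and n = 4):

```python
from sympy import symbols, chebyshevt, chebyshevu, expand

x = symbols('x')
m, n = 3, 4
print(expand(chebyshevt(m, chebyshevt(n, x)) - chebyshevt(m * n, x)))         # 0
print(expand(chebyshevu(m * n - 1, x)
             - chebyshevu(m - 1, chebyshevt(n, x)) * chebyshevu(n - 1, x)))   # 0
```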

Approximation and minimax aspects

Minimal infinity norm

Among all monic polynomials of degree n on the interval [-1, 1], the scaled Chebyshev polynomial of the first kind \hat{T}_n(x) = 2^{1-n} T_n(x) achieves the minimal uniform (infinity) norm, with \|\hat{T}_n\|_\infty = 2^{1-n}. This property establishes the Chebyshev polynomials as the unique minimizers in the uniform norm for this class of polynomials. The minimal norm arises from the equioscillation of \hat{T}_n(x), which attains the values \pm 2^{1-n} at exactly n+1 points in [-1, 1], alternating in sign. These points correspond to the extrema of T_n(x), located at x_k = \cos\left( \frac{k \pi}{n} \right) for k = 0, 1, \dots, n. A proof sketch proceeds by contradiction using the alternation argument: suppose there exists another monic polynomial p(x) of degree n with \|p\|_\infty < 2^{1-n}. The difference q(x) = \hat{T}_n(x) - p(x) has degree at most n-1, since the leading terms cancel. At the n+1 equioscillation points, \hat{T}_n(x) takes the values \pm 2^{1-n} with alternating signs while |p(x)| < 2^{1-n}, so q(x) alternates in sign there and must have at least n roots; a nonzero polynomial of degree at most n-1 cannot have n roots, so q \equiv 0, contradicting the strict inequality. This uniqueness holds for the continuous case on compact intervals. In comparison, other orthogonal polynomial families, such as the Legendre polynomials, yield monic versions with strictly larger infinity norms on [-1, 1]; for instance, the monic Legendre polynomial of degree 2, namely x^2 - 1/3, has norm 2/3 > 1/2. The leading coefficient of the standard Legendre polynomial P_n(x) is \frac{(2n)!}{2^n (n!)^2}, so the sup norm of the monic form \tilde{P}_n(x) = P_n(x) \big/ \frac{(2n)!}{2^n (n!)^2} exceeds the Chebyshev bound asymptotically by a factor on the order of \sqrt{n}. This scaling of Chebyshev polynomials is fundamental in minimax approximation, where \hat{T}_n(x) provides the tightest bound on the deviation from zero among monic polynomials, enabling optimal error estimates in polynomial interpolation and series expansion.
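
A direct numerical comparison of the monic Chebyshev and monic Legendre polynomials on a fine grid illustrates the gap (a sketch assuming NumPy; the grid-based maximum approximates the true sup norm):

```python
import numpy as np
from math import factorial
from numpy.polynomial import chebyshev as C, legendre as L

n = 8
xs = np.linspace(-1, 1, 200001)
e_n = np.eye(n + 1)[n]                                   # selects the degree-n basis element

monic_cheb = C.chebval(xs, e_n) / 2**(n - 1)             # T_n / 2^(n-1)
lead_P = factorial(2 * n) / (2**n * factorial(n)**2)     # leading coefficient of P_n
monic_leg = L.legval(xs, e_n) / lead_P

print(np.max(np.abs(monic_cheb)), 2.0**(1 - n))   # both ~0.0078 = 2^(1-n)
print(np.max(np.abs(monic_leg)))                  # ~0.0199, strictly larger
```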

Equioscillation theorem

The equioscillation theorem, also known as Chebyshev's alternation theorem, characterizes the unique best uniform approximation to a continuous function by polynomials of a given maximum degree. For a continuous function f on a compact interval [a, b], let p be the polynomial of degree at most n that minimizes the uniform norm \|f - p\|_\infty. Then the error f - p attains its maximum absolute value \|f - p\|_\infty at at least n+2 points x_0 < x_1 < \cdots < x_{n+1} in [a, b], with the signs of f(x_i) - p(x_i) alternating between consecutive points. This property ensures uniqueness and provides a practical criterion for verifying optimal approximations. The Chebyshev polynomial T_n(x) of the first kind exemplifies this theorem in the context of approximating the zero function by polynomials of degree less than n. The monic polynomial of degree n with the minimal uniform norm on [-1, 1] is \hat{T}_n(x) = 2^{-(n-1)} T_n(x), and its values equioscillate, reaching \pm 2^{-(n-1)} with alternating signs, precisely at the n+1 extrema of T_n(x) in [-1, 1]. This minimal deviation follows directly from the equioscillation property, as any other monic polynomial of degree n would have a larger maximum norm by the theorem's characterization. The theorem extends to arbitrary compact intervals via an affine transformation that maps [a, b] to [-1, 1]. Specifically, define the linear map x = \frac{b-a}{2} t + \frac{a+b}{2} for t \in [-1, 1]; then the best approximation on [a, b] corresponds to the best approximation on [-1, 1] under this change of variables, preserving the equioscillation structure after rescaling. Pafnuty Chebyshev first developed this circle of ideas in his 1854 study of mechanical linkages, where minimizing deviations in approximate representations required optimal polynomial approximations; he applied the equioscillation idea to ensure the smallest possible maximum error in such systems. A concrete illustration arises in approximating the monomial x^{n+1} on [-1, 1] by polynomials of degree less than or equal to n. The error in the best uniform approximation is proportional to T_{n+1}(x), which equioscillates at n+2 points, namely \cos\left(\frac{k\pi}{n+1}\right) for k = 0, 1, \dots, n+1, attaining \pm 1 alternately. This confirms the optimality via the theorem, with the leading coefficient determining the exact constant of proportionality.
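
For the smallest nontrivial case, the best degree-2 approximation to x^3 on [-1, 1] is (3/4)x, and the error x^3 - (3/4)x = T_3(x)/4 alternates between \pm 1/4 at the four points \cos(k\pi/3) (a sketch assuming NumPy):

```python
import numpy as np

xs = np.cos(np.arange(4) * np.pi / 3)    # 1, 1/2, -1/2, -1
err = xs**3 - 0.75 * xs                  # error of the best degree-2 approximation
print(err)                               # [ 0.25 -0.25  0.25 -0.25]: alternating signs
```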

Special representations

As determinants

Chebyshev polynomials of the first kind T_n(x) can be represented as the determinant of an n \times n tridiagonal matrix derived from their three-term recurrence relation T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x), with initial conditions T_0(x) = 1 and T_1(x) = x. The matrix has x in the (1,1) entry, 2x in all other diagonal entries, and -1 in the off-diagonal entries (sub- and super-diagonals). This structure accommodates the initial condition at degree 1 while enforcing the recurrence for higher degrees through cofactor expansion of the tridiagonal form. For example, when n=3, the matrix is \begin{pmatrix} x & -1 & 0 \\ -1 & 2x & -1 \\ 0 & -1 & 2x \end{pmatrix}, and its determinant is x(4x^2 - 1) - 2x = 4x^3 - 3x = T_3(x). This explicit computation illustrates how the modified (1,1) entry (x rather than 2x) halves the leading coefficient from 2^n to 2^{n-1}, producing T_n(x) instead of U_n(x). Chebyshev polynomials of the second kind U_n(x) follow from the same tridiagonal structure with 2x in every diagonal entry, matching their recurrence U_n(x) = 2x U_{n-1}(x) - U_{n-2}(x), with U_0(x) = 1 and U_1(x) = 2x. These determinant representations stem from the broader theory of orthogonal polynomials, where the polynomials are the characteristic polynomials of Jacobi (tridiagonal) matrices encoding the recurrence coefficients. For Chebyshev polynomials, the absence of constant shifts along the diagonal reflects their symmetry with respect to the origin. Additionally, the recurrences link to continued fraction expansions of the orthogonalizing measures, such as (1 - x^2)^{-1/2} \, dx for T_n(x) and \sqrt{1 - x^2} \, dx for U_n(x), where the convergents' denominators are these determinants, providing a bridge to moment problems and spectral theory.
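
A minimal numerical check of this tridiagonal determinant representation (a sketch assuming NumPy; T_via_det is a hypothetical helper name):

```python
import numpy as np

def T_via_det(n, x):
    """Determinant of the n x n tridiagonal matrix with x in the (1,1) entry,
    2x on the rest of the diagonal, and -1 on the sub- and super-diagonals."""
    M = (np.diag([x] + [2.0 * x] * (n - 1))
         + np.diag([-1.0] * (n - 1), 1) + np.diag([-1.0] * (n - 1), -1))
    return np.linalg.det(M)

x = 0.7
print(T_via_det(5, x), np.cos(5 * np.arccos(x)))   # both give T_5(0.7)
```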

Relations to other polynomial families

Chebyshev polynomials of the first kind T_n(x) arise as a special case of the Jacobi polynomials P_n^{(\alpha, \beta)}(x) when the parameters are set to \alpha = \beta = -1/2. Specifically, P_n^{(-1/2, -1/2)}(x) = \frac{(2n)!}{2^{2n} (n!)^2} T_n(x), establishing a direct proportional relationship up to a scaling factor that depends on n. Similarly, the Chebyshev polynomials of the second kind U_n(x) correspond to Jacobi polynomials with \alpha = \beta = 1/2, given by U_n(x) = (n+1) \frac{2^{2n} (n!)^2}{(2n+1)!} P_n^{(1/2, 1/2)}(x). The Legendre polynomials, another classical family, are the special case \alpha = \beta = 0, with P_n(x) = P_n^{(0,0)}(x). Chebyshev and Legendre polynomials therefore sit within the same Jacobi framework, distinguished only by the parameter values -1/2, +1/2, and 0, which highlights their shared role within the hierarchy of classical orthogonal polynomials. Both Chebyshev and Jacobi polynomials admit representations in terms of the Gauss hypergeometric function {}_2F_1. The explicit form for the Chebyshev polynomial of the first kind is T_n(x) = {}_2F_1\left( -n, n; \frac{1}{2}; \frac{1 - x}{2} \right), while the general Jacobi polynomial is P_n^{(\alpha, \beta)}(x) = \frac{(\alpha + 1)_n}{n!} {}_2F_1\left( -n, n + \alpha + \beta + 1; \alpha + 1; \frac{1 - x}{2} \right). Substituting \alpha = \beta = -1/2 into the Jacobi expression recovers the Chebyshev form, up to the normalization factor (\alpha + 1)_n / n!, underscoring their unified hypergeometric structure. In the discrete domain, Hahn polynomials serve as a generalization: the discrete Chebyshev (Gram) polynomials t_n(x, N) arise as the Hahn polynomials Q_n(x; \alpha, \beta, N) with \alpha = \beta = 0, where N is the discrete support size. This connection embeds Chebyshev-type polynomials within the Askey scheme's discrete branch. At the apex of the Askey scheme, Wilson polynomials encompass the continuous classical families through limiting processes. Taking the limit q \to 1 in the Askey-Wilson (q-Wilson) polynomials yields the Wilson polynomials, and further limits reduce these to the Jacobi polynomials and hence to the Chebyshev polynomials as the special cases \alpha = \beta = \mp 1/2. This hierarchical limit relation positions Chebyshev polynomials as foundational within the full spectrum of hypergeometric orthogonal polynomials.

Examples and computations

Low-degree polynomials of the first kind

The Chebyshev polynomials of the first kind, denoted T_n(x), are defined for low degrees explicitly as follows. For degree 0:
T_0(x) = 1
For degree 1:
T_1(x) = x
For degree 2:
T_2(x) = 2x^2 - 1
For degree 3:
T_3(x) = 4x^3 - 3x
For degree 4:
T_4(x) = 8x^4 - 8x^2 + 1
For degree 5:
T_5(x) = 16x^5 - 20x^3 + 5x
On the interval [-1, 1], these polynomials satisfy |T_n(x)| \leq 1, exhibiting n monotone half-oscillations between -1 and 1; the number of oscillations increases with n while the amplitude remains bounded by 1.
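
The listed polynomials can be regenerated directly from the recurrence (a sketch assuming SymPy):

```python
from sympy import symbols, expand

x = symbols('x')
T = [1, x]                                   # T_0, T_1
for n in range(2, 6):
    T.append(expand(2 * x * T[-1] - T[-2]))
for n, p in enumerate(T):
    print(f"T_{n}(x) =", p)                  # reproduces the list above
```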

Low-degree polynomials of the second kind

The Chebyshev polynomials of the second kind U_n(x) are defined for low degrees by the following explicit expressions, derived from their generating function or recurrence relation.
Degree n    U_n(x)
0           1
1           2x
2           4x^2 - 1
3           8x^3 - 4x
4           16x^4 - 12x^2 + 1
5           32x^5 - 32x^3 + 6x
These polynomials, when graphed over the interval [-1, 1], display n zeros located inside the interval at points x_k = \cos\left( \frac{k \pi}{n+1} \right) for k = 1, \dots, n. The graphs show increasing amplitudes toward the endpoints, with |U_n(\pm 1)| = n+1, reflecting their trigonometric representation U_n(\cos \theta) = \frac{\sin((n+1)\theta)}{\sin \theta}.
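
The stated zeros and the endpoint growth can be checked numerically from the trigonometric form (a sketch assuming NumPy; U is a hypothetical helper):

```python
import numpy as np

def U(n, x):
    """Trigonometric form of U_n on (-1, 1)."""
    theta = np.arccos(x)
    return np.sin((n + 1) * theta) / np.sin(theta)

n = 5
roots = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(np.max(np.abs(U(n, roots))))        # ~1e-15: these are the zeros of U_5
print(U(n, 0.999999))                     # approaches U_5(1) = 6 near the endpoint
```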

Applications as basis functions

Expansion of functions

Chebyshev polynomials of the first kind provide a basis for expanding functions on the interval [-1, 1] through the Fourier-Chebyshev series, which leverages their orthogonality with respect to the weight function w(x) = (1 - x^2)^{-1/2}. For a square-integrable function f on this interval, the expansion takes the form f(x) = \sum_{k=0}^{\infty} a_k T_k(x), where the coefficients a_k are uniquely determined by the orthogonality property \int_{-1}^{1} T_m(x) T_n(x) w(x) \, dx, which equals \pi for m = n = 0, \pi/2 for m = n \geq 1, and 0 for m \neq n. The explicit formulas for these coefficients are a_0 = \frac{1}{\pi} \int_{-1}^{1} \frac{f(x)}{\sqrt{1 - x^2}} \, dx, \quad a_k = \frac{2}{\pi} \int_{-1}^{1} \frac{f(x) T_k(x)}{\sqrt{1 - x^2}} \, dx \quad (k \geq 1). These integrals arise directly from projecting f onto the Chebyshev basis using the inner product defined by the weight w(x). For functions f that are, for example, Lipschitz continuous or of bounded variation on [-1, 1], the series converges uniformly to f(x) across the entire interval. In cases where f is piecewise continuous but has jump discontinuities within or at the endpoints of [-1, 1], the partial sums of the series exhibit the Gibbs phenomenon near these points, manifesting as oscillations with an overshoot amplitude of approximately 8.95% of the jump height, analogous to the behavior in Fourier series. This effect persists regardless of the truncation level but diminishes away from the discontinuities. To compute the coefficients numerically for practical approximations, the discrete cosine transform (DCT) offers an efficient method, as the Chebyshev expansion coefficients correspond to DCT coefficients of f sampled at Chebyshev points, enabling fast evaluation via the fast Fourier transform (FFT) in O(n \log n) operations for n terms. For a truncated series \sum_{k=0}^{N} a_k T_k(x), the approximation error satisfies |f(x) - p_N(x)| \leq \sum_{k=N+1}^{\infty} |a_k| for all x \in [-1, 1], with the tail sum providing a computable bound; moreover, for f analytic in an elliptical region containing [-1, 1], the coefficients decay geometrically as |a_k| \sim \rho^{-k} for some \rho > 1, yielding convergence rates superior to the algebraic rates obtained for less smooth functions.
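
In practice the coefficients are approximated from samples at Chebyshev points; the following sketch (assuming NumPy; cheb_coeffs is a hypothetical helper name) uses the discrete cosine sums that a DCT would compute:

```python
import numpy as np

def cheb_coeffs(f, N):
    """Approximate a_0..a_N from samples of f at the N+1 Chebyshev-Gauss points,
    using the discrete analogue of the coefficient integrals."""
    j = np.arange(N + 1)
    theta = (j + 0.5) * np.pi / (N + 1)
    fx = f(np.cos(theta))
    a = np.array([2.0 / (N + 1) * np.sum(fx * np.cos(k * theta))
                  for k in range(N + 1)])
    a[0] *= 0.5                             # a_0 carries the 1/pi normalization
    return a

a = cheb_coeffs(np.exp, 15)
xs = np.linspace(-1, 1, 5)
print(np.max(np.abs(np.polynomial.chebyshev.chebval(xs, a) - np.exp(xs))))  # ~1e-15
```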

Chebyshev series and partial sums

A Chebyshev series expansion of a function f(x) on [-1, 1] is given by f(x) = \sum_{k=0}^\infty a_k T_k(x), where T_k(x) are the Chebyshev polynomials of the first kind and the coefficients a_k are determined by the orthogonality relations. The partial sum S_N(f)(x) = \sum_{k=0}^N a_k T_k(x) serves as a polynomial approximation of degree N to f(x), providing a practical truncation for computational purposes. This truncation captures the dominant low-frequency components of f, with the error f(x) - S_N(f)(x) decreasing rapidly for analytic functions due to the fast decay of the Chebyshev coefficients. Evaluating the partial sum directly by computing successive powers of x is inefficient, as it requires O(N^2) operations. The Clenshaw algorithm addresses this by exploiting the three-term recurrence relation of Chebyshev polynomials, T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), to compute the sum via a backward recurrence: initialize b_{N+2} = b_{N+1} = 0, then for k = N, \dots, 0, set b_k = a_k + 2x b_{k+1} - b_{k+2}, yielding S_N(f)(x) = \frac{1}{2} (a_0 + b_0 - b_2) (with b_2 = 0 if N < 2). This method achieves O(N) complexity and is numerically stable for well-conditioned coefficients. For functions with symmetry, economized series reduce the number of terms needed. If f(x) is even, all coefficients a_k with odd k vanish, since the odd-degree T_k(x) are odd functions and their integral against the even weight (1 - x^2)^{-1/2} vanishes; the series then involves only even T_k(x), which can be re-expressed via T_{2n}(x) = T_n(y) with y = 2x^2 - 1, halving the effective degree. Similarly, for odd f(x), only odd terms appear, and the series can be rewritten for f(x)/x (an even function) and then scaled by x. These reductions exploit parity to improve efficiency without loss of approximation power. In discrete settings, such as when coefficients are computed from samples at Chebyshev points, aliasing arises because the discrete inner product identifies higher-degree polynomials with lower ones. Specifically, polynomials of degree greater than N project onto the span of \{T_0, \dots, T_N\}, causing high-frequency components to fold into low-frequency modes; for example, at the N+1 Chebyshev-Gauss points, T_{N+1+m}(x) takes the same nodal values as -T_{N+1-m}(x), so energy at degree N+1+m folds onto degree N+1-m. This phenomenon, analogous to Nyquist aliasing in Fourier methods, can introduce oscillations or attenuate high modes but is mitigated by over-sampling or filtering. The partial sum S_N(f) provides a near-minimax approximation in the uniform norm, meaning its maximum error \|f - S_N(f)\|_\infty is close to the best possible uniform approximation error E_N(f) by any degree-N polynomial. The truncation error satisfies \|f - S_N(f)\|_\infty \leq \sum_{k=N+1}^\infty |a_k|, since |T_k(x)| \leq 1, and the ratio \|f - S_N(f)\|_\infty / E_N(f) is bounded by a factor that grows only logarithmically in N, though S_N(f) is not exactly the minimax polynomial. This property makes Chebyshev partial sums preferable for uniform approximation tasks over many other orthogonal expansions.
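
A compact implementation of the Clenshaw recurrence described above (a sketch in Python, checked here against NumPy's chebval):

```python
import numpy as np

def clenshaw(a, x):
    """Evaluate sum_k a[k] T_k(x) by the backward (Clenshaw) recurrence."""
    b1, b2 = 0.0, 0.0                     # b_{k+1}, b_{k+2}
    for k in range(len(a) - 1, 0, -1):
        b1, b2 = a[k] + 2 * x * b1 - b2, b1
    return a[0] + x * b1 - b2             # same as (a_0 + b_0 - b_2) / 2

coeffs = [1.0, 0.5, -0.25, 0.125]
print(clenshaw(coeffs, 0.3), np.polynomial.chebyshev.chebval(0.3, coeffs))  # equal
```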

Spectral methods overview

Spectral methods employ Chebyshev polynomials as basis functions to approximate solutions to differential equations, particularly on bounded domains like [-1,1], by expanding the solution in a series of these orthogonal polynomials and enforcing the equation through projection or interpolation techniques. Two prominent approaches are the collocation (pseudospectral) method and the tau method, which leverage the minimax properties and orthogonality of Chebyshev polynomials to achieve high accuracy. In the collocation method, the solution is approximated as a truncated Chebyshev series, and the differential equation is enforced exactly at specific collocation points, typically the Chebyshev-Gauss-Lobatto points (extrema of the highest-degree polynomial) or Chebyshev-Gauss points (roots), which cluster near the boundaries to resolve boundary layers effectively in boundary value problems. This interpolation-based enforcement leads to a system of algebraic equations solved for the expansion coefficients, with boundary conditions incorporated directly at the endpoints. The tau method, originally formulated by Lanczos, projects the residual of the differential equation onto the space of lower-degree Chebyshev polynomials, requiring orthogonality to polynomials up to degree N-2 (for a second-order equation), while using the remaining conditions to satisfy boundary constraints through tau parameters that perturb the equation slightly. This Galerkin-like projection exploits the discrete orthogonality of Chebyshev polynomials, resulting in a banded linear system for the coefficients. These methods offer spectral accuracy, where the error decreases exponentially with the polynomial degree N for smooth solutions, far surpassing the algebraic convergence of finite difference schemes, due to the global nature of the polynomial basis and the rapid decay of Chebyshev coefficients for analytic functions. Exponential convergence is particularly pronounced for problems with smooth data, enabling high-fidelity simulations with modest N. A representative setup involves solving the boundary value problem -u''(x) + u(x) = f(x) on [-1,1] with u(\pm 1)=0, where the approximate solution is expanded as u_N(x) = \sum_{k=0}^N a_k T_k(x) and two of the equations determining the coefficients are replaced by the boundary conditions, which are easy to impose since T_k(\pm 1) = (\pm 1)^k, with the differential equation projected or collocated for the remaining degrees of freedom. In modern applications, Chebyshev spectral methods are widely used in computational fluid dynamics for simulating viscous flows in channels or pipes, benefiting from efficient handling of non-periodic boundaries. They also appear in quantum mechanics, such as solving the radial Schrödinger equation for the hydrogen atom on a mapped semi-infinite domain, where the polynomial basis captures the exponential decay of wave functions accurately.
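
The representative boundary value problem above can be solved with a small collocation-style sketch (assuming NumPy and a manufactured solution u(x) = (1 - x^2)e^x, for which f(x) = (2 + 4x)e^x; this is an illustration, not a production solver):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)       # Chebyshev-Gauss-Lobatto points

A = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    ck = np.eye(N + 1)[k]                      # coefficient vector of basis T_k
    A[:, k] = -C.chebval(x, C.chebder(ck, 2)) + C.chebval(x, ck)   # -T_k'' + T_k at nodes
rhs = (2 + 4 * x) * np.exp(x)                  # f for the manufactured solution

for row, xb in [(0, 1.0), (N, -1.0)]:          # replace two rows by u(+-1) = 0
    A[row, :] = [C.chebval(xb, np.eye(N + 1)[k]) for k in range(N + 1)]
    rhs[row] = 0.0

a = np.linalg.solve(A, rhs)                    # Chebyshev coefficients of u_N
err = np.max(np.abs(C.chebval(x, a) - (1 - x**2) * np.exp(x)))
print(f"max nodal error: {err:.2e}")           # spectrally small for modest N
```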

Modified Chebyshev polynomials

Modified Chebyshev polynomials encompass variants of the standard Chebyshev polynomials tailored for specific intervals, weight functions, or numerical stability requirements, often through affine transformations or multiplicative damping factors. The shifted Chebyshev polynomials of the first kind, denoted T_n^*(x), are obtained via an affine transformation of the standard polynomials and defined as T_n^*(x) = T_n(2x - 1) for x \in [0,1]. These polynomials maintain orthogonality on the interval [0,1] with respect to the weight function w(x) = 1 / \sqrt{x(1-x)}. The roots of T_n^*(x) are explicitly given by x_k = \frac{1 + \cos \left( \frac{(2k-1)\pi}{2n} \right)}{2} for k = 1, 2, \dots, n, which cluster near the endpoints of [0,1] to minimize interpolation errors in numerical approximations. Modified Chebyshev polynomials of the first kind for even orders arise in contexts requiring adjusted boundary conditions, such as in quantum mechanics where they model absorption to stabilize simulations of wave propagation and system evolution. For instance, in quantum conductance calculations, these even-order variants incorporate complex absorbing potentials to enhance computational efficiency while preserving accuracy in open quantum systems. Damped Chebyshev polynomials, typically of the form e^{-a x^2} T_n(x), adapt the basis to Gaussian weights e^{-a x^2}, enabling orthogonal expansions suited to problems with Gaussian-distributed uncertainties or damping. These variants find applications in econometrics for multi-variable regression, where modified Chebyshev polynomials of class 2 support cascade regression and variable pruning to yield robust principal component models with improved predictive performance. In signal processing, shifted and modified forms are employed for approximations on non-symmetric intervals, facilitating distributed processing of graph signals and efficient filtering in sensor networks.
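
The shifted polynomials and their root formula can be checked numerically (a sketch assuming NumPy; T_shifted is a hypothetical helper name):

```python
import numpy as np

def T_shifted(n, x):
    """Shifted Chebyshev polynomial T_n*(x) = T_n(2x - 1) on [0, 1]."""
    return np.cos(n * np.arccos(2 * x - 1))

n = 5
k = np.arange(1, n + 1)
roots = (1 + np.cos((2 * k - 1) * np.pi / (2 * n))) / 2
print(np.max(np.abs(T_shifted(n, roots))))     # ~1e-15: zeros lie inside [0, 1]
```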

Factorizations and irreducibility

The Chebyshev polynomials of the first kind, T_n(x), are irreducible over the field of rational numbers \mathbb{Q} if and only if n = 2^k for some nonnegative integer k. For general n, T_n(x) factors into irreducible polynomials over \mathbb{Q}, with the irreducible factors being (up to rescaling of the variable) the minimal polynomials of 2\cos(2\pi/m) for certain integers m related to the divisors of n. Specifically, the factorization is given by 2 T_n\left( \frac{x}{2} \right) = \prod_{\substack{d \mid n \\ n/d \ \text{odd}}} \Psi_{4d}(x), where \Psi_m(x) denotes the m-th cosine polynomial, the minimal polynomial over \mathbb{Q} of 2\cos(2\pi/m), obtained from the m-th cyclotomic polynomial \Phi_m(z) via the substitution x = z + 1/z. This relation connects the factorization directly to cyclotomic fields, as each \Psi_m(x) has degree \phi(m)/2 (for m > 2), where \phi is Euler's totient function, reflecting the degree of the maximal real subfield inside the cyclotomic field \mathbb{Q}(\zeta_m). For the Chebyshev polynomials of the second kind, U_n(x), the roots are explicitly \cos(k\pi/(n+1)) for k = 1, \dots, n. Algebraically, over \mathbb{Q}, U_n(x) factors into irreducible components related to cyclotomic polynomials via a similar substitution: U_n(x/2) = \prod_{\substack{d \mid 2(n+1) \\ d > 2}} \Psi_d(x), a product of cosine polynomials over the divisors d of 2(n+1) with d > 2, excluding the low-degree factors \Psi_1 and \Psi_2. The degrees of these irreducible factors follow from \phi(d)/2. Unlike T_n(x), U_n(x) is reducible over \mathbb{Q} for every n \geq 2. The splitting field of T_n(x) over \mathbb{Q} is the maximal real subfield of the 4n-th cyclotomic field \mathbb{Q}(\zeta_{4n}), and its Galois group is isomorphic to (\mathbb{Z}/4n\mathbb{Z})^\times / \{\pm 1\}. In particular, when n = p is an odd prime, the Galois group of the splitting field of T_p(x) is cyclic of order p-1, reflecting the cyclic nature of (\mathbb{Z}/p\mathbb{Z})^\times. For U_n(x), the splitting field is likewise the maximal real subfield of \mathbb{Q}(\zeta_{2(n+1)}), with structure determined by (\mathbb{Z}/2(n+1)\mathbb{Z})^\times / \{\pm 1\}. These Galois-theoretic properties underscore the deep connection between Chebyshev polynomials and cyclotomic extensions.
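
The irreducibility criterion can be observed computationally by factoring T_n over \mathbb{Q} (a sketch assuming SymPy): a single irreducible factor appears exactly when n is a power of 2.

```python
from sympy import symbols, chebyshevt, factor_list

x = symbols('x')
for n in range(1, 13):
    factors = factor_list(chebyshevt(n, x))[1]   # [(irreducible factor, multiplicity), ...]
    print(n, len(factors))                       # one factor exactly when n is a power of 2
```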