
Orthogonal polynomials

Orthogonal polynomials constitute a sequence of polynomials \{P_n(x)\}_{n=0}^\infty, where each P_n(x) is a polynomial of exact degree n with a positive leading coefficient, that are orthogonal with respect to an inner product defined by a positive weight function w(x) on a finite or infinite interval I \subseteq \mathbb{R}, satisfying \int_I P_n(x) P_m(x) w(x) \, dx = 0 for n \neq m and often normalized such that \int_I [P_n(x)]^2 w(x) \, dx = 1. This orthogonality condition arises from the L^2 inner product \langle f, g \rangle_w = \int_I f(x) g(x) w(x) \, dx, where w(x) > 0 ensures the polynomials form an orthogonal basis for the space of square-integrable functions with respect to this measure. These polynomials, first systematically studied in the 19th century by mathematicians including Carl Gustav Jacobi and Pafnuty Chebyshev, form the foundation of classical families including the Legendre, Hermite, Laguerre, Jacobi, and Chebyshev polynomials, each associated with specific weight functions and intervals (e.g., the Legendre polynomials on [-1, 1] with w(x) = 1). Their development was advanced by contributions from Eduard Heine and Thomas Stieltjes, culminating in Gábor Szegő's comprehensive 1939 monograph that established much of the modern theory. Orthogonal polynomials satisfy a three-term recurrence relation of the form P_n(x) = (A_n x + B_n) P_{n-1}(x) - C_n P_{n-2}(x) with A_n > 0 and C_n > 0, which facilitates their computation and analysis. Additionally, they possess n real, distinct zeros within the interval of orthogonality, which interlace between consecutive polynomials, a property crucial for applications in root-finding and numerical quadrature. The significance of orthogonal polynomials extends across pure and applied mathematics, serving as essential tools in approximation theory for expanding functions in series (e.g., Fourier-Legendre series), for Gaussian quadrature rules that exactly integrate polynomials up to degree 2n-1, and for solving Sturm-Liouville eigenvalue problems and quantum mechanical systems like the quantum harmonic oscillator (via Hermite polynomials).
They also connect to special functions such as hypergeometric and basic hypergeometric series, to continued fractions via Stieltjes transforms, and to moment problems in probability and statistics. In contemporary research, extensions to multiple orthogonal polynomials and non-Hermitian variants have found applications in integrable systems and random matrix theory. The Christoffel-Darboux formula further underscores their utility, providing a closed form for the kernel sum: \sum_{k=0}^n \frac{P_k(x) P_k(y)}{h_k} = \frac{k_n}{k_{n+1} h_n} \frac{P_{n+1}(x) P_n(y) - P_n(x) P_{n+1}(y)}{x - y}, where h_k = \int_I [P_k(x)]^2 w(x) \, dx and k_n denotes the leading coefficient of P_n.

Introduction and Fundamentals

Historical Development

The study of orthogonal polynomials originated in the late 18th century amid efforts to expand functions in series for solving physical problems in celestial mechanics and gravitation. In 1782, Pierre-Simon Laplace introduced the generating function approach in his analysis of planetary perturbations, laying the groundwork for the polynomials later formalized by others. This work connected series expansions to potential theory, motivating subsequent developments in orthogonal systems. In the same year, Adrien-Marie Legendre developed the Legendre polynomials in his memoir on the attraction of homogeneous spheroids, applying them directly to expansions of gravitational potentials in spherical coordinates. Legendre's contributions emphasized their role in solving Laplace's equation for axisymmetric problems, marking the first systematic use of such polynomials in mathematical physics. The 19th century saw significant advancements through Pafnuty Chebyshev's investigations into polynomial approximation. In 1859, Chebyshev explored minimax properties of polynomials for discrete measures, establishing foundational results on best uniform approximations and introducing discrete orthogonal analogs that influenced later interpolation and quadrature techniques. Early 20th-century progress came from David Hilbert's work on integral equations, where around 1907 he developed methods for orthogonal expansions in infinite-dimensional settings, integrating them into the emerging theory of Hilbert spaces to address Fredholm-type problems. This systematization shifted focus from specific cases to general abstract frameworks, enabling applications far beyond the classical families. George Pólya and Gábor Szegő advanced the theory in the early 20th century, with their collaborative 1925 problem volume incorporating early treatments of orthogonal polynomials and their connections to complex variables and inequalities. Szegő's 1939 monograph Orthogonal Polynomials provided the first comprehensive exposition, synthesizing properties, asymptotics, and applications across branches like continued fractions and moment problems. Post-World War II developments extended classical theory to generalizations, including multivariate cases.
In 1975, Tom H. Koornwinder introduced two-variable analogs of the Jacobi polynomials, constructing orthogonal systems on non-standard domains and bridging to hypergeometric functions. The field evolved from these classical roots into a broad range of modern applications, with contributions from diverse regions; for instance, the Indian mathematician Ambikeshwar Sharma advanced approximation theory after 1950 through studies on lacunary interpolation and orthogonal expansions.

Definition and Orthogonality Condition

Orthogonal polynomials arise in the context of Hilbert spaces of square-integrable functions. Consider the space L^2(\mu) consisting of measurable functions f on \mathbb{R} (or a subset thereof) such that \|f\|^2 = \int f(x)^2 \, d\mu(x) < \infty, where \mu is a positive Borel measure with finite moments \int |x|^n \, d\mu(x) < \infty for all n \in \mathbb{N}. This space is complete with respect to the norm induced by the inner product \langle f, g \rangle = \int f(x) g(x) \, d\mu(x), ensuring that Cauchy sequences converge to elements within the space, which is essential for series expansions and approximation properties. A sequence of polynomials \{P_n(x)\}_{n=0}^\infty is called orthogonal with respect to \mu if each P_n has exact degree n and satisfies the orthogonality condition \langle P_m, P_n \rangle = 0 for all m \neq n, with \langle P_n, P_n \rangle = h_n > 0. The measure \mu can be absolutely continuous, with density w(x) \geq 0 so that d\mu(x) = w(x) \, dx over an interval, or more general, encompassing discrete measures supported on countable points \{x_k\} with masses w_k > 0 (where \langle f, g \rangle = \sum_k f(x_k) g(x_k) w_k) and singular continuous measures without densities, such as those supported on Cantor sets. Normalization of the sequence can be chosen in various ways: monic polynomials have leading coefficient 1; orthonormal polynomials satisfy h_n = 1; other classical scalings fix specific values like P_n(1) = 1. Given a fixed positive measure \mu, the orthogonal polynomials are unique up to a nonzero scalar multiple for each degree, as they are obtained by orthogonalizing the monomial basis via processes like Gram-Schmidt. However, for non-classical measures, the underlying measure may not be uniquely determined by its moments (e.g., in Stieltjes-type indeterminate moment problems).
Modern extensions post-2000 consider orthogonality with respect to signed or complex measures, where the inner product may involve complex weights, leading to polynomials whose zeros can fill regions in the complex plane, as analyzed via Riemann-Hilbert methods for rotationally symmetric potentials.
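As a concrete check of the definition, the following pure-Python sketch (exact rational arithmetic via the standard library's fractions module; the helper names are illustrative) verifies the orthogonality condition for the first few Legendre polynomials, whose closed forms are standard, under the weight w(x) = 1 on [-1, 1]:

```python
from fractions import Fraction as F

# First few Legendre polynomials as coefficient lists (constant term first);
# these closed forms are standard for the weight w(x) = 1 on [-1, 1].
P = [
    [F(1)],                            # P0 = 1
    [F(0), F(1)],                      # P1 = x
    [F(-1, 2), F(0), F(3, 2)],         # P2 = (3x^2 - 1)/2
    [F(0), F(-3, 2), F(0), F(5, 2)],   # P3 = (5x^3 - 3x)/2
]

def poly_mul(a, b):
    out = [F(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def integrate(coeffs, lo=F(-1), hi=F(1)):
    # Exact integral of a polynomial over [lo, hi].
    return sum(c * (hi ** (k + 1) - lo ** (k + 1)) / (k + 1)
               for k, c in enumerate(coeffs))

def inner(m, n):
    # <P_m, P_n> with respect to w(x) = 1 on [-1, 1].
    return integrate(poly_mul(P[m], P[n]))

# Orthogonality, with squared norms h_n = 2 / (2n + 1).
for m in range(4):
    for n in range(4):
        assert inner(m, n) == (F(2, 2 * n + 1) if m == n else 0)
```

Because the arithmetic is exact, the assertions confirm the orthogonality relation with no numerical tolerance.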

Construction Methods

One primary method for constructing orthogonal polynomials with respect to a given positive measure \mu on the real line is the Gram-Schmidt orthogonalization process applied to the monomial basis \{1, x, x^2, \dots \}. This iterative procedure generates a sequence of polynomials P_n(x) that satisfy the orthogonality condition \langle P_m, P_n \rangle = \int P_m(x) P_n(x) \, d\mu(x) = 0 for m \neq n. At each step, the monomial x^n is projected onto the span of the previous orthogonal polynomials and the projection is subtracted to yield P_n(x) = x^n - \sum_{k=0}^{n-1} \frac{\langle x^n, P_k \rangle}{\langle P_k, P_k \rangle} P_k(x), with P_0(x) = 1. This approach produces monic polynomials directly and incorporates the measure through the inner products, making it versatile for arbitrary measures. A complementary theoretical and computational method uses the moments of the measure, defined as \mu_k = \int x^k \, d\mu(x) for k = 0, 1, 2, \dots. These moments populate the entries of Hankel matrices H_n = (\mu_{i+j})_{0 \leq i,j \leq n-1}, and the coefficients of the monic orthogonal polynomials P_n(x) = x^n + \sum_{k=0}^{n-1} a_{n,k} x^k can be found via the linear systems H_n \mathbf{c}_n = -\mathbf{m}_n, where \mathbf{c}_n collects the coefficients a_{n,k} and \mathbf{m}_n = (\mu_n, \mu_{n+1}, \dots, \mu_{2n-1})^T is the vector of shifted moments. Determinant-based expressions, such as Heine's formula representing P_n(x) as a ratio of Hankel-type determinants, provide explicit formulas, though numerical implementation often favors the linear solve for efficiency. Favard's theorem guarantees that any sequence of monic polynomials satisfying a three-term recurrence x P_n(x) = P_{n+1}(x) + \alpha_n P_n(x) + \beta_n P_{n-1}(x) with \beta_n > 0 corresponds to orthogonal polynomials for a unique positive measure, linking moment computations to recurrence-based generation. The moment approach offers an analytic framework for construction, particularly suited to measures with known closed forms.
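The Gram-Schmidt construction above can be sketched in exact arithmetic. Assuming the Lebesgue measure on [-1, 1], whose moments are \mu_k = 0 for odd k and 2/(k+1) for even k, the procedure recovers the monic Legendre polynomials (pure-Python sketch; helper names are illustrative):

```python
from fractions import Fraction as F

# Moments of the Lebesgue measure on [-1, 1]: mu_k = integral of x^k dx.
mu = lambda k: F(0) if k % 2 else F(2, k + 1)

def inner(a, b):
    # <a, b> expressed through the moments mu_{i+j} of the measure.
    return sum(ai * bj * mu(i + j)
               for i, ai in enumerate(a) for j, bj in enumerate(b))

def gram_schmidt(n):
    # Orthogonalize the monomial basis 1, x, ..., x^n into monic P_0, ..., P_n.
    basis = []
    for deg in range(n + 1):
        p = [F(0)] * deg + [F(1)]              # the monomial x^deg
        for q in basis:
            c = inner(p, q) / inner(q, q)      # projection coefficient
            p = [pi - c * (q[i] if i < len(q) else F(0))
                 for i, pi in enumerate(p)]
        basis.append(p)
    return basis

P = gram_schmidt(3)
# Recovers the monic Legendre polynomials, e.g. P_2 = x^2 - 1/3, P_3 = x^3 - (3/5) x.
assert P[2] == [F(-1, 3), F(0), F(1)]
assert P[3] == [F(0), F(-3, 5), F(0), F(1)]
```

Swapping in a different moment function `mu` adapts the same sketch to any measure with computable moments.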
A generating function, such as the ordinary G(x,t) = \sum_{n=0}^\infty P_n(x) t^n or the exponential variant \sum_{n=0}^\infty P_n(x) \frac{t^n}{n!}, can be derived from closed-form properties of the measure or by solving associated differential equations. For instance, kernel functions like K(x,t) = \sum_{n=0}^\infty \frac{P_n(x) P_n(t)}{h_n} (with h_n = \langle P_n, P_n \rangle) reproduce the polynomials via coefficient extraction, enabling explicit construction when the kernel is available. This method shines in theoretical derivations for families with hypergeometric representations, though numerical extraction requires care for stability. Numerical implementations of these methods, especially Gram-Schmidt and moment-based ones, encounter stability challenges due to finite-precision arithmetic, where accumulated rounding errors lead to loss of orthogonality and ill-conditioned Hankel matrices at high degrees. Classical Gram-Schmidt exacerbates this through subtractive cancellation in projections, but the modified Gram-Schmidt variant improves robustness by orthogonalizing each new vector against the previous ones sequentially, reducing error propagation. Additional safeguards, like selective reorthogonalization or Stieltjes procedures for moment matching, help ensure backward stability, with error bounds scaling as O(n \epsilon \kappa), where n is the degree, \epsilon the machine precision, and \kappa the condition number of the moment matrix. To address high-degree computations in applications such as spectral methods, recent algorithms (post-2020) integrate the fast Fourier transform (FFT) for efficient evaluation and implicit construction of orthogonal polynomials up to degrees exceeding 10^4. These leverage asymptotic expansions or non-uniform FFTs to approximate the integrals defining inner products in O(n \log n) time, outperforming direct methods by orders of magnitude while maintaining accuracy; for example, FFT-based Clenshaw-Curtis quadrature enables stable generation via discretized moments for non-classical measures.
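A minimal numerical illustration of the generating-function relationship, assuming the Legendre case and using Bonnet's recurrence for evaluation (illustrative helper names):

```python
import math

def legendre_seq(n, x):
    # P_0 .. P_n via Bonnet's recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}.
    seq = [1.0, x]
    for k in range(1, n):
        seq.append(((2 * k + 1) * x * seq[k] - k * seq[k - 1]) / (k + 1))
    return seq[: n + 1]

def gf_partial(x, t, terms=60):
    # Truncation of G(x, t) = sum_n P_n(x) t^n; since |P_n(x)| <= 1 on [-1, 1],
    # the neglected tail is bounded by t^terms / (1 - t).
    return sum(p * t ** k for k, p in enumerate(legendre_seq(terms, x)))

x, t = 0.3, 0.2
closed_form = 1.0 / math.sqrt(1 - 2 * x * t + t * t)
assert abs(gf_partial(x, t) - closed_form) < 1e-12
```

The truncated series matches the closed form 1/\sqrt{1 - 2xt + t^2} to machine precision for |t| well inside the disk of convergence.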

Classical Examples

Continuous Orthogonal Polynomials

Continuous orthogonal polynomials form a class of orthogonal polynomials defined with respect to absolutely continuous measures on real intervals, bounded or unbounded, where the weight functions ensure the integrals converge. These polynomials satisfy the condition \int_a^b p_m(x) p_n(x) w(x) \, dx = h_n \delta_{mn}, with w(x) the weight function, [a, b] the interval of orthogonality, and h_n > 0 the normalization constant. The classical families—Legendre, Hermite, Laguerre, and Jacobi—emerge as solutions to Sturm-Liouville problems and are characterized by their explicit representations, including Rodrigues formulas. The Legendre polynomials P_n(x) are defined on the interval [-1, 1] with uniform weight function w(x) = 1. They admit the Rodrigues formula P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n. Their orthogonality relation is \int_{-1}^1 P_m(x) P_n(x) \, dx = \frac{2}{2n + 1} \delta_{mn}, where \delta_{mn} is the Kronecker delta. Legendre polynomials play a key role in applications such as the multipole expansion of potentials in electrostatics and gravitation. The Hermite polynomials H_n(x), referred to as the physicist's Hermite polynomials, are defined on (-\infty, \infty) with weight function w(x) = e^{-x^2}. The Rodrigues formula for them is H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}. A probabilist's variant, \mathit{He}_n(x) = 2^{-n/2} H_n(x / \sqrt{2}), uses the scaled weight e^{-x^2/2} and arises in probability theory for expansions related to the normal distribution. Both versions satisfy orthogonality on the real line, with the physicist's form obeying \int_{-\infty}^\infty H_m(x) H_n(x) e^{-x^2} \, dx = \sqrt{\pi} \, 2^n n! \, \delta_{mn}. The Laguerre polynomials L_n(x) are defined on [0, \infty) with weight function w(x) = e^{-x}. An explicit representation is L_n(x) = \sum_{k=0}^n (-1)^k \binom{n}{k} \frac{x^k}{k!}. Generalized Laguerre polynomials L_n^{(\alpha)}(x), for \alpha > -1, extend this family with weight w(x) = x^\alpha e^{-x} and appear in quantum mechanics in the radial wavefunctions of the hydrogen atom.
Their orthogonality relation is \int_0^\infty L_m^{(\alpha)}(x) L_n^{(\alpha)}(x) x^\alpha e^{-x} \, dx = \frac{\Gamma(n + \alpha + 1)}{n!} \delta_{mn}. The Jacobi polynomials P_n^{(\alpha, \beta)}(x) provide a broad generalization, defined on [-1, 1] with weight function w(x) = (1 - x)^\alpha (1 + x)^\beta for parameters \alpha, \beta > -1. They satisfy the Rodrigues formula P_n^{(\alpha, \beta)}(x) = \frac{(-1)^n}{2^n n!} (1 - x)^{-\alpha} (1 + x)^{-\beta} \frac{d^n}{dx^n} \left[ (1 - x)^{n + \alpha} (1 + x)^{n + \beta} \right]. Special cases include the ultraspherical (Gegenbauer) polynomials C_n^{(\lambda)}(x) when \alpha = \beta = \lambda - 1/2, which are symmetric and used in hyperspherical harmonics. The orthogonality integral is \int_{-1}^1 P_m^{(\alpha, \beta)}(x) P_n^{(\alpha, \beta)}(x) (1 - x)^\alpha (1 + x)^\beta \, dx = \frac{2^{\alpha + \beta + 1} \Gamma(n + \alpha + 1) \Gamma(n + \beta + 1)}{n! (2n + \alpha + \beta + 1) \Gamma(n + \alpha + \beta + 1)} \delta_{mn}. In developments post-2015, continuous orthogonal polynomials have been integrated into machine learning, particularly for designing temporal kernels in neural networks and orthogonal random features that improve kernel approximation.
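The Hermite orthogonality relation above can be verified exactly, since the Gaussian moments \int x^k e^{-x^2} dx are rational multiples of \sqrt{\pi}; a pure-Python sketch (illustrative helper names):

```python
from fractions import Fraction as F
import math

def hermite(n):
    # Physicist's Hermite coefficients via H_{k+1} = 2x H_k - 2k H_{k-1}.
    H = [[1], [0, 2]]
    for k in range(1, n):
        nxt = [0] + [2 * c for c in H[k]]       # 2x * H_k
        for i, c in enumerate(H[k - 1]):
            nxt[i] -= 2 * k * c                  # - 2k H_{k-1}
        H.append(nxt)
    return H[n]

def moment(k):
    # integral of x^k e^{-x^2} dx over R, divided by sqrt(pi):
    # (k-1)!! / 2^(k/2) for even k, and 0 for odd k.
    if k % 2:
        return F(0)
    num = 1
    for j in range(1, k, 2):
        num *= j
    return F(num, 2 ** (k // 2))

def inner(m, n):
    # integral of H_m H_n e^{-x^2} dx, divided by sqrt(pi), computed exactly.
    a, b = hermite(m), hermite(n)
    return sum(F(ai * bj) * moment(i + j)
               for i, ai in enumerate(a) for j, bj in enumerate(b))

# Orthogonality with norms sqrt(pi) 2^n n!, here normalized by sqrt(pi).
for m in range(5):
    for n in range(5):
        assert inner(m, n) == (F(2 ** n * math.factorial(n)) if m == n else 0)
```

Working with the ratio to \sqrt{\pi} keeps every quantity rational, so the check is exact.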

Discrete Orthogonal Polynomials

Discrete orthogonal polynomials are defined with respect to discrete measures supported on finite or countably infinite lattices, typically the non-negative integers or a finite grid \{0, 1, \dots, N\}, and satisfy orthogonality conditions via summation rather than integration. These polynomials arise in contexts such as combinatorial analysis, coding theory, and approximation on grids, where the weight functions often correspond to probability distributions like the binomial, Poisson, or negative binomial. The Hahn class encompasses key families including Hahn, dual Hahn, Charlier, Meixner, and Krawtchouk polynomials, classified within the Askey scheme of hypergeometric orthogonal polynomials. Hahn polynomials Q_n(x; \alpha, \beta, N), defined for x = 0, 1, \dots, N with parameters \alpha > -1, \beta > -1, and integer N \geq 0, are expressed as Q_n(x; \alpha, \beta, N) = {}_3F_2\left(-n, n + \alpha + \beta + 1, -x; \alpha + 1, -N; 1\right), and are orthogonal with respect to the hypergeometric weight w(x) = \binom{x + \alpha}{x} \binom{N - x + \beta}{N - x}. They serve as discrete analogs of the Jacobi polynomials and find applications in difference equations and combinatorics. Dual Hahn polynomials, obtained from the Hahn polynomials by interchanging the roles of the degree and the variable, are orthogonal on a finite quadratic lattice with weights built from the Hahn data, emphasizing their role in the spectral analysis of difference operators. Charlier polynomials C_n(x; a), for a > 0 and x = 0, 1, 2, \dots, are orthogonal with respect to the Poisson distribution weight w(x) = e^{-a} \frac{a^x}{x!}, and admit the explicit summation form C_n(x; a) = \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} \frac{(x)_k}{a^k}, where (x)_k = x(x-1)\cdots(x-k+1) denotes the falling factorial. This representation highlights their combinatorial structure, linking to expansions in Poisson processes and queueing theory.
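A small numerical check of Charlier orthogonality under the Poisson weight, using the explicit sum above and assuming an integer parameter a so the coefficients stay exact; the squared norms n!/a^n are a standard result for this family (illustrative helper names):

```python
import math
from fractions import Fraction as F

def falling(x, k):
    # Falling factorial (x)_k = x (x-1) ... (x-k+1).
    out = 1
    for i in range(k):
        out *= x - i
    return out

def charlier(n, x, a):
    # Explicit sum from the text: C_n(x; a) = sum_k C(n,k) (-1)^(n-k) (x)_k / a^k.
    return sum(F(math.comb(n, k) * (-1) ** (n - k) * falling(x, k), a ** k)
               for k in range(n + 1))

def poisson_inner(m, n, a=2, terms=80):
    # <C_m, C_n> under the Poisson weight e^{-a} a^x / x!, truncated;
    # the neglected tail is negligible over this range.
    total, w = 0.0, math.exp(-a)                 # weight at x = 0
    for x in range(terms):
        total += w * float(charlier(m, x, a)) * float(charlier(n, x, a))
        w *= a / (x + 1)                         # Poisson step w(x+1) = w(x) a/(x+1)
    return total

# Orthogonality, with squared norms h_n = n! / a^n.
assert abs(poisson_inner(1, 2)) < 1e-9
assert abs(poisson_inner(2, 2) - math.factorial(2) / 2 ** 2) < 1e-9
```

The norm value is unaffected by the overall sign convention chosen for C_n.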
Meixner polynomials M_n(x; \beta, c), with \beta > 0 and 0 < c < 1, are orthogonal on x = 0, 1, \dots, \infty against the negative binomial weight w(x) = \frac{(\beta)_x c^x}{x!}, where (\beta)_x is the Pochhammer symbol, and are given by M_n(x; \beta, c) = {}_2F_1(-n, -x; \beta; 1 - 1/c). They model waiting times in stochastic processes and appear in birth-death models. Krawtchouk polynomials K_n(x; p, N), for 0 < p < 1 and integer N \geq 0, are defined on x = 0, 1, \dots, N with the binomial weight w(x) = \binom{N}{x} p^x (1-p)^{N-x}, and take the form K_n(x; p, N) = {}_2F_1(-n, -x; -N; 1/p). Their orthogonality relation is \sum_{x=0}^N K_m(x; p, N) K_n(x; p, N) \binom{N}{x} p^x (1-p)^{N-x} = \delta_{mn} h_n, with h_n = \binom{N}{n}^{-1} \left(\frac{1-p}{p}\right)^n for this hypergeometric normalization; in the symmetric case p = 1/2, h_n = \binom{N}{n}^{-1}. These properties make them useful for expansions on the hypercube and in coding theory; in the binary case, they underlie Fourier analysis over \{0,1\}^N. q-Analogs of these polynomials, developed in the 1980s and 1990s, extend the Askey scheme to the q-Askey scheme, incorporating a base parameter 0 < q < 1 into the weights and lattices. The discrete q-polynomials in the q-Hahn class include the q-Hahn, dual q-Hahn, q-Meixner, q-Charlier, and q-Krawtchouk families, orthogonal on q-lattices with measures involving q-shifted factorials (q; q)_x. At the apex of this hierarchy lie the Askey–Wilson polynomials, a four-parameter family of basic hypergeometric orthogonal polynomials that unify the discrete and continuous q-analogs through limits and specializations, influencing quantum algebra and integrable systems. These discrete families relate to continuous orthogonal polynomials via limiting processes, such as the Charlier polynomials approaching the Hermite polynomials as the Poisson parameter diverges.
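Krawtchouk orthogonality involves only a finite sum, so it can be checked exactly; note that for the {}_2F_1 normalization the squared norms work out to \binom{N}{n}^{-1}((1-p)/p)^n, which the sketch below verifies in rational arithmetic (illustrative helper names):

```python
from fractions import Fraction as F
from math import comb

def krawtchouk(n, x, p, N):
    # K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), summed term by term exactly.
    total, term = F(0), F(1)
    for k in range(n + 1):
        total += term
        if k < n:
            # ratio of consecutive hypergeometric terms
            term *= F((-n + k) * (-x + k), (-N + k) * (k + 1)) / p
    return total

N, p = 5, F(1, 3)
weight = lambda x: comb(N, x) * p ** x * (1 - p) ** (N - x)   # binomial weight

def inner(m, n):
    return sum(weight(x) * krawtchouk(m, x, p, N) * krawtchouk(n, x, p, N)
               for x in range(N + 1))

# Squared norms ((1-p)/p)^n / C(N, n) for this normalization.
for m in range(N + 1):
    for n in range(N + 1):
        expected = ((1 - p) / p) ** n / comb(N, n) if m == n else F(0)
        assert inner(m, n) == expected
```

Because the lattice is finite and p is a Fraction, every quantity is an exact rational number.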

Core Properties

Moment Relations and Generating Functions

Orthogonal polynomials are intimately connected to the moments of the underlying orthogonality measure \mu, defined on an interval (a, b), through the sequence \mu_k = \int_a^b x^k \, d\mu(x) for k = 0, 1, 2, \dots. These moments encode the distribution of the measure and form the basis for constructing the polynomials via processes like Gram-Schmidt orthogonalization. The Hankel determinants, constructed from these moments, play a central role in determining the norms of the orthogonal polynomials. Specifically, the nth Hankel determinant is given by \Delta_n = \det\begin{pmatrix} \mu_0 & \mu_1 & \cdots & \mu_{n-1} \\ \mu_1 & \mu_2 & \cdots & \mu_n \\ \vdots & \vdots & \ddots & \vdots \\ \mu_{n-1} & \mu_n & \cdots & \mu_{2n-2} \end{pmatrix}, with \Delta_0 = 1. For monic orthogonal polynomials p_n(x) of degree n, the squared norm h_n = \int_a^b [p_n(x)]^2 \, d\mu(x) satisfies h_n = \Delta_{n+1} / \Delta_n. This relation ensures the positivity of the determinants for positive measures and provides a recursive way to compute the scaling factors in the polynomial sequence. Powers of x can be expanded in the basis of orthogonal polynomials as x^n = \sum_{k=0}^n c_{n k} P_k(x), where the coefficients c_{n k} are determined by the moments via the orthogonality relations: c_{n k} = \frac{1}{h_k} \int_a^b x^n P_k(x) \, d\mu(x). These coefficients facilitate the inversion of the moment problem and are crucial in approximation-theoretic applications, as they allow expressing monomials in terms of the orthogonal basis efficiently. Generating functions provide compact representations for sequences of orthogonal polynomials, enabling derivations of recurrence relations and asymptotic behaviors. For the Legendre polynomials P_n(x) orthogonal on [-1, 1] with respect to the Lebesgue measure, the ordinary generating function is G(x, t) = \sum_{n=0}^\infty P_n(x) t^n = \frac{1}{\sqrt{1 - 2 x t + t^2}}, \quad |t| < 1.
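The Hankel-determinant characterization of the norms can be tested directly for the Legendre measure. A pure-Python sketch, taking \Delta_n as the n \times n determinant with \Delta_0 = 1 so that the monic norms appear as consecutive ratios (illustrative helper names):

```python
from fractions import Fraction as F

mu = lambda k: F(0) if k % 2 else F(2, k + 1)   # moments of dx on [-1, 1]

def hankel_det(n):
    # Delta_n = det (mu_{i+j}) for 0 <= i, j <= n-1, with Delta_0 = 1.
    if n == 0:
        return F(1)
    M = [[mu(i + j) for j in range(n)] for i in range(n)]
    det = F(1)
    for k in range(n):
        # No pivoting needed: moment matrices of positive measures
        # are positive definite, so each pivot is nonzero.
        det *= M[k][k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= r * M[k][j]
    return det

# Consecutive ratios Delta_{n+1} / Delta_n reproduce the monic Legendre
# squared norms h_0 = 2, h_1 = 2/3, h_2 = 8/45.
assert hankel_det(1) / hankel_det(0) == F(2)
assert hankel_det(2) / hankel_det(1) == F(2, 3)
assert hankel_det(3) / hankel_det(2) == F(8, 45)
```

For instance, h_2 = 8/45 agrees with the direct integral \int_{-1}^1 (x^2 - 1/3)^2 dx.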
In contrast, the Hermite polynomials H_n(x) on (-\infty, \infty) with Gaussian weight use an exponential generating function: \sum_{n=0}^\infty \frac{H_n(x)}{n!} t^n = e^{2 x t - t^2}. These functions are derived from the Rodrigues formulas or differential equations satisfied by the polynomials and are instrumental in probabilistic interpretations. Asymptotic relations for large n often involve the weight's properties, as captured by Szegő's theory. For orthogonal polynomials on the unit circle with weight w(e^{i\theta}), Szegő's theorem provides the strong asymptotics of the normalized polynomials \phi_n(z) \sim z^n / D(z) for |z| > 1, where D(z) is the Szegő function, an outer function constructed from \log w. More generally, for measures on the real line, Szegő-type results yield asymptotics for the Hankel determinants, \lim_{n \to \infty} D_n / D_{n-1} = [\Theta(f)]^{1/2}, where \Theta(f) is a functional of the weight f, describing the large-n growth of the norms and leading coefficients. These asymptotics are essential for understanding the distribution of zeros and convergence in L^2 spaces. Christoffel transformations modify the orthogonality measure by multiplying it with the square of a polynomial, thereby altering the moments to generate a new sequence of orthogonal polynomials. Formally, given a measure d\mu(x), the transformed measure is d\mu^{(n)}(x) = P_n(x)^2 \, d\mu(x) / h_n, where h_n = \int P_n^2 \, d\mu; this shifts the moments to \mu_k^{(n)} = \int x^k P_n(x)^2 \, d\mu(x) / h_n. Originally introduced by Christoffel in 1858 in the setting of Gaussian quadrature, these transformations were extended in the 1990s to matrix-valued and multiple orthogonal polynomials, enabling connections to integrable systems and random matrix theory by systematically altering moment sequences for semiclassical weights.

Recurrence Relations

Orthogonal polynomials satisfy a fundamental three-term recurrence relation that allows sequential computation and reveals deep structural properties. For a sequence of orthogonal polynomials \{P_n(x)\}_{n=0}^\infty with respect to an inner product \langle \cdot, \cdot \rangle, where each P_n has leading coefficient k_n > 0 and squared norm h_n = \langle P_n, P_n \rangle, the relation takes the general form P_{n+1}(x) = (A_n x + B_n) P_n(x) - C_n P_{n-1}(x), where n \geq 1, P_{-1}(x) \equiv 0, P_0(x) \equiv 1, and the coefficients are A_n = k_{n+1}/k_n, B_n = -A_n \frac{\langle x P_n, P_n \rangle}{h_n}, and C_n = \frac{k_{n+1} k_{n-1}}{k_n^2} \cdot \frac{h_n}{h_{n-1}}. This form arises from expanding x P_n(x) in the orthogonal basis and using the orthogonality condition to show that only the terms P_{n+1}, P_n, and P_{n-1} have nonzero projections. In the monic case, where each P_n(x) has leading coefficient 1, the recurrence simplifies to p_{n+1}(x) = (x - \alpha_n) p_n(x) - \beta_n p_{n-1}(x), with \alpha_n = \frac{\langle x p_n, p_n \rangle}{\|p_n\|^2} and \beta_n = \frac{\|p_n\|^2}{\|p_{n-1}\|^2}, the entries of the associated Jacobi matrix. These coefficients ensure the relation holds for any orthogonal sequence, and the positivity \beta_n > 0 guarantees the existence of a positive definite inner product. Favard's theorem provides a converse characterization: any sequence of polynomials satisfying a three-term recurrence of the form p_{n+1}(x) = (x - \alpha_n) p_n(x) - \beta_n p_{n-1}(x) with \beta_n > 0 for all n is orthogonal with respect to some positive measure on the real line. The proof relies on constructing the inner product via the moments implied by the recurrence coefficients, ensuring the sequence satisfies the orthogonality conditions. For classical orthogonal polynomials, the coefficients take explicit forms.
For example, the Hermite polynomials \{H_n(x)\} satisfy H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x), \quad n \geq 1, with H_0(x) = 1 and H_1(x) = 2x, orthogonal with respect to the Gaussian weight e^{-x^2} on (-\infty, \infty). The recurrence relations offer superior numerical stability for evaluating orthogonal polynomials at high degrees compared to explicit formulas, which often suffer from subtractive cancellation and rounding errors. This stability facilitates efficient computation, as seen in adaptations of the Lanczos algorithm from the mid-20th century, which generates orthogonal polynomials via the recurrence for eigenvalue problems and has been refined in the 2010s with predictor-corrector schemes for broader measures.
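A sketch of recurrence-based evaluation in the monic case, assuming the known Legendre coefficients \alpha_k = 0 and \beta_k = k^2/(4k^2 - 1) (illustrative helper names):

```python
from fractions import Fraction as F

def monic_legendre(n, x):
    # Evaluate the monic Legendre polynomial p_n at x via the three-term
    # recurrence p_{k+1} = (x - alpha_k) p_k - beta_k p_{k-1}; for the
    # Legendre measure on [-1, 1], alpha_k = 0 and beta_k = k^2 / (4k^2 - 1).
    if n == 0:
        return F(1)
    p_prev, p = F(1), x
    for k in range(1, n):
        beta_k = F(k * k, 4 * k * k - 1)
        p_prev, p = p, x * p - beta_k * p_prev
    return p

# p_2 = x^2 - 1/3 and p_3 = x^3 - (3/5) x, so at x = 1/2:
assert monic_legendre(2, F(1, 2)) == F(-1, 12)
assert monic_legendre(3, F(1, 2)) == F(-7, 40)
```

Only the two most recent values are retained, which is exactly why the recurrence scales to high degrees where explicit formulas suffer from cancellation.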

Christoffel–Darboux Formula and Kernels

The Christoffel–Darboux formula expresses the finite sum of products of orthogonal polynomials in a compact form that reveals connections to recurrence relations and reproducing kernels. For a sequence of monic orthogonal polynomials \{P_k(x)\}_{k=0}^\infty with respect to a positive measure \mu on the real line, where h_k = \int P_k(x)^2 \, d\mu(x) denotes the squared norms, the formula states that \sum_{k=0}^n \frac{P_k(x) P_k(y)}{h_k} = \frac{P_{n+1}(x) P_n(y) - P_n(x) P_{n+1}(y)}{h_n (x - y)} for x \neq y. This identity, studied by Christoffel and by Darboux in connection with continued fractions and orthogonal expansions, facilitates efficient computation of kernel sums and highlights the role of consecutive polynomials in the expansion. Taking the limit as y \to x, the formula yields an expression for the sum of squared basis functions: \sum_{k=0}^n \frac{P_k(x)^2}{h_k} = \frac{P_{n+1}'(x) P_n(x) - P_n'(x) P_{n+1}(x)}{h_n}, which quantifies the density of the projection onto polynomials of degree at most n at the point x. This limiting case is particularly useful for analyzing the growth of the kernel on the diagonal and for deriving inequalities in approximation theory. The sum on the left-hand side defines the reproducing kernel for the space of polynomials of degree at most n: K_n(x,y) = \sum_{k=0}^n \frac{P_k(x) P_k(y)}{h_k}. Substituting the Christoffel–Darboux identity gives the closed form K_n(x,y) = \frac{P_{n+1}(x) P_n(y) - P_n(x) P_{n+1}(y)}{h_n (x - y)}, which emphasizes the kernel's dependence on the highest-degree terms. In L^2(\mu), this kernel satisfies the reproducing property: for any f \in L^2(\mu), \langle f, K_n(\cdot, y) \rangle_{L^2(\mu)} = \sum_{k=0}^n \langle f, P_k / \sqrt{h_k} \rangle_{L^2(\mu)} \cdot \frac{P_k(y)}{\sqrt{h_k}} = \operatorname{proj}_n f(y), where \operatorname{proj}_n f is the orthogonal projection of f onto the span of \{P_0, \dots, P_n\}. This property ensures that K_n(x,y) acts as the integral kernel of the projection operator, enabling explicit representations of best approximations in the L^2 norm.
In approximation theory, the Christoffel–Darboux kernel underpins least-squares fitting by providing the minimizer \sum_{k=0}^n c_k P_k(x) with coefficients c_k = \langle f, P_k \rangle / h_k recovered via the reproducing property, yielding error identities like \|f - \operatorname{proj}_n f\|^2 = \|f\|^2 - \sum_{k=0}^n \frac{\langle f, P_k \rangle^2}{h_k}. For interpolation, the kernel facilitates barycentric forms of orthogonal polynomial interpolants, improving stability over direct Lagrange bases at high degrees, as the weights involve evaluations of K_n at the nodes. The formula can be derived from the three-term recurrence satisfied by the orthogonal polynomials, using induction on n and telescoping the differences of consecutive terms. For large n, asymptotics of the normalized kernel K_n(x,y) / \sqrt{K_n(x,x) K_n(y,y)} exhibit universal behaviors, particularly in the bulk of the support of \mu, where it converges after appropriate rescaling to the sine kernel \sin(\pi (x-y)) / (\pi (x-y)); this scaling is linked to determinantal point processes in random matrix theory, where the kernel governs eigenvalue correlations for ensembles like the Gaussian unitary ensemble. In machine learning, the Christoffel–Darboux kernel has informed kernel methods for Gaussian processes by enabling orthogonal expansions of functions in finite-dimensional subspaces, facilitating scalable approximations in high-dimensional data.
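Both sides of the Christoffel–Darboux identity can be compared in exact arithmetic for the monic Legendre polynomials, using h_k = \beta_k h_{k-1} with h_0 = 2 (a pure-Python sketch with illustrative names):

```python
from fractions import Fraction as F

def monic_legendre_seq(n, x):
    # p_0 .. p_n at x via the monic recurrence (alpha_k = 0, beta_k = k^2/(4k^2-1)).
    seq = [F(1)]
    if n >= 1:
        seq.append(x)
    for k in range(1, n):
        beta_k = F(k * k, 4 * k * k - 1)
        seq.append(x * seq[k] - beta_k * seq[k - 1])
    return seq

def norms(n):
    # Squared norms satisfy h_k = beta_k h_{k-1}, with h_0 = mu_0 = 2.
    h = [F(2)]
    for k in range(1, n + 1):
        h.append(F(k * k, 4 * k * k - 1) * h[-1])
    return h

def cd_both_sides(n, x, y):
    # Left side: the kernel sum; right side: the Christoffel-Darboux closed form.
    px = monic_legendre_seq(n + 1, x)
    py = monic_legendre_seq(n + 1, y)
    h = norms(n + 1)
    lhs = sum(px[k] * py[k] / h[k] for k in range(n + 1))
    rhs = (px[n + 1] * py[n] - px[n] * py[n + 1]) / (h[n] * (x - y))
    return lhs, rhs

lhs, rhs = cd_both_sides(4, F(1, 3), F(-2, 5))
assert lhs == rhs
```

The closed form evaluates the degree-n kernel from just two consecutive polynomials, instead of summing n + 1 products.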

Zeros, Interlacing, and Quadrature

A fundamental property of orthogonal polynomials \{P_n\}_{n=0}^\infty associated with a positive measure \mu supported on an interval (a, b) \subset \mathbb{R} is that each P_n has exactly n distinct real zeros, all lying in (a, b). This result on the location of zeros follows from the fact that the zeros are the eigenvalues of the Jacobi matrix associated with the three-term recurrence, which is tridiagonal with positive off-diagonal entries for positive measures. The zeros are simple: a double zero at some x_0 would make q(x) = P_n(x)/(x - x_0)^2 a polynomial of degree n - 2, yet \langle P_n, q \rangle = \int [P_n(x)/(x - x_0)]^2 \, d\mu(x) > 0, contradicting orthogonality to all polynomials of lower degree. The zeros of consecutive orthogonal polynomials exhibit an interlacing property: the n zeros of P_n, denoted x_{n,1} < x_{n,2} < \cdots < x_{n,n}, satisfy a < x_{n,1} < x_{n-1,1} < x_{n,2} < x_{n-1,2} < \cdots < x_{n,n-1} < x_{n-1,n-1} < x_{n,n} < b, with exactly one zero of P_{n-1} between consecutive zeros of P_n. This interlacing follows from the three-term recurrence relation and the positivity of the recurrence coefficients, ensuring no common zeros and the Sturm separation property. Oscillation theorems, rooted in Sturm's comparison theory, further characterize this behavior: the sequence \{P_n\} forms a Sturm sequence, where the number of sign changes in the sequence P_0(x), P_1(x), \dots, P_n(x) at a fixed x counts the zeros of P_n to the right of x, providing a variational characterization of the zero locations. These zero properties underpin Gaussian quadrature rules for numerical integration with respect to \mu.
The n-point Gauss quadrature formula approximates \int_a^b f(x) \, d\mu(x) \approx \sum_{i=1}^n w_i f(x_{n,i}), where the nodes x_{n,i} are the zeros of P_n, and the positive weights w_i are given by w_i = \left( \sum_{k=0}^{n-1} p_k(x_{n,i})^2 \right)^{-1} for the orthonormal system \{p_k\} with p_k = P_k / h_k^{1/2}. This formula is exact for all polynomials f of degree at most 2n-1: writing f = q P_n + r with \deg q, \deg r \leq n-1, the term q P_n vanishes at the nodes and integrates to zero by orthogonality, while the quadrature reproduces \int r \, d\mu exactly. The weights equal the Christoffel function evaluated at the nodes, w_i = 1/K_{n-1}(x_{n,i}, x_{n,i}), connecting them to the reproducing kernel. Asymptotically, as the degree n \to \infty, the empirical measure of the zeros of P_n, \frac{1}{n} \sum_{i=1}^n \delta_{x_{n,i}}, converges weakly to the equilibrium measure \mu_{eq} on the support of \mu, which minimizes the logarithmic energy functional \iint \log \frac{1}{|x-y|} \, d\nu(x) d\nu(y) among probability measures on (a,b). This distribution arises from logarithmic potential theory and governs the large-scale spacing of zeros; on [-1, 1], for instance, the equilibrium measure is the arcsine distribution with density \rho(x) = \frac{1}{\pi \sqrt{1 - x^2}}. For varying measures \mu_n depending on n, such as the scaled Gaussian weights of random matrix theory, the zero distribution follows laws like the Wigner semicircle law, reflecting the spectral density of the underlying operator. In the context of random orthogonal polynomials, where the measure or coefficients are randomized, recent results from the 2010s and 2020s show that local zero spacings exhibit universal repulsion and distribution statistics akin to those in random matrix theory. Specifically, the scaled nearest-neighbor spacings follow random-matrix spacing laws, with connections to Dyson Brownian motion describing the diffusive evolution of zeros under logarithmic repulsion and a confining potential.
These behaviors, established almost surely or in probability, highlight rigidity and universality in zero configurations for perturbed or stochastic orthogonal systems.
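The construction of a Gauss rule from the zeros can be sketched numerically for P_3: bracket the zeros (interlacing guarantees the brackets), refine by bisection, and form the weights from the orthonormal Legendre values (illustrative names; floating-point throughout):

```python
def p3(x):
    return (5 * x ** 3 - 3 * x) / 2            # Legendre P_3

def bisect(f, lo, hi, iters=100):
    # Simple bisection on a bracketed sign change.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# The three zeros of P_3: 0 by symmetry, the others bracketed via interlacing
# with the zeros of P_2 at +-1/sqrt(3).
nodes = [bisect(p3, -1.0, -0.5), 0.0, bisect(p3, 0.5, 1.0)]

def weight(x):
    # w_i = 1 / sum_k p_k(x_i)^2 over orthonormal p_k = sqrt((2k+1)/2) P_k.
    P = [1.0, x, (3 * x ** 2 - 1) / 2]
    return 1.0 / sum((2 * k + 1) / 2 * P[k] ** 2 for k in range(3))

weights = [weight(x) for x in nodes]
quad = lambda f: sum(w * f(x) for w, x in zip(weights, nodes))

# Exact for all polynomials of degree <= 2n - 1 = 5.
assert abs(quad(lambda x: x ** 4) - 2 / 5) < 1e-12
assert abs(quad(lambda x: x ** 5)) < 1e-12
```

The recovered rule matches the classical 3-point Gauss–Legendre nodes \pm\sqrt{3/5}, 0 with weights 5/9, 8/9, 5/9.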

Combinatorial Interpretations

Orthogonal polynomials admit various combinatorial interpretations that connect their coefficients and generating functions to counting problems in lattice paths, tilings, permutations, and design structures. These interpretations often arise through determinants or expansions that enumerate non-intersecting configurations or weighted objects, providing bijective proofs for identities in the polynomials. The Askey-Wilson polynomials serve as q-analogs of classical orthogonal polynomials, expressed through basic hypergeometric series of the form {}_4\phi_3, which encode combinatorial sums over paths or partitions in q-deformed settings. Their connections to quantum groups, such as the quantum SU(2), interpret the polynomials as generalized matrix elements of irreducible representations, linking to combinatorial models in representation theory and quantum algebras. These q-hypergeometric structures facilitate interpretations in terms of weighted lattice paths and basic hypergeometric identities, extending classical counting to deformed weights. The Karlin-McGregor formula expresses the probability of non-intersecting paths for random walks or Brownian motions as a determinant of transition kernels, which for birth-death processes are expressible through orthogonal polynomials, providing a combinatorial count of such configurations. In discrete settings, such determinants enumerate tilings, including the Aztec diamond domino tilings counted by 2^{n(n+1)/2} and the abc-hexagon rhombus tilings counted by MacMahon's box formula, both obtainable through non-intersecting lattice path arguments. These links extend to matchings in dimer models on lattices, where the polynomials' zeros relate to path non-intersections in asymptotic limits. Hahn polynomials appear in association schemes as the eigenvalues for Q-polynomial structures, where their coefficients correspond to intersection numbers p_{ij}^k counting common neighbors between vertices at distances i and j.
In design theory, these polynomials model t-designs on finite geometries, with intersection numbers quantifying block overlaps in balanced incomplete block designs derived from symmetric schemes. This combinatorial framework interprets the polynomials' expansions as enumerators of orbital counts in group actions on sets. Generating functions for rook polynomials, which count non-attacking rook placements on boards, relate directly to orthogonal polynomials like Laguerre and Charlier through umbral calculus, where a linear functional \Phi maps products of these polynomials to permutation enumerations with cycle or color restrictions. For instance, the simple Laguerre polynomials yield counts of permutations of multiset objects, while generalized versions incorporate weights for derangements or fixed-point-free matchings via cycle indices. These expansions provide bijective interpretations for linearization coefficients in orthogonal polynomial products. Charlier polynomials, as discrete analogs, have combinatorial interpretations via weighted partial permutations and set partitions, where their explicit sum C_n(x; a) = \sum_k (-1)^{n-k} a^{n-k} S(n,k) x^k, with S(n,k) a Stirling number, counts involutions or restricted growth functions with inversion statistics. In birth-death processes, they model Poisson-driven queueing systems, with moments as generating functions for partitions into blocks weighted by queue lengths or arrival counts. The q-Charlier extension incorporates q-inversions for path counting in deformed queueing combinatorics. Modern developments from the 1990s to 2020s link orthogonal polynomials to integrable systems via the Bethe ansatz, with combinatorial interpretations through rigged configurations in bijection with highest-weight lattice paths in affine Lie algebras. 
The combinatorial Bethe ansatz, as in Kerov-Kirillov-Reshetikhin bijections, counts fermionic formulas akin to Kostka numbers, connecting to ultradiscrete systems like the box-ball model for soliton dynamics and non-intersecting path evolutions. These ties extend orthogonal polynomial identities to enumeration in vertex models and tropical geometries.
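The determinantal counting behind the Karlin-McGregor and Lindström-Gessel-Viennot formulas can be sketched in a few lines. In this minimal illustration the function names (`lgv_count`, `paths`, `det`) and the tiny two-path example are our own choices, and a brute-force enumeration is run alongside the determinant as a check:

```python
import itertools
from math import comb

def det(M):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def lgv_count(starts, ends, height):
    """Count non-intersecting families of north/east lattice paths from
    (starts[i], 0) to (ends[i], height) via the Lindstrom-Gessel-Viennot
    determinant of binomial coefficients."""
    n = len(starts)
    M = [[comb(height + ends[j] - starts[i], ends[j] - starts[i])
          if ends[j] >= starts[i] else 0
          for j in range(n)] for i in range(n)]
    return det(M)

def paths(a, b, height):
    """All monotone paths from (a, 0) to (b, height) as vertex sets."""
    steps = ['E'] * (b - a) + ['N'] * height
    out = []
    for perm in set(itertools.permutations(steps)):
        x, y, verts = a, 0, [(a, 0)]
        for s in perm:
            x, y = (x + 1, y) if s == 'E' else (x, y + 1)
            verts.append((x, y))
        out.append(frozenset(verts))
    return out

# Brute-force check for two paths: (0,0)->(1,2) and (1,0)->(2,2).
brute = sum(1 for p in paths(0, 1, 2) for q in paths(1, 2, 2) if not (p & q))
lgv = lgv_count([0, 1], [1, 2], 2)
```

Both counts agree (there are exactly three vertex-disjoint pairs in this toy configuration), illustrating how the determinant replaces explicit enumeration.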

Applications

Approximation and Interpolation Theory

Orthogonal polynomials provide a fundamental framework for expanding functions in series analogous to Fourier series, but adapted to general measures on the real line or intervals. For a function f \in L^2(\mu), where \mu is a positive measure with finite moments, the orthogonal polynomial expansion takes the form f(x) = \sum_{n=0}^\infty \frac{\langle f, P_n \rangle}{h_n} P_n(x), with coefficients \langle f, P_n \rangle = \int f(x) P_n(x) \, d\mu(x) and norms h_n = \langle P_n, P_n \rangle. This series converges to f in the L^2(\mu) norm, as the polynomials form a complete orthogonal basis in L^2(\mu). In uniform approximation on compact intervals, Chebyshev polynomials of the first kind, T_n(x), yield the monic polynomial of degree n that deviates least from zero in the maximum norm. Specifically, the scaled Chebyshev polynomial \frac{1}{2^{n-1}} T_n(x) minimizes \|p\|_\infty over all monic polynomials p of degree n on [-1,1]. This minimax property follows from the equioscillation theorem, which states that the error f - p^* for the best uniform approximation p^* of degree at most n attains its maximum magnitude at least n+2 times with alternating signs. Interpolation using orthogonal polynomials often employs the zeros of the polynomial P_{n+1} as nodes, forming a Gaussian quadrature-related scheme. The Lagrange basis polynomials at these zeros can be expressed via the Christoffel-Darboux kernel K_n(x,y) = \sum_{k=0}^n \frac{P_k(x) P_k(y)}{h_k}, yielding the interpolant L_n f(x) = \sum_y f(y) \frac{K_n(x,y)}{K_n(y,y)}, where the sum is over the nodes y. Error estimates for this interpolation satisfy \|f - L_n f\|_{L^2(\mu)} \leq C \|f^{(n+1)}\|_{L^2(\mu)} / (n+1)!, with constants depending on the measure, improving over equidistant nodes by avoiding the Runge phenomenon. The Jackson-Bernstein theorems quantify approximation rates by polynomials of degree n. 
Jackson's theorem bounds the best uniform approximation error E_n(f) by E_n(f) \leq C \omega_f(1/n), where \omega_f(\delta) is the modulus of continuity of f on [-1,1], with C independent of f and n. The converse Bernstein theorem states that if E_n(f) = O(n^{-k-\alpha}) for some 0 < \alpha < 1, then f^{(k)} satisfies a Lipschitz condition of order \alpha. These results extend to weighted settings, providing rates tied to smoothness in L^2(\mu). Christoffel functions address non-uniform weights in approximation, defined as \lambda_n(x) = \left( \sum_{k=0}^n \frac{[P_k(x)]^2}{h_k} \right)^{-1}, representing the local density of the measure \mu at x. These functions minimize \int |p(y)|^2 \, d\mu(y) over polynomials p of degree at most n with p(x) = 1, and their asymptotics reveal the equilibrium measure in potential theory for large n. In the 2010s, orthogonal polynomial expansions gained traction in spectral methods for solving partial differential equations, where expansions in Legendre or Chebyshev bases enable high-order accurate approximations with exponential convergence for smooth functions. Similarly, in neural networks, orthogonal polynomial expansions augment random vector functional link networks, improving generalization by enforcing orthogonality in feature representations and reducing overfitting in high-dimensional approximations.
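The L^2(\mu) expansion described above can be sketched numerically for the Legendre case. In this sketch the target function f = exp and the truncation degree are illustrative choices; the coefficients (2n+1)/2 \int f P_n are computed with Gauss-Legendre quadrature via NumPy's `leggauss`:

```python
import numpy as np

# Fourier-Legendre expansion of f(x) = exp(x) on [-1, 1]:
# c_n = (2n+1)/2 * integral of f * P_n, since the Legendre norm is 2/(2n+1).
f = np.exp
deg = 12                                   # illustrative truncation degree
nodes, weights = np.polynomial.legendre.leggauss(40)

coeffs = np.zeros(deg + 1)
for n in range(deg + 1):
    e_n = np.zeros(n + 1); e_n[n] = 1.0    # coefficient vector selecting P_n
    Pn = np.polynomial.legendre.legval(nodes, e_n)
    coeffs[n] = (2 * n + 1) / 2 * np.sum(weights * f(nodes) * Pn)

# For smooth f the truncated series error is tiny even at modest degree.
x = np.linspace(-1, 1, 201)
err = np.max(np.abs(f(x) - np.polynomial.legendre.legval(x, coeffs)))
```

The rapid decay of `err` with `deg` is the spectral convergence mentioned in the text for analytic functions.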

Physics and Quantum Mechanics

Orthogonal polynomials play a central role in quantum mechanics, particularly in solving the Schrödinger equation for systems with specific potentials, where they appear as components of the eigenfunctions. These polynomials facilitate exact solutions for bound states in one-dimensional and central potentials, enabling the computation of energy levels and wave functions. In addition, their zeros and asymptotic properties model complex phenomena such as eigenvalue distributions in quantum systems. In the quantum harmonic oscillator, the eigenfunctions are expressed using Hermite polynomials. The wave functions take the form \psi_n(x) \propto H_n(\sqrt{m\omega/\hbar} x) e^{-m\omega x^2 / 2\hbar}, where H_n denotes the nth Hermite polynomial, and the corresponding energy eigenvalues are E_n = \hbar \omega (n + 1/2) for n = 0, 1, 2, \dots. This structure arises from solving the time-independent Schrödinger equation for the quadratic potential V(x) = \frac{1}{2} m \omega^2 x^2, with the Hermite polynomials ensuring orthogonality under the weight e^{-x^2}. The recurrence relations of these polynomials also connect to ladder operators that raise or lower the quantum number n by unity. For the hydrogen atom, Laguerre polynomials determine the radial part of the wave functions in spherical coordinates. The radial function is R_{nl}(r) \propto \left( \frac{2r}{n a_0} \right)^l L_{n-l-1}^{2l+1} \left( \frac{2r}{n a_0} \right) e^{-r / n a_0}, where L_k^\alpha is the associated Laguerre polynomial, n is the principal quantum number, l is the orbital quantum number, and a_0 is the Bohr radius. This form satisfies the radial Schrödinger equation for the Coulomb potential V(r) = -e^2 / r, yielding discrete energy levels E_n = -13.6 \, \text{eV} / n^2. The orthogonality of the Laguerre polynomials with respect to the weight x^\alpha e^{-x} ensures the overall wave functions are orthogonal. Associated Legendre functions, a class of orthogonal polynomials, underpin the angular dependence in quantum systems with rotational symmetry, such as angular momentum operators. 
In spherical harmonics Y_l^m(\theta, \phi) \propto P_l^m(\cos \theta) e^{i m \phi}, the P_l^m are associated Legendre functions that serve as eigenfunctions of the L^2 operator with eigenvalues \hbar^2 l(l+1) and L_z with eigenvalues \hbar m, where l = 0, 1, \dots and m = -l, \dots, l. These harmonics diagonalize the angular part of the Schrödinger equation for central potentials, including the hydrogen atom, and their orthogonality over the sphere integrates to \delta_{ll'} \delta_{mm'}. In random matrix theory, the zeros of orthogonal polynomials model the distribution of eigenvalues in quantum chaotic systems, particularly for Gaussian ensembles introduced by Dyson. For the Gaussian Orthogonal Ensemble (GOE), the joint probability density of eigenvalues is proportional to \prod_{i<j} | \lambda_i - \lambda_j | \exp( - \sum_i \lambda_i^2 / 2 ), where the Vandermonde determinant relates to the discriminant of Hermite polynomials, and the average density of states corresponds to the equilibrium measure supported on the semicircle. This framework, originating from Dyson's classification of symmetry classes in quantum mechanics, links spectral statistics to universal behaviors in disordered systems like heavy nuclei. For time-dependent problems, generating functions of orthogonal polynomials construct coherent states, which minimize uncertainty and follow classical trajectories. In the harmonic oscillator, the coherent state |\alpha\rangle = e^{-|\alpha|^2/2} \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}} |n\rangle expands using the generating function \sum_{n=0}^\infty \frac{H_n(x)}{n!} t^n = e^{2xt - t^2} for Hermite polynomials, enabling displacement operator representations D(\alpha) = e^{\alpha a^\dagger - \alpha^* a}. These states describe laser light in quantum optics and evolve under the time-dependent Schrödinger equation without spreading. Recent applications extend to quantum computing, where orthogonal polynomials enhance variational quantum algorithms for solving eigenvalue problems. 
In the orthogonal-polynomial-based quantum reduced-order model (PolyQROM), basis transformations using polynomials like Legendre or Chebyshev improve ansatz expressivity in variational quantum eigensolvers, reducing circuit depth and mitigating noise in near-term devices for molecular simulations. This approach leverages polynomial orthogonality to project high-dimensional Hamiltonians onto lower-dimensional subspaces, achieving better convergence for ground-state energies.
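The orthonormality of the oscillator eigenfunctions discussed above is easy to verify numerically. This small sketch uses units m = \omega = \hbar = 1 (our choice) and folds the Gaussian factor of \psi_n into Gauss-Hermite quadrature, whose weight is exactly e^{-x^2}:

```python
import numpy as np
from math import factorial, pi, sqrt

# psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi)) in units m = omega = hbar = 1.
# The product psi_m * psi_n carries e^{-x^2}, the Gauss-Hermite weight, so the
# quadrature sum below is exact for the polynomial part (degree <= 2*30 - 1).
nodes, weights = np.polynomial.hermite.hermgauss(30)

def hermite(n, x):
    e = np.zeros(n + 1); e[n] = 1.0
    return np.polynomial.hermite.hermval(x, e)   # physicists' H_n

N = 6
gram = np.empty((N, N))
for m in range(N):
    for n in range(N):
        norm = sqrt(2.0**m * factorial(m) * 2.0**n * factorial(n)) * sqrt(pi)
        gram[m, n] = np.sum(weights * hermite(m, nodes) * hermite(n, nodes)) / norm

off = np.max(np.abs(gram - np.eye(N)))           # deviation from orthonormality
```

The Gram matrix reproduces the identity to machine precision, reflecting the orthogonality of H_n under e^{-x^2} stated in the text.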

Statistics and Probability

Orthogonal polynomials play a central role in statistics and probability by providing bases for expanding probability densities, moments, and stochastic processes in a manner that leverages their orthogonality properties to simplify computations and approximations. In Edgeworth expansions, Hermite polynomials are used to orthogonalize corrections to the standard normal density based on higher-order cumulants, enabling asymptotic approximations of distributions close to Gaussian. To leading order the expansion takes the Gram-Charlier form p(x) \approx \phi(x) \left[ 1 + \sum_{k=3}^{m} \frac{\kappa_k}{k!} \mathrm{He}_k(x) \right], where \phi(x) is the standard normal density, \mathrm{He}_k(x) are the probabilists' Hermite polynomials, orthogonal with respect to \phi(x), and \kappa_j are cumulants beyond the first two. This series refines the central limit approximation by incorporating skewness and kurtosis effects through the orthogonal structure, which ensures uncorrelated terms in the expansion. Charlier polynomials, orthogonal with respect to the Poisson measure, facilitate the analysis of point processes, particularly in computing moments and statistics for Poisson approximations. Their orthogonality allows for Parseval-type identities that bound distances like total variation between Poisson-binomial and Poisson distributions, with applications to thinned sums in point processes where moments are expressed via generating functions involving Charlier coefficients. For instance, the expected value E[C_j(\lambda, X)] for a random variable X under Poisson intensity \lambda simplifies moment calculations in nonstationary processes. Orthogonal polynomial chaos expansions (PCE) are widely employed for uncertainty quantification in stochastic partial differential equations (SPDEs), representing solutions as series in orthogonal polynomials adapted to the input probability measure, such as Wiener-Askey chaos. 
The solution u(\mathbf{x}, \boldsymbol{\xi}) is expanded as u(\mathbf{x}, \boldsymbol{\xi}) = \sum_{k=0}^{\infty} u_k(\mathbf{x}) \Psi_k(\boldsymbol{\xi}), where \{\Psi_k\} are multivariate orthogonal polynomials and \boldsymbol{\xi} are random inputs; this reduces the SPDE to a system of deterministic PDEs for coefficients u_k, enabling efficient propagation of uncertainties in parameters like forcing terms. Dynamical variants further handle long-time evolution by limiting expansion dimensions, with proven convergence for Markovian noise. Mehler-Heine asymptotics describe the local behavior of orthogonal polynomials near the edges of their orthogonality interval, scaling as n^{-\alpha} P_n^{(\alpha,\beta)}(\cos(z/n)) \to (z/2)^{-\alpha} J_{\alpha}(z) for Jacobi polynomials, involving the Bessel function J_{\alpha}, with analogous forms for Laguerre and multiple orthogonal cases using generalized hypergeometric functions. These asymptotics connect to local limit theorems in probability by characterizing zero distributions of polynomials, which model spacing in determinantal point processes and random matrices, providing precise local densities for lattice or continuous approximations in central limit settings. In Bayesian nonparametrics, orthogonal polynomial expansions serve as bases for prior distributions, including those constructed via Pólya trees, to achieve posterior consistency in density estimation. For a Pólya tree prior P on [0,1] with density f_P, posterior consistency is established by expanding f_P in orthogonal polynomials \{\phi_j\} on [0,1], ensuring rates like O_p(n^{-1/2} (\log n)^{1/2}) under entropy conditions, which facilitates nonparametric inference for smooth densities. Applications in financial modeling include using orthogonal polynomial expansions to price options under stochastic volatility, capturing volatility smiles observed in market data from the 2000s onward. 
In polynomial stochastic volatility models, such as those where the squared volatility follows a polynomial diffusion and the log-price density is expanded in an orthogonal polynomial basis such as Hermite polynomials, European option prices admit series representations \pi_f = \sum_{n \geq 0} f_n \ell_n, with coefficients from Gaussian mixtures approximating the log-price density; this efficiently fits implied volatility surfaces and computes Greeks, outperforming Fourier methods in non-affine cases.
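The orthogonality mechanics behind Hermite-based density corrections like the Gram-Charlier form above can be illustrated directly. In this sketch the skewness cumulant `k3` is an arbitrary illustrative value, and all integrals against the Gaussian weight are done with Gauss-Hermite (probabilists') quadrature:

```python
import numpy as np
from math import sqrt, pi

# Leading Gram-Charlier correction: p(x) = phi(x) * (1 + (k3/6) He_3(x)).
# Orthogonality of He_k under the normal density makes moment checks immediate.
k3 = 0.3                                                  # illustrative skewness
nodes, weights = np.polynomial.hermite_e.hermegauss(40)   # weight e^{-x^2/2}

def He(n, x):
    e = np.zeros(n + 1); e[n] = 1.0
    return np.polynomial.hermite_e.hermeval(x, e)          # probabilists' He_n

# p(x) / e^{-x^2/2}, so quadrature sums below are integrals of p against powers of x.
density_vals = (1 + k3 / 6 * He(3, nodes)) / sqrt(2 * pi)
total = np.sum(weights * density_vals)                     # integral of p  -> 1
mean = np.sum(weights * nodes * density_vals)              # first moment   -> 0
third = np.sum(weights * nodes**3 * density_vals)          # third moment   -> k3
```

Because He_3 is orthogonal to 1 and to x under \phi, the correction leaves the normalization and mean untouched while injecting exactly the prescribed third cumulant.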

Numerical Methods

Numerical methods for orthogonal polynomials encompass a range of algorithms designed to compute and apply these polynomials efficiently in computational settings, leveraging their recursive structures and orthogonality properties for tasks such as integration and approximation. These techniques are essential in scientific computing, where direct evaluation or construction of high-degree polynomials can be numerically unstable due to ill-conditioning. Key approaches include specialized quadrature rules, transform algorithms, and evaluation schemes that exploit the three-term recurrence relation inherent to orthogonal polynomials. A prominent application is the implementation of Gaussian quadrature, which approximates integrals of the form \int_a^b w(x) f(x) \, dx using nodes and weights derived from orthogonal polynomials associated with the weight function w(x). The Golub-Welsch algorithm computes these nodes as the eigenvalues of the tridiagonal Jacobi matrix formed from the recurrence coefficients of the orthogonal polynomials, with weights obtained from the first components of the corresponding eigenvectors. This method, originally proposed in 1969, transforms the quadrature problem into a symmetric eigenvalue computation, enabling stable calculation for degrees up to several hundred using standard linear algebra routines like the QR algorithm. For instance, for Legendre polynomials on [-1, 1], the Jacobi matrix entries are derived from the recurrence (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x), yielding nodes that interlace and provide exact integration for polynomials up to degree 2n-1. Modern variants accelerate this for symmetric tridiagonal matrices, reducing complexity to O(n^2) while maintaining backward stability. 
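A minimal sketch of the Golub-Welsch construction for Gauss-Legendre rules, checked against NumPy's built-in `leggauss`; the off-diagonal entries sqrt(k^2/(4k^2-1)) come from the monic form of the Legendre recurrence quoted above, and the function name is ours:

```python
import numpy as np

def golub_welsch_legendre(n):
    """Gauss-Legendre nodes/weights via eigenvalues of the Jacobi matrix.
    Monic Legendre polynomials satisfy p_{k+1} = x p_k - b_k p_{k-1} with
    b_k = k^2 / (4k^2 - 1), so the symmetric Jacobi matrix has zero diagonal
    and off-diagonal entries sqrt(b_k)."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)          # sqrt(b_k)
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)               # symmetric eigenproblem
    weights = 2.0 * vecs[0, :]**2                 # mu_0 = integral of w = 2
    return nodes, weights

nodes, weights = golub_welsch_legendre(7)
ref_nodes, ref_weights = np.polynomial.legendre.leggauss(7)
err = max(np.max(np.abs(nodes - ref_nodes)),
          np.max(np.abs(weights - ref_weights)))
```

Both `eigh` and `leggauss` return nodes in ascending order, so the comparison is direct.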
In spectral methods for partial differential equation (PDE) discretization, orthogonal polynomials facilitate high-order approximations by collocating solutions at specific points, such as the Chebyshev-Gauss-Lobatto points (the extrema of Chebyshev polynomials), which include the interval endpoints for boundary enforcement. These points, denoted x_k = \cos\left( \frac{k \pi}{N-1} \right) for k = 0, \dots, N-1 on [-1, 1], form a non-uniform grid that clusters near boundaries, improving resolution for problems with boundary layers. The method expands the solution as u(x) = \sum_{k=0}^N u_k \ell_k(x), where \ell_k(x) are Lagrange basis polynomials interpolated at these nodes, and differentiation matrices are precomputed for efficient time-stepping in hyperbolic or elliptic PDEs. This collocation approach achieves spectral accuracy, with errors decaying exponentially in the polynomial degree N, and is implemented in libraries for fluid dynamics simulations. The zeros or extrema of the polynomials serve as natural collocation nodes, whose orthogonality aids stability analysis. Fast discrete orthogonal polynomial transforms provide an efficient means for series evaluation and coefficient computation, analogous to the fast Fourier transform (FFT) for trigonometric polynomials. These transforms convert between function values at discrete points and expansion coefficients in an orthogonal polynomial basis, achieving O(n \log n) complexity for n points through divide-and-conquer strategies exploiting the recurrence relation. For example, the fast cosine transform underlies Chebyshev expansions, enabling rapid evaluation of sums \sum_{k=0}^n c_k T_k(x) at Chebyshev points. Recent advancements include algorithms for arbitrary orthogonal families, such as Legendre or Jacobi, using butterfly structures to reduce operations from O(n^2) to near-linear, with applications in data analysis and image processing. These methods preserve numerical stability by avoiding explicit high-degree polynomial construction. 
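The cosine-sum relation underlying the fast Chebyshev transform can be written out directly. This sketch uses plain sums rather than an actual DCT call to make the discrete orthogonality visible; the sample function and degree are illustrative choices:

```python
import numpy as np

# Chebyshev coefficients from samples at first-kind Chebyshev points
# x_k = cos(pi (k + 1/2) / N) -- the discrete orthogonality that a
# DCT-based fast transform exploits, written here as explicit cosine sums.
f = np.exp
N = 24
k = np.arange(N)
x = np.cos(np.pi * (k + 0.5) / N)
v = f(x)

c = np.array([2.0 / N * np.sum(v * np.cos(j * np.pi * (k + 0.5) / N))
              for j in range(N)])
c[0] /= 2.0                                    # standard halving of c_0

# The resulting series reproduces f to near machine precision on [-1, 1].
t = np.linspace(-1, 1, 201)
err = np.max(np.abs(f(t) - np.polynomial.chebyshev.chebval(t, c)))
```

Replacing the O(N^2) cosine sums with a length-N DCT gives the O(N log N) transform mentioned in the text, with identical output up to rounding.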
Orthogonal polynomial regression employs these bases for stable least-squares fitting of data, where the orthogonality minimizes multicollinearity among basis functions, leading to uncorrelated coefficient estimates and reduced variance amplification. In practice, precomputed orthogonal polynomials, such as shifted Legendre on [0,1], are used to fit models f(x) \approx \sum_{k=0}^m c_k P_k(x) by solving the normal equations \mathbf{c} = (\mathbf{X}^T \mathbf{W} \mathbf{X})^{-1} \mathbf{X}^T \mathbf{W} \mathbf{y}, with \mathbf{W} diagonal for the weight. This approach enhances conditioning compared to monomial bases, particularly for high degrees, as the Gram matrix is nearly diagonal. Applications include curve fitting in experimental data, where sequential addition of terms via F-tests ensures parsimony. Error analysis in polynomial evaluation highlights the superior conditioning of recurrence-based methods over direct summation, particularly the Clenshaw algorithm, which recursively computes \sum_{k=0}^n c_k P_k(x) using the three-term relation to avoid explicit polynomial storage. For a fixed x, the Clenshaw method exhibits forward error bounded by O(n \epsilon \kappa), where \epsilon is machine precision and \kappa is a modest growth factor depending on the recurrence coefficients, outperforming Horner's method for orthogonal bases by reducing roundoff propagation. In contrast, naive recurrence evaluation from low to high degree can amplify errors for |x| near the interval endpoints due to subtractive cancellation. Rounding error bounds confirm the Clenshaw algorithm's componentwise backward stability for Chebyshev and other classical polynomials, with extensions to derivatives via the extended Clenshaw recurrence. 
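Clenshaw's backward recurrence for a Chebyshev series can be sketched in a few lines and checked against direct evaluation; the helper name `clenshaw_chebyshev` and the random test coefficients are ours:

```python
import numpy as np

def clenshaw_chebyshev(c, x):
    """Evaluate sum_k c[k] T_k(x) by Clenshaw's backward recurrence,
    using the Chebyshev three-term relation T_{k+1} = 2x T_k - T_{k-1}:
    b_k = c_k + 2x b_{k+1} - b_{k+2}, result = c_0 + x b_1 - b_2."""
    b1 = b2 = 0.0
    for ck in c[:0:-1]:                  # c[n], ..., c[1]
        b1, b2 = ck + 2.0 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

rng = np.random.default_rng(0)
c = rng.standard_normal(12)              # arbitrary test coefficients
xs = np.linspace(-1, 1, 7)
err = max(abs(clenshaw_chebyshev(c, x) - np.polynomial.chebyshev.chebval(x, c))
          for x in xs)
```

No Chebyshev polynomial is ever formed explicitly; only the recurrence coefficients enter, which is the storage and stability advantage described above.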
Software libraries like Chebfun implement these numerical methods for high-precision computations with orthogonal polynomials, representing functions as adaptive Chebyshev expansions and supporting operations like quadrature and spectral differentiation to nearly 16-digit accuracy. Ongoing developments as of 2024 integrate these with parallel computing frameworks, though full GPU acceleration remains an active area for scaling to massive datasets in scientific simulations. Chebfun's constructor, for instance, uses discrete orthogonal transforms to build expansions from samples, ensuring robust handling of oscillatory functions.

Generalizations

Multivariate Orthogonal Polynomials

Multivariate orthogonal polynomials extend the classical theory to functions of several variables \mathbf{x} = (x_1, \dots, x_d) \in \mathbb{R}^d, where d \geq 2. They form a complete orthogonal basis in the Hilbert space L^2(\mathbb{R}^d, d\mu) with respect to a positive Borel measure \mu that is finite on compact sets and satisfies the Carleman condition to ensure moment determinacy. Specifically, these polynomials \{P_\alpha : \alpha \in \mathbb{N}_0^d\} are indexed by multi-indices \alpha = (\alpha_1, \dots, \alpha_d) with nonnegative integer components, satisfying the orthogonality relation \int_{\mathbb{R}^d} P_\alpha(\mathbf{x}) P_\beta(\mathbf{x}) \, d\mu(\mathbf{x}) = \delta_{\alpha\beta} h_\alpha, where \delta_{\alpha\beta} is the Kronecker delta, h_\alpha > 0, and each P_\alpha is monic of total degree |\alpha| = \sum_{i=1}^d \alpha_i. The space of all such polynomials is dense in L^2(\mathbb{R}^d, d\mu), and the dimension of the homogeneous subspace of total degree n is \binom{n + d - 1}{d - 1}. A common construction arises from tensor products of univariate orthogonal polynomials when the measure \mu is a product measure d\mu(\mathbf{x}) = \prod_{i=1}^d w(x_i) \, dx_i, where w is a univariate weight function. In this separable case, the multivariate polynomials are given by P_\alpha(\mathbf{x}) = \prod_{i=1}^d p_{\alpha_i}(x_i), where \{p_k\} is the univariate family orthogonal with respect to w. For instance, the multivariate Hermite polynomials, orthogonal on \mathbb{R}^d with respect to the Gaussian measure d\mu(\mathbf{x}) = (2\pi)^{-d/2} e^{-|\mathbf{x}|^2/2} \, d\mathbf{x}, are defined as H_\alpha(\mathbf{x}) = \prod_{i=1}^d H_{\alpha_i}(x_i), where H_k are the univariate probabilists' Hermite polynomials. This tensor product structure preserves key properties like three-term recurrence relations in each variable separately. For rotationally invariant measures, orthogonal invariants play a central role, yielding bases that respect symmetries such as those in spherical or disk geometries. 
A prominent example is the Zernike polynomials on the unit disk in \mathbb{R}^2, orthogonal with respect to the measure d\mu(r,\theta) = \frac{\lambda + 1}{\pi} (1 - r^2)^{\lambda} r \, dr \, d\theta for \lambda > -1, where they form a complete basis indexed by radial degree n and azimuthal frequency m, with P_{n m}(r e^{i\theta}) = R_n^m(r) e^{i m \theta} and R_n^m the radial polynomials satisfying a recurrence relation analogous to the univariate case. These polynomials are invariant under rotations and are widely used in optics for wavefront analysis due to their completeness on the disk. Multivariate orthogonal polynomials are typically graded by total degree, partitioning the space into homogeneous components \Pi_d^n = \operatorname{span}\{ \mathbf{x}^\alpha : |\alpha| = n \}, which facilitates hierarchical approximations and error estimates in L^2-norm. Hyperbolic bases, adapted to hyperbolic cross truncations, provide efficient representations in high dimensions by selecting multi-indices \alpha such that \prod_{i=1}^d (\alpha_i + 1) \leq n, reducing the number of terms from the exponential O(n^d) of full tensor grids to near-linear O(n (\log n)^{d-1}) while retaining approximation rates for analytic functions. This structure underpins sparse polynomial expansions, mitigating the curse of dimensionality. The Askey scheme, which classifies univariate hypergeometric orthogonal polynomials through limiting relations, extends to multiple variables primarily via tensor products or symmetry-adapted constructions on domains like balls and simplices. For example, multivariate Hahn and Wilson polynomials generalize discrete families in the scheme, satisfying multivariable difference equations and maintaining hypergeometric representations. These extensions preserve branching rules and q-analogues, linking to quantum groups and special functions. Recent progress has focused on sparse grid approximations using multivariate orthogonal polynomials for high-dimensional problems, enabling efficient collocation and Galerkin methods. 
Techniques like Smolyak's algorithm combine tensor products with hyperbolic cross index sets to achieve convergence rates independent of dimension for functions with bounded mixed derivatives, as demonstrated in implementations for uncertainty quantification and PDE solving. As of 2023, proceedings from special sessions highlight ongoing trends in approximation theory and analytic properties.
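The index-set counts quoted above, the homogeneous dimension \binom{n+d-1}{d-1} and the much smaller size of hyperbolic cross truncations, are easy to check by enumeration. A small sketch with illustrative values of d and n (function names are ours):

```python
import itertools
from math import comb

def total_degree_indices(d, n):
    """Multi-indices alpha in N_0^d with |alpha| = n (homogeneous component)."""
    return [a for a in itertools.product(range(n + 1), repeat=d) if sum(a) == n]

def hyperbolic_cross(d, n):
    """Multi-indices with prod(alpha_i + 1) <= n (hyperbolic cross truncation)."""
    out = []
    for a in itertools.product(range(n), repeat=d):
        prod = 1
        for ai in a:
            prod *= ai + 1
        if prod <= n:
            out.append(a)
    return out

# Dimension of the degree-n homogeneous component matches binom(n+d-1, d-1).
d, n = 3, 5
homog = total_degree_indices(d, n)

# The hyperbolic cross keeps far fewer indices than the full tensor grid n^d.
cross = hyperbolic_cross(d, 16)
```

For d = 3 the hyperbolic cross with n = 16 retains only a small fraction of the 16^3 tensor-grid indices, which is the sparsity the text attributes to these truncations.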

Multiple and Sobolev Orthogonal Polynomials

Multiple orthogonal polynomials generalize the classical notion by satisfying orthogonality conditions with respect to several measures simultaneously. In the type I case, they consist of vectors of polynomials (A_0(x), \dots, A_{r-1}(x)) of prescribed degrees that are orthogonal to the span of all monomials of lower total degree with respect to each of r measures \mu_1, \dots, \mu_r. For type II multiple orthogonal polynomials, a single polynomial P_n(x) of degree |n| = \sum n_j is orthogonal to polynomials of degree at most n_j - 1 with respect to the j-th measure \mu_j, for j = 1, \dots, r. These polynomials arise in the context of simultaneous rational approximation, where they relate to Hermite-Padé approximants for approximating multiple functions by rational functions. Unlike standard orthogonal polynomials, multiple orthogonal polynomials satisfy higher-order recurrence relations, typically (r+2)-term linear recurrences of the form x P_n(x) = \sum_{k=n-r}^{n+1} \pi_{n,k} P_k(x), where the coefficients \pi_{n,k} depend on the multi-index n. Their zeros are real and simple when the measures are supported on the real line, exhibiting interlacing properties across different multi-indices, and can be characterized as eigenvalues of leading principal submatrices of the recurrence matrix. Nonlinear recurrences and algebraic singularities in the moment matrices can occur for certain multi-indices, leading to non-perfect systems where some polynomials fail to exist. In random matrix theory, multiple orthogonal polynomials have been instrumental in modeling eigenvalue distributions and zero spacings since the early 2000s. They describe determinantal point processes for ensembles like random matrices with external sources, where the Christoffel-Darboux kernel for multiple orthogonal polynomials captures eigenvalue repulsion and spacing statistics. 
Key advances include the derivation of Christoffel-Darboux formulas in 2004 and integral representations for multiple Hermite and multiple Laguerre polynomials in 2005, with ongoing developments through the 2020s linking them to non-intersecting paths and universality in zero spacing, including hypergeometric multiple orthogonal polynomials on the stepline related to Markov chains beyond birth-and-death processes as of 2025. Sobolev orthogonal polynomials are defined with respect to an inner product that incorporates derivatives, typically \langle f, g \rangle_S = \sum_{k=0}^m \int f^{(k)}(x) g^{(k)}(x) w(x) \, dx, where w(x) is a positive weight function and m \geq 1, generalizing orthogonality to the Sobolev spaces H^m. More generally, the inner product may involve multiple measures: \langle f, g \rangle = \int f(x)g(x) \, d\mu_0(x) + \sum_{k=1}^m \int f^{(k)}(x) g^{(k)}(x) \, d\mu_k(x). These polynomials do not satisfy a standard three-term recurrence due to the non-symmetric structure but instead obey higher-order recurrence relations derived via connection formulas with standard families. Their zeros may be real and simple for small perturbation parameters but can become complex or multiple as the derivative orders increase. Applications of Sobolev orthogonal polynomials include the study of indeterminate moment problems, where the orthogonality in Sobolev spaces helps characterize non-unique measures with the same moments up to higher derivatives. They also arise in numerical methods for boundary value problems in partial differential equations, providing better approximation properties in Sobolev norms. An explicit example is the Hermite-Sobolev polynomials on (-\infty, \infty), orthogonal with respect to \langle f, g \rangle_H = \sum_{k=0}^m \lambda_k \int_{-\infty}^{\infty} f^{(k)}(x) g^{(k)}(x) e^{-x^2} \, dx, where \lambda_k > 0; these reduce to classical Hermite polynomials when m=0 and satisfy differential relations like H_n'(x) = 2n H_{n-1}(x). 
The theory was pioneered in the 1960s by Althammer for Sobolev-Legendre polynomials and revived in the 1990s through coherent pairs by Iserles et al. Recent advances as of 2023-2025 include numerical procedures for generating Sobolev orthonormal polynomials via Hessenberg inverse eigenvalue problems and studies on Krein-Sobolev polynomials.
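A Sobolev inner product of the kind defined above can be orthogonalized numerically. This rough Gram-Schmidt sketch works on [-1, 1] with m = 1, weight w = 1, and an arbitrary derivative weight `lam` (all illustrative choices), representing polynomials as NumPy monomial coefficient arrays:

```python
import numpy as np

# Sobolev inner product <f, g>_S = int f g dx + lam * int f' g' dx on [-1, 1].
lam = 1.0
nodes, weights = np.polynomial.legendre.leggauss(40)   # exact for our degrees

def sobolev_inner(p, q):
    fg = np.polynomial.polynomial.polyval(nodes, p) \
       * np.polynomial.polynomial.polyval(nodes, q)
    dp = np.polynomial.polynomial.polyder(p)
    dq = np.polynomial.polynomial.polyder(q)
    dfg = np.polynomial.polynomial.polyval(nodes, dp) \
        * np.polynomial.polynomial.polyval(nodes, dq)
    return np.sum(weights * fg) + lam * np.sum(weights * dfg)

basis = []
for n in range(5):                       # orthogonalize 1, x, x^2, ...
    p = np.zeros(n + 1); p[n] = 1.0
    for q in basis:
        p = np.polynomial.polynomial.polysub(
            p, sobolev_inner(p, q) / sobolev_inner(q, q) * q)
    basis.append(p)

off = max(abs(sobolev_inner(basis[i], basis[j]))
          for i in range(5) for j in range(5) if i != j)
```

The resulting polynomials are mutually orthogonal in the Sobolev sense but, as the text notes, do not obey a standard three-term recurrence.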

Matrix-Valued and Quantum Orthogonal Polynomials

Matrix-valued orthogonal polynomials generalize the scalar case by considering sequences of polynomials P_n(x) where each P_n(x) is a matrix-valued polynomial, typically of fixed size p \times p, orthogonal with respect to a positive definite matrix-valued measure dM(x) on the real line. The inner product is defined as \langle P, Q \rangle = \int P(x)^* \, dM(x) \, Q(x), where ^* denotes the conjugate transpose (Hermitian adjoint), ensuring \langle P_m, P_n \rangle = 0 for m \neq n and positive definiteness of the norms \langle P_n, P_n \rangle. These polynomials arise in contexts such as multivariate analysis, random matrix theory, and representations of Lie groups like SU(2), where the weight matrix M(x) encodes multi-dimensional structure. Seminal developments include the extension of Favard's theorem to the matrix setting, confirming that such sequences are uniquely determined by their moments or recurrences. A key property is the matrix three-term recurrence relation satisfied by monic matrix-valued orthogonal polynomials: x P_n(x) = P_{n+1}(x) + B_n P_n(x) + C_n P_{n-1}(x), where B_n and C_n are p \times p matrices, with B_n Hermitian and C_n positive definite in the normalized setting, and the leading coefficient of each monic P_n equal to the identity matrix. This recurrence mirrors the scalar case but involves non-commuting matrix coefficients, leading to richer spectral structure, such as connections to block Jacobi matrices whose eigenvalues relate to the polynomials' zeros. Regarding zeros, the roots of \det P_n(x) = 0 (the zeros in the matrix sense) exhibit interlacing properties: the zeros associated with P_n(x) interlace those of P_{n-1}(x), generalizing the classical scalar interlacing property, with all zeros real and simple under standard positivity conditions on the measure. This interlacing ensures the polynomials' zeros form a chain suitable for quadrature and approximation in matrix settings. Recent research as of 2024-2025 includes matrix-valued polynomials from random walks and bispectral discrete constructions. 
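The connection between block Jacobi truncations and the zeros of matrix-valued orthogonal polynomials can be probed numerically through eigenvalue interlacing of principal submatrices (Cauchy's theorem). In this sketch the random Hermitian diagonal blocks and identity off-diagonal blocks are illustrative assumptions:

```python
import numpy as np

# Removing the last p x p block from a Hermitian block tridiagonal (block
# Jacobi) truncation leaves a principal submatrix, so Cauchy interlacing
# gives lambda_k(J_n) <= mu_k(J_{n-1}) <= lambda_{k+p}(J_n).
rng = np.random.default_rng(1)
p, nblocks = 2, 5

def rand_hermitian(p):
    A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
    return (A + A.conj().T) / 2

B = [rand_hermitian(p) for _ in range(nblocks)]       # diagonal blocks B_n
J = np.zeros((p * nblocks, p * nblocks), dtype=complex)
for i in range(nblocks):
    J[p*i:p*i+p, p*i:p*i+p] = B[i]
    if i + 1 < nblocks:                               # off-diagonal blocks
        J[p*i:p*i+p, p*(i+1):p*(i+1)+p] = np.eye(p)
        J[p*(i+1):p*(i+1)+p, p*i:p*i+p] = np.eye(p)

full = np.linalg.eigvalsh(J)                          # ascending eigenvalues
sub = np.linalg.eigvalsh(J[:-p, :-p])                 # drop the last block
ok = all(full[k] - 1e-12 <= sub[k] <= full[k + p] + 1e-12
         for k in range(len(sub)))
```

This is the generalized (gap-p) interlacing that the matrix-valued zero chains described above inherit from their block Jacobi operators.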
Quantum orthogonal polynomials, or q-orthogonal polynomials, introduce a deformation parameter q (with 0 < |q| < 1) to classical families, replacing ordinary differences and integrals with q-analogs, often expressed via basic hypergeometric series {}_r\phi_s. The Al-Salam–Chihara polynomials Q_n(x; a, b; q), for instance, form a two-parameter subfamily of the Askey–Wilson polynomials, satisfying orthogonality on the real line with respect to a weight involving q-shifted factorials, and admit q-hypergeometric representations such as Q_n(x; a, b; q) = \frac{(ab; q)_n}{a^n} \, {}_3\phi_2(q^{-n}, a e^{i\theta}, a e^{-i\theta}; ab, 0; q, q), where x = \cos \theta. These polynomials capture deformed symmetries in quantum mechanics and appear in representations of q-deformed algebras. The Askey–Wilson polynomials stand as the most general q-analog in the Askey scheme, encompassing all other q-families as limits, with explicit form p_n(\cos \theta; a,b,c,d \mid q) = {}_4\phi_3(\dots ; q, q), orthogonal on [-1,1] with respect to a weight built from products of q-shifted factorials (a_i e^{\pm i\theta}; q)_\infty in the parameters a, b, c, d; they possess quantum group symmetry, realizable as zonal spherical functions on the quantum SU_q(2) group. Recent applications of q-orthogonal polynomials extend to quantum integrable systems, particularly in solvable discrete quantum mechanics models post-2010, where they form the main components of eigenfunctions for shape-invariant potentials deformed by q, such as q-Racah and continuous q-Hahn families, facilitating exact solutions via quantum algebra techniques. These deformations preserve orthogonality and recurrence properties while incorporating quantum dilogarithms for normalization, bridging special functions with integrable hierarchies like the q-Toda lattice. As of 2025, advances include their use in entanglement Hamiltonians for free-fermion chains and in multivariate generalizations.

Skew-Orthogonal Polynomials

Skew-orthogonal polynomials form a sequence of polynomials \{p_n(x)\}_{n=0}^\infty defined with respect to a skew-symmetric inner product \langle f, g \rangle = -\langle g, f \rangle on a suitable space of functions, such as polynomials over the real line equipped with a weight function w(x). Unlike standard orthogonal polynomials, the skew-inner product vanishes for pairs of polynomials of the same parity (both even or both odd degree), leading to the conditions \langle p_{2m}, p_{2n} \rangle = \langle p_{2m+1}, p_{2n+1} \rangle = 0 for all m, n, while \langle p_{2m}, p_{2n+1} \rangle = s_n \delta_{mn} with nonzero skew-norms s_n. A common form of the skew-inner product, arising in the symplectic (β = 4) ensembles of random matrix theory, is \langle f, g \rangle = \int [f(x) g'(x) - f'(x) g(x)] w(x) \, dx, which is manifestly antisymmetric and often normalized by a factor of 1/2. This structure arises naturally in antisymmetric settings, and the polynomials are typically constructed separately for even and odd degrees to handle the inherent degeneracy: each odd-degree polynomial is determined only up to adding a multiple of the preceding even-degree one. Construction of skew-orthogonal polynomials can be achieved through a skew-symmetric analogue of the Gram-Schmidt process applied to a basis such as the monomials, or more explicitly via Pfaffians of the moment matrix associated with the skew-inner product. The moment matrix M, with entries M_{jk} = \langle x^j, x^k \rangle, is skew-symmetric, and the monic skew-orthogonal polynomial of degree n can be written as a ratio of Pfaffians, p_n(x) = \frac{1}{\tau_n} \operatorname{Pf}(M_{n,x}), where \tau_n = \operatorname{Pf}(M_n) is the Pfaffian of a leading principal submatrix of even size and M_{n,x} is that submatrix bordered by a row and column of moments involving powers of x. For classical weights, explicit expressions can be derived in terms of standard orthogonal polynomials; for instance, even-degree polynomials are often obtained by quadratic transformations of ordinary orthogonal ones. This Pfaffian-based method unifies the construction across weights such as the Gaussian or Laguerre weights, and yields recurrence relations adapted to the skew symmetry.
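As a concrete illustration, take the β = 4 skew product \langle f, g \rangle = \int_{\mathbb{R}} [f(x) g'(x) - f'(x) g(x)] e^{-x^2} \, dx. Its moment matrix has the closed form M_{jk} = (k - j)\, m_{j+k-1} with m_p = \int x^p e^{-x^2} dx, and the skew-orthogonality conditions reduce to small linear systems. The sketch below is an assumption-laden illustration (the Gaussian weight e^{-x^2} and the gauge choice of zeroing the x^{n-1} coefficient for odd n are my choices, not from the text):

```python
import numpy as np
from math import gamma

# Skew moment matrix for <f,g> = int (f g' - f' g) e^{-x^2} dx:
# M_{jk} = <x^j, x^k> = (k - j) * m_{j+k-1}, with Gaussian moments m_p.
N = 6
def moment(p):
    return 0.0 if p % 2 else gamma((p + 1) / 2)  # m_{2s} = Gamma(s + 1/2)

M = np.array([[(k - j) * moment(j + k - 1) if j + k >= 1 else 0.0
               for k in range(N)] for j in range(N)])
assert np.allclose(M, -M.T)  # skew-symmetric, as required

def skew_poly(n):
    """Monic skew-orthogonal p_n: <p_n, x^j> = 0 for j < 2*(n//2);
    for odd n the x^{n-1} coefficient is fixed to 0 (a gauge choice)."""
    free = list(range(n - 1)) if n % 2 else list(range(n))
    conds = list(range(2 * (n // 2)))
    c = np.zeros(N)
    c[n] = 1.0
    if conds:
        A = np.array([[M[i, j] for i in free] for j in conds])
        c[free] = np.linalg.solve(A, [-M[n, j] for j in conds])
    return c

def skew(cf, cg):
    """Skew product of two polynomials given as coefficient vectors."""
    return cf @ M @ cg

p = [skew_poly(n) for n in range(4)]
assert abs(skew(p[0], p[2])) < 1e-9          # even-even pairs vanish
assert abs(skew(p[1], p[3])) < 1e-9          # odd-odd pairs vanish
assert abs(skew(p[0], p[3])) < 1e-9          # off-diagonal even-odd pair
assert skew(p[0], p[1]) > 0 and skew(p[2], p[3]) > 0  # skew-norms s_n
print("skew-orthogonality verified")
```

For this weight the first few monic polynomials come out as p_0 = 1, p_1 = x, p_2 = x^2 + 1/2, p_3 = x^3 - (3/2)x, and only the paired products \langle p_{2n}, p_{2n+1} \rangle survive.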
In random matrix theory, skew-orthogonal polynomials play a central role in describing the eigenvalue statistics of the orthogonal (β = 1) and symplectic (β = 4) ensembles, such as the Gaussian Symplectic Ensemble (GSE), where correlations are governed by Pfaffian point processes rather than determinantal ones. The n-point correlation functions are Pfaffians of a 2n × 2n skew-symmetric matrix assembled from a 2 × 2 matrix kernel built out of the skew-orthogonal polynomials and their derivatives; in one common convention, K(x_i, x_j) = \begin{pmatrix} S(x_i, x_j) & D(x_i, x_j) \\ -D(x_j, x_i) & -S(x_j, x_i) \end{pmatrix}, with S and D involving weighted sums of products of the p_k and their derivatives. This framework captures the repulsion and pairing effects in symplectic settings, contrasting with the determinantal unitary case (β = 2). A representative example is the skew-Hermite polynomials, associated with the Gaussian weight w(x) = e^{-x^2/2} on \mathbb{R} and relevant to the GSE. These polynomials satisfy the skew-inner product \langle f, g \rangle_H = \int_{-\infty}^\infty [f(x) g'(x) - f'(x) g(x)] e^{-x^2/2} \, dx, and their explicit forms are finite combinations of the standard Hermite polynomials H_n: the even-degree polynomials are even in x, while the odd-degree ones are x times even functions. The skew-norms s_n = \langle p_{2n}, p_{2n+1} \rangle_H are positive and grow factorially, reflecting the underlying moment structure. Similar constructions exist for skew-Laguerre and skew-Jacobi polynomials, adapted to half-line or bounded-interval supports. The zeros of skew-orthogonal polynomials exhibit pairing due to the antisymmetric product, often appearing in conjugate pairs or placed symmetrically about the origin, which aligns with the double (Kramers) degeneracy of eigenvalues in symplectic ensembles. Asymptotically, for large degrees, the roots distribute according to the equilibrium measure of the weight, with edge behavior described by Painlevé transcendents in the RMT context, such as the Hastings-McLeod solution of Painlevé II governing the soft-edge scaling of the GSE.
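The Pfaffian at the heart of these correlation formulas can be computed by recursive expansion along the first row, Pf(A) = \sum_{j} (-1)^{j} a_{1j} \operatorname{Pf}(A_{\hat{1}\hat{j}}). The sketch below is a naive O(n!!) recursion for illustration only (not how large kernels are evaluated in practice); it verifies the classical identity \operatorname{Pf}(A)^2 = \det(A) on a random skew-symmetric matrix.

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix via expansion along the first row."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0          # odd-dimensional skew-symmetric matrices have Pf = 0
    if n == 2:
        return A[0, 1]
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        minor = A[np.ix_(rest, rest)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(minor)
    return total

# Sanity check on a 2x2 block, then the identity Pf(A)^2 = det(A).
assert pfaffian(np.array([[0.0, 2.0], [-2.0, 0.0]])) == 2.0

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B - B.T                 # random 6x6 skew-symmetric matrix
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
print("Pf(A)^2 == det(A) verified")
```

For a 4 × 4 matrix this expansion reproduces the familiar formula Pf(A) = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}.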
This pairing leads to enhanced level repulsion at short distances compared to the orthogonal (β = 1) case. Recent developments as of 2023-2025 include Christoffel transformations for partial-skew-orthogonal polynomials and skew-odd orthogonal characters constructed via vertex operators.
