Orthogonal functions

Orthogonal functions are a fundamental concept in mathematics, particularly in functional analysis and approximation theory, consisting of a set of functions that are pairwise orthogonal with respect to an inner product defined on a suitable function space, such that the inner product of any two distinct functions is zero. This orthogonality condition generalizes the notion of perpendicular vectors from finite-dimensional spaces to infinite-dimensional Hilbert spaces, where the inner product is typically given by \langle f, g \rangle = \int_a^b f(x) \overline{g(x)} \, dx over an interval [a, b], or a real-valued variant \int_a^b f(x) g(x) \, dx = 0 for distinct non-zero functions f and g. A collection of functions \{\phi_n\} is said to be mutually orthogonal if \langle \phi_m, \phi_n \rangle = 0 for all m \neq n, and is often normalized to form an orthonormal set in which \langle \phi_n, \phi_n \rangle = 1 for each n. If the set is complete (meaning it spans the entire space), any function in the space can be uniquely expanded as a series \sum c_n \phi_n(x), with coefficients c_n = \langle f, \phi_n \rangle for an orthonormal set.

Prominent examples include the trigonometric functions \{\cos(n\pi x / L), \sin(n\pi x / L)\} on intervals like [-L, L] or [0, L], which satisfy orthogonality integrals yielding zero for distinct indices and positive norms for matching indices, as well as the complex exponentials \{e^{i n x}\} on [-\pi, \pi]. Orthogonal functions underpin key techniques in analysis, such as Fourier series expansions for representing periodic functions and solving boundary value problems in partial differential equations via separation of variables. Other classical families, like the Legendre, Hermite, and Laguerre polynomials, provide orthogonal bases for expansions on specific intervals or weight functions, enabling efficient approximations in physics, engineering, and numerical methods. Properties such as Parseval's identity, which equates the energy of a function to the sum of squared coefficients in its orthogonal expansion (\int |f(x)|^2 \, dx = \sum |c_n|^2), highlight their role in preserving norms and facilitating energy conservation in signal processing and related applications.
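As a quick illustration of the defining property (a numerical sketch added here, not taken from any source), the following Python snippet checks the orthonormality of the complex exponentials e^{inx}/\sqrt{2\pi} on [-\pi, \pi] by approximating the inner-product integrals on a uniform grid; the grid size and tolerance are arbitrary choices.

```python
import numpy as np

# Check numerically that phi_n(x) = exp(i n x) / sqrt(2*pi) are orthonormal
# on [-pi, pi]: <phi_m, phi_n> = delta_mn (up to quadrature error).
N = 4096
x = np.linspace(-np.pi, np.pi, N, endpoint=False)   # uniform grid over one period
dx = 2 * np.pi / N

def phi(n):
    return np.exp(1j * n * x) / np.sqrt(2 * np.pi)

for m in range(-3, 4):
    for n in range(-3, 4):
        ip = np.sum(phi(m) * np.conj(phi(n))) * dx   # Riemann-sum inner product
        expected = 1.0 if m == n else 0.0
        assert abs(ip - expected) < 1e-9
print("exp(i n x)/sqrt(2 pi) form an orthonormal family on [-pi, pi]")
```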

Fundamentals

Definition of orthogonality

In finite-dimensional Euclidean spaces, two vectors \mathbf{u} and \mathbf{v} are orthogonal if their dot product \mathbf{u} \cdot \mathbf{v} = 0, meaning they are perpendicular and meet at a right angle. This concept provides intuition for orthogonality as a measure of geometric independence or non-overlap in direction. For functions, the notion extends analogously to infinite-dimensional spaces: two functions f and g are orthogonal over an interval [a, b] if the integral \int_a^b f(x) g(x) \, dx = 0. This condition indicates that the functions do not "overlap" in a weighted average sense across the interval, generalizing the vector case without assuming finite dimensions. In broader mathematical frameworks, orthogonality is defined within inner product spaces, where two elements f and g (which may be functions or vectors) satisfy \langle f, g \rangle = 0. Geometrically, this implies that the angle between f and g is \pi/2 radians, as the cosine of the angle is \cos \theta = \frac{\langle f, g \rangle}{\|f\| \|g\|} = 0, preserving the perpendicularity interpretation from finite dimensions.
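To make the integral criterion concrete, here is a small numerical sketch (an added illustration, not part of the original text) using scipy.integrate.quad; the particular functions and the interval [-\pi, \pi] are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

# Orthogonality over [a, b] = [-pi, pi] via the defining integral
# <f, g> = int_a^b f(x) g(x) dx.
a, b = -np.pi, np.pi
inner = lambda f, g: quad(lambda x: f(x) * g(x), a, b)[0]

print(inner(np.sin, np.cos))                   # ~0: sin and cos are orthogonal
print(inner(np.sin, lambda x: np.sin(2 * x)))  # ~0: different frequencies
print(inner(np.sin, np.sin))                   # = pi: <sin, sin> = ||sin||^2 > 0

# "Angle" between the non-orthogonal functions f(x) = x and g(x) = x^3:
f, g = (lambda x: x), (lambda x: x ** 3)
cos_theta = inner(f, g) / np.sqrt(inner(f, f) * inner(g, g))
print(np.degrees(np.arccos(cos_theta)))        # strictly between 0 and 90 degrees
```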

Inner products in function spaces

In function spaces, an inner product provides a pairing that generalizes the dot product from finite-dimensional vector spaces to infinite-dimensional settings, enabling the definition of orthogonality, norms, and distances between functions. The most fundamental example is the space L^2(\Omega) of square-integrable functions over a domain \Omega, where the inner product measures the "overlap" between functions via integration. For real-valued functions in L^2([a, b]), the standard inner product is defined as \langle f, g \rangle = \int_a^b f(x) g(x) \, dx, where the integral exists and is finite for f, g \in L^2([a, b]). This form satisfies the axioms of an inner product: linearity in each argument, for example \langle f, \alpha g + \beta h \rangle = \alpha \langle f, g \rangle + \beta \langle f, h \rangle for scalars \alpha, \beta; symmetry, \langle f, g \rangle = \langle g, f \rangle; and positive definiteness, \langle f, f \rangle \geq 0 with equality if and only if f = 0. Common finite domains include [-1, 1], as used for the Legendre polynomials, or [0, \pi] for Fourier sine series.

In the complex case, for functions in L^2(\Omega) with complex values, the inner product incorporates the complex conjugate to ensure conjugate symmetry: \langle f, g \rangle = \int_\Omega f(x) \overline{g(x)} \, dx. This adjustment yields linearity in the first argument (with conjugate-linearity in the second) and conjugate symmetry, \langle g, f \rangle = \overline{\langle f, g \rangle}, while maintaining positive definiteness via \langle f, f \rangle = \int_\Omega |f(x)|^2 \, dx > 0 for f \not\equiv 0. For infinite domains like \Omega = (-\infty, \infty), as in the case of Hermite functions, the integral is over the entire real line, requiring the functions to decay sufficiently for square-integrability.

Weighted inner products extend the standard form by incorporating a positive weight function w(x) > 0 to emphasize certain regions of the domain, which is particularly useful for orthogonal polynomials on specific intervals. The general form is \langle f, g \rangle = \int_a^b f(x) g(x) w(x) \, dx for real functions, or with \overline{g(x)} for complex cases, where w(x) ensures the integral defines a valid inner product satisfying the same axioms: positivity, linearity, and (conjugate) symmetry. Examples include w(x) = 1 on [-\pi, \pi] for trigonometric functions or w(x) = e^{-x^2} on (-\infty, \infty) for Hermite polynomials, adapting the inner product to the natural measure of the space. These weights preserve the structure of Hilbert spaces when the resulting norm is complete.
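The weighted form can be checked numerically as well; the sketch below (an added example whose choice of the Gaussian weight and of the first two Hermite polynomials is an assumption for illustration) evaluates \langle f, g \rangle_w = \int f g \, w \, dx with scipy.

```python
import numpy as np
from scipy.integrate import quad

# Weighted inner product <f, g>_w = int f(x) g(x) w(x) dx over (-inf, inf),
# here with the Gaussian weight w(x) = exp(-x^2) used for Hermite polynomials.
def inner_w(f, g, w, a=-np.inf, b=np.inf):
    return quad(lambda x: f(x) * g(x) * w(x), a, b)[0]

w = lambda x: np.exp(-x ** 2)
H1 = lambda x: 2 * x             # Hermite polynomial H_1
H2 = lambda x: 4 * x ** 2 - 2    # Hermite polynomial H_2

print(inner_w(H1, H2, w))        # ~0: orthogonal under the Gaussian weight
print(inner_w(H1, H1, w))        # = 2*sqrt(pi) > 0 (squared norm of H_1)
```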

Normalization and orthonormal sets

In inner product spaces of functions, an orthogonal set \{\phi_n\} can be normalized to form an orthonormal set by scaling each function by the reciprocal of its norm, defined as \|\phi_n\| = \sqrt{\langle \phi_n, \phi_n \rangle}. Thus, the normalized functions are given by \psi_n = \frac{\phi_n}{\|\phi_n\|}, ensuring that the norm of each \psi_n is unity. This process preserves orthogonality while standardizing the scale, which simplifies computations in series expansions and projections. An orthonormal set \{\psi_n\} satisfies \langle \psi_m, \psi_n \rangle = \delta_{mn}, where \delta_{mn} is the Kronecker delta, equal to 1 if m = n and 0 otherwise. This property implies that the functions are mutually orthogonal and each has unit norm, providing a convenient basis analogous to the standard basis in finite-dimensional spaces. The normalization step is essential for deriving coefficients in function expansions without additional scaling factors.

To construct an orthonormal set from a linearly independent set of functions, the Gram-Schmidt process can be applied iteratively. Beginning with the first function \psi_1 = \frac{\phi_1}{\|\phi_1\|}, each subsequent \psi_k (for k \geq 2) is obtained by subtracting from \phi_k its projections onto the previous orthonormal functions \psi_1, \dots, \psi_{k-1}, yielding \tilde{\psi}_k = \phi_k - \sum_{j=1}^{k-1} \langle \phi_k, \psi_j \rangle \psi_j, and then normalizing \psi_k = \frac{\tilde{\psi}_k}{\|\tilde{\psi}_k\|}. This algorithm ensures orthogonality at each step and produces an orthonormal basis for the span of the original set in Hilbert spaces.

Orthonormal bases play a key role in representing functions within the space as infinite linear combinations, where any f expands as f = \sum_n c_n \psi_n with coefficients c_n = \langle f, \psi_n \rangle. This facilitates the analysis of function properties through their series components and underpins techniques like Fourier analysis. The unit norm ensures that the coefficients directly measure the contribution of each basis element without scaling artifacts.
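The procedure translates directly into code. The sketch below (an added illustration; the choice of monomials on [-1, 1] and reliance on scipy quadrature are assumptions) applies Gram-Schmidt to 1, x, x^2, x^3 with the standard L^2 inner product, producing functions proportional to the Legendre polynomials but rescaled to unit norm.

```python
import numpy as np
from scipy.integrate import quad

# Gram-Schmidt on the monomials 1, x, x^2, x^3 over [-1, 1] with the standard
# L^2 inner product.  The output is proportional to the Legendre polynomials,
# rescaled to unit norm, so psi_k(1) = sqrt((2k + 1)/2).
inner = lambda f, g: quad(lambda x: f(x) * g(x), -1.0, 1.0)[0]

def gram_schmidt(funcs):
    ortho = []
    for phi in funcs:
        coeffs = [inner(phi, q) for q in ortho]      # projections <phi_k, psi_j>
        prev = list(ortho)
        tilde = lambda x, phi=phi, c=coeffs, prev=prev: (
            phi(x) - sum(cj * q(x) for cj, q in zip(c, prev)))
        nrm = np.sqrt(inner(tilde, tilde))
        ortho.append(lambda x, t=tilde, n=nrm: t(x) / n)
    return ortho

monomials = [lambda x, k=k: x ** k for k in range(4)]
for k, psi_k in enumerate(gram_schmidt(monomials)):
    print(k, psi_k(1.0), np.sqrt((2 * k + 1) / 2))   # the two columns agree
```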

Properties

Basic properties

Orthogonal sets of functions in an inner product space possess several fundamental algebraic properties that facilitate their use in approximations and expansions. A key such property is linear independence: if \{\phi_n\} is an orthogonal set of nonzero functions, then the only finite linear combination satisfying \sum c_n \phi_n = 0 is the trivial one with all c_n = 0. This independence arises because taking the inner product of the combination with any \phi_k yields c_k \langle \phi_k, \phi_k \rangle = 0, and since \langle \phi_k, \phi_k \rangle > 0, it follows that c_k = 0. Moreover, for any two elements u = \sum a_n \phi_n and v = \sum b_m \phi_m in the span of a finite orthogonal set, the inner product simplifies to \langle u, v \rangle = \sum a_n \overline{b_n} \|\phi_n\|^2, preserving the diagonal structure inherent to the orthogonality of the basis functions.

In the context of approximations, the coefficients in a finite expansion of a function f onto the span of an orthogonal set \{\phi_1, \dots, \phi_N\} are uniquely determined by c_n = \frac{\langle f, \phi_n \rangle}{\|\phi_n\|^2} for n = 1, \dots, N. This uniqueness stems from the linear independence of the set, ensuring that the representation P_N f = \sum_{n=1}^N c_n \phi_n is the only element of the span that matches the projections onto each \phi_n. For orthonormal sets (where \|\phi_n\| = 1 for all n), the formula simplifies to c_n = \langle f, \phi_n \rangle, highlighting the convenience of orthonormality.

The partial expansion P_N f also represents the orthogonal projection of f onto the closed subspace spanned by \{\phi_1, \dots, \phi_N\} in the L^2 space. By the projection theorem in Hilbert spaces, this projection is the unique element in the subspace that minimizes the L^2 distance \|f - P_N f\|, with the residual f - P_N f being orthogonal to every element of the subspace (i.e., \langle f - P_N f, \phi_n \rangle = 0 for n = 1, \dots, N). Such projections provide the best approximation in the least-squares sense within finite-dimensional orthogonal spans.

A quantitative bound on the approximation quality is given by Bessel's inequality: for any f in the space and any finite orthogonal set \{\phi_1, \dots, \phi_N\}, \sum_{n=1}^N \frac{|\langle f, \phi_n \rangle|^2}{\|\phi_n\|^2} \leq \|f\|^2, with equality holding if and only if f lies in the span of \{\phi_1, \dots, \phi_N\}. This inequality follows from the Pythagorean theorem applied to the orthogonal decomposition f = P_N f + (f - P_N f), yielding \|f\|^2 = \|P_N f\|^2 + \|f - P_N f\|^2 \geq \|P_N f\|^2, where \|P_N f\|^2 = \sum_{n=1}^N |c_n|^2 \|\phi_n\|^2 = \sum_{n=1}^N \frac{|\langle f, \phi_n \rangle|^2}{\|\phi_n\|^2}. For orthonormal sets, the inequality reduces to \sum_{n=1}^N |\langle f, \phi_n \rangle|^2 \leq \|f\|^2.
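Bessel's inequality can be observed numerically; in the sketch below (an added example with the assumed choice f(x) = x and the orthonormal sines on [-\pi, \pi]), the partial sums of squared coefficients stay below \|f\|^2 = 2\pi^3/3 and approach that bound as more terms are included.

```python
import numpy as np
from scipy.integrate import quad

# Bessel's inequality for f(x) = x on [-pi, pi] with the orthonormal set
# psi_n(x) = sin(n x)/sqrt(pi):  sum_n |<f, psi_n>|^2 <= ||f||^2 = 2*pi^3/3,
# with the partial sums creeping up toward the bound as N grows.
f = lambda x: x
inner = lambda g, h: quad(lambda x: g(x) * h(x), -np.pi, np.pi, limit=200)[0]

norm_f_sq = inner(f, f)                    # = 2*pi^3/3 (about 20.67)
partial = 0.0
for n in range(1, 21):
    psi_n = lambda x, n=n: np.sin(n * x) / np.sqrt(np.pi)
    partial += inner(f, psi_n) ** 2        # |c_n|^2 with c_n = <f, psi_n>
    assert partial <= norm_f_sq + 1e-9     # Bessel's inequality holds

print(partial, norm_f_sq)                  # partial sum approaches the bound
```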

Completeness and expansions

In a Hilbert space H equipped with an inner product \langle \cdot, \cdot \rangle, an orthogonal set \{\phi_n\} is said to be complete (or total) if its span is dense in H. Equivalently, the set is complete if the only element f \in H satisfying \langle f, \phi_n \rangle = 0 for all n is the zero element f = 0. This characterization ensures that the orthogonal set can approximate any function in the space arbitrarily closely through finite linear combinations, forming the foundation for representing elements of H via infinite series expansions.

The paradigm for such expansions is the Fourier series in the space L^2[a, b], where the complete orthogonal set of trigonometric functions \{1, \cos(2\pi n x / (b-a)), \sin(2\pi n x / (b-a)) \mid n = 1, 2, \dots \} allows any f \in L^2[a, b] to be represented as f(x) = \sum_{n=0}^\infty c_n \phi_n(x), with coefficients c_n = \langle f, \phi_n \rangle / \|\phi_n\|^2. The Riesz-Fischer theorem establishes the completeness of this trigonometric system in L^2, guaranteeing that the partial sums converge to f in the L^2 norm. This result extends to general complete orthonormal bases in separable Hilbert spaces, where every element admits a unique generalized Fourier expansion.

Criteria for completeness often rely on the density of the span of the orthogonal set within the ambient space. For instance, in L^2[a, b], the Weierstrass approximation theorem implies that polynomials are dense in the continuous functions C[a, b] under the uniform norm, and hence dense in L^2[a, b] under the L^2 norm. Since orthogonal polynomials (such as Legendre or Chebyshev polynomials) form a basis for the space of all polynomials, their span is also dense, establishing completeness. More generally, the Stone-Weierstrass theorem provides a framework for verifying completeness of various orthogonal systems by showing that the algebra they generate is dense in C(K) for compact K.

For a complete orthonormal set \{\phi_n\} in a Hilbert space H, the expansion of f \in H exhibits mean-square convergence, meaning that the partial sums s_N(f) = \sum_{n=1}^N \langle f, \phi_n \rangle \phi_n satisfy \lim_{N \to \infty} \|f - s_N(f)\|_H^2 = 0, where \|\cdot\|_H denotes the norm induced by the inner product. This L^2-convergence holds because the orthonormal system is complete and H itself is complete, ensuring that the tail of the series vanishes in norm. In the context of Fourier series, this mean-square convergence is a direct consequence of the isometric correspondence between L^2 and \ell^2 via the Fourier coefficients.
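Mean-square convergence can be illustrated with a complete orthogonal family of polynomials; the sketch below (an added example, with f(x) = e^x on [-1, 1] as an assumed test function) projects f onto the Legendre polynomials and prints the decreasing L^2 error of the partial sums.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

# Mean-square convergence: project f(x) = exp(x) onto the Legendre
# polynomials P_0..P_N on [-1, 1] (a complete orthogonal set there) and
# watch the L^2 error of the partial sums s_N tend to zero.
f = np.exp
inner = lambda g, h: quad(lambda x: g(x) * h(x), -1.0, 1.0)[0]

def partial_sum(N):
    # c_n = <f, P_n> / ||P_n||^2  with  ||P_n||^2 = 2/(2n + 1)
    coeffs = [inner(f, Legendre.basis(n)) * (2 * n + 1) / 2 for n in range(N + 1)]
    return Legendre(coeffs)

for N in range(6):
    s_N = partial_sum(N)
    err_sq = quad(lambda x: (f(x) - s_N(x)) ** 2, -1.0, 1.0)[0]
    print(N, err_sq)   # squared L^2 error decreases rapidly toward zero
```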

Parseval's theorem

Parseval's identity, also known as Parseval's theorem, establishes a fundamental relationship between the norm of a function and the coefficients in its orthogonal expansion. For a complete orthonormal system \{\psi_n\} in a Hilbert space, and any f in that space, the identity states that \|f\|^2 = \sum_n |\langle f, \psi_n \rangle|^2, where \langle \cdot, \cdot \rangle denotes the inner product. This equality holds precisely when the orthonormal system is complete, meaning its span is dense in the space.

A proof sketch relies on Bessel's inequality, which for any orthonormal system gives \|f\|^2 \geq \sum_n |\langle f, \psi_n \rangle|^2, with equality under completeness. To show the forward direction, density of the span implies that for any \epsilon > 0, there exists a finite linear combination \sum_{i \in I} \lambda_i \psi_i such that \|f - \sum_{i \in I} \lambda_i \psi_i\|^2 < \epsilon; substituting the optimal coefficients \lambda_i = \langle f, \psi_i \rangle and applying orthogonality yields \|f\|^2 \leq \sum_n |\langle f, \psi_n \rangle|^2 + \epsilon, so equality follows as \epsilon \to 0. The reverse direction uses the identity to demonstrate density of the span.

The Plancherel theorem extends Parseval's identity to the continuous setting of Fourier transforms on L^2(\mathbb{R}), stating that for functions f, g \in L^2(\mathbb{R}), \int_{-\infty}^{\infty} f(x) \overline{g(x)} \, dx = \int_{-\infty}^{\infty} \hat{f}(\xi) \overline{\hat{g}(\xi)} \, d\xi, where \hat{f} is the Fourier transform; specializing to f = g shows that the transform preserves the L^2 norm. This is the continuous analogue of Parseval's identity for Fourier series.

In signal processing, Parseval's identity interprets the squared norm \|f\|^2 as the total energy of the signal f, equating it to the sum of the squared magnitudes of its orthogonal expansion coefficients, thus conserving energy across domains. This principle ensures that transformations like the Fourier series preserve the overall energy content, facilitating analysis of frequency components without loss.
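A discrete analogue is easy to verify numerically: for the DFT, which is a scaled unitary change of basis, the signal energy computed in the time domain matches the energy computed from the transform coefficients. The snippet below is an added illustration; the signal length and random test signal are arbitrary choices.

```python
import numpy as np

# Discrete Parseval check with the FFT: sum |x[n]|^2 equals
# (1/N) * sum |X[k]|^2 for X = DFT(x), because the DFT is, up to a
# 1/sqrt(N) scaling, an expansion in an orthogonal basis of C^N.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
print(energy_time, energy_freq)              # agree to rounding error
assert np.isclose(energy_time, energy_freq)
```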

Classical Examples

Trigonometric functions

Trigonometric functions, particularly sines and cosines, form a classic example of an orthogonal set in the space of square-integrable functions on a periodic interval. For a function defined on the interval [0, L] with period L, the standard orthogonal basis consists of the constant function 1 together with the functions \cos\left(\frac{2\pi n x}{L}\right) and \sin\left(\frac{2\pi n x}{L}\right) for n = 1, 2, \dots. These functions are mutually orthogonal with respect to the inner product \langle f, g \rangle = \int_0^L f(x) g(x) \, dx. Specifically, the integral \int_0^L \cos\left(\frac{2\pi m x}{L}\right) \cos\left(\frac{2\pi n x}{L}\right) \, dx = 0 if m \neq n, equals L if m = n = 0 (the constant term), and L/2 if m = n \geq 1. Similar relations hold for the sines: \int_0^L \sin\left(\frac{2\pi m x}{L}\right) \sin\left(\frac{2\pi n x}{L}\right) \, dx = 0 if m \neq n and L/2 if m = n \geq 1. The cross terms vanish: \int_0^L \sin\left(\frac{2\pi m x}{L}\right) \cos\left(\frac{2\pi n x}{L}\right) \, dx = 0 for all m, n \geq 0.

These orthogonality relations enable the decomposition of periodic functions into series expansions using this basis, known as Fourier series. Any sufficiently smooth periodic function f(x) with period L can be expressed as f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty \left[ a_n \cos\left(\frac{2\pi n x}{L}\right) + b_n \sin\left(\frac{2\pi n x}{L}\right) \right], where the coefficients are given by a_n = \frac{2}{L} \int_0^L f(x) \cos\left(\frac{2\pi n x}{L}\right) \, dx \quad (n \geq 0), \quad b_n = \frac{2}{L} \int_0^L f(x) \sin\left(\frac{2\pi n x}{L}\right) \, dx \quad (n \geq 1). The factor of 2 in the coefficients for n \geq 1 arises from the normalization constants L/2 in the orthogonality integrals, ensuring the basis functions project appropriately onto the function space.

This framework was developed by Joseph Fourier in his 1822 treatise Théorie Analytique de la Chaleur, where he introduced these series to solve the heat equation for periodic boundary conditions, demonstrating that arbitrary functions could be represented as superpositions of sines and cosines.
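The coefficient formulas can be applied directly by numerical integration; the sketch below (an added example using the assumed sawtooth f(x) = x with period L = 2 and N = 25 terms) computes a_n and b_n from the integrals above and evaluates the truncated series away from the jump discontinuity.

```python
import numpy as np
from scipy.integrate import quad

# Fourier coefficients on [0, L] computed directly from
# a_n = (2/L) int f(x) cos(2 pi n x / L) dx, b_n = (2/L) int f(x) sin(...) dx,
# illustrated for the sawtooth f(x) = x of period L.
L, N = 2.0, 25
f = lambda x: x

a = [2 / L * quad(lambda x: f(x) * np.cos(2 * np.pi * n * x / L), 0, L, limit=200)[0]
     for n in range(N + 1)]
b = [0.0] + [2 / L * quad(lambda x: f(x) * np.sin(2 * np.pi * n * x / L), 0, L, limit=200)[0]
             for n in range(1, N + 1)]

def series(x):
    s = a[0] / 2
    for n in range(1, N + 1):
        s = s + a[n] * np.cos(2 * np.pi * n * x / L) + b[n] * np.sin(2 * np.pi * n * x / L)
    return s

xs = np.linspace(0.1, L - 0.1, 7)             # stay away from the jump at x = 0, L
print(np.max(np.abs(series(xs) - f(xs))))     # shrinks as N grows, slowly here
                                              # because the sawtooth's b_n decay like 1/n
```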

Orthogonal polynomials

Orthogonal polynomials form a sequence \{P_n(x)\}_{n=0}^\infty, where P_n(x) is a polynomial of exact degree n, and they satisfy \langle P_m, P_n \rangle = 0 for all m \neq n with respect to an inner product defined on a suitable space of functions. This inner product is typically given by \langle f, g \rangle = \int_I f(x) g(x) w(x) \, dx, where I is an interval and w(x) is a positive weight function ensuring the integral converges. Such sequences are uniquely determined up to scaling by the Gram-Schmidt orthogonalization of the basis \{1, x, x^2, \dots\} with respect to this inner product, provided all moments of the weight are finite and the inner product is positive definite on polynomials.

Among the classical families of orthogonal polynomials, the Legendre, Hermite, and Laguerre polynomials stand out due to their connections to fundamental physical and mathematical problems. The Legendre polynomials P_n(x) are orthogonal over the interval [-1, 1] with uniform weight w(x) = 1, often normalized such that P_n(1) = 1. The Hermite polynomials H_n(x) are defined on (-\infty, \infty) with Gaussian weight w(x) = e^{-x^2}, typically scaled so that the leading coefficient is 2^n. The Laguerre polynomials L_n(x) (for the standard case with parameter \alpha = 0) operate on [0, \infty) with weight w(x) = e^{-x}. These families satisfy the orthogonality condition \int_I P_m(x) P_n(x) w(x) \, dx = h_n \delta_{mn}, where h_n > 0 is the squared norm and \delta_{mn} is the Kronecker delta.

A key algebraic property of orthogonal polynomials is their satisfaction of a three-term recurrence relation, which allows efficient computation and reveals their structure. In general, this takes the form x P_n(x) = a_n P_{n+1}(x) + b_n P_n(x) + c_n P_{n-1}(x), with coefficients a_n, b_n, c_n depending on the norms and the measure. For the Legendre polynomials specifically, the relation simplifies (with b_n = 0 due to symmetry) to (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x), valid for n \geq 1, with initial conditions P_0(x) = 1 and P_1(x) = x. This recurrence enables recursive generation of the polynomials without explicit integration.

Generating functions provide a unified mechanism for these polynomials, encoding the entire sequence in a single power series in an auxiliary variable. For the Hermite polynomials, the generating function is \exp(2xt - t^2) = \sum_{n=0}^\infty H_n(x) \frac{t^n}{n!}, which facilitates derivations of sums, integrals, and asymptotic behaviors. Similar generating functions exist for the other classical families, such as \left(1 - 2xt + t^2\right)^{-1/2} = \sum_{n=0}^\infty P_n(x) t^n for the Legendre polynomials.

The Rodrigues formula offers an explicit differential construction for these families, bypassing the need for recursive computation in some contexts. In a general form applicable to these families, the polynomials can be expressed as P_n(x) = \frac{1}{w(x)} \frac{d^n}{dx^n} \left[ w(x) (x - a)^n \right] or variants thereof, where a is chosen appropriately for the interval (e.g., a = 0 for Laguerre). Specific instances include the Hermite case H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2} and the Laguerre case L_n(x) = \frac{e^x}{n!} \frac{d^n}{dx^n} \left( e^{-x} x^n \right). For the Legendre polynomials, the variant is P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n. This formula highlights the role of the weight function in the repeated differentiation that constructs each polynomial.
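The Legendre recurrence above is straightforward to implement and to test against the orthogonality relation \int_{-1}^1 P_m P_n \, dx = \frac{2}{2n+1} \delta_{mn}; the following sketch (an added illustration, using Gauss-Legendre quadrature nodes as an assumed integration rule) does exactly that.

```python
import numpy as np

# Generate Legendre polynomial values with the three-term recurrence
# (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),  P_0 = 1, P_1 = x.
def legendre_values(nmax, x):
    x = np.asarray(x, dtype=float)
    P = [np.ones_like(x), x.copy()]
    for n in range(1, nmax):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P                      # list of arrays P_0(x), ..., P_nmax(x)

# Orthogonality check by Gauss-Legendre quadrature on [-1, 1]:
# int P_m P_n dx = 2/(2n+1) * delta_mn.
nodes, weights = np.polynomial.legendre.leggauss(50)
P = legendre_values(6, nodes)
for m in range(7):
    for n in range(7):
        val = np.sum(weights * P[m] * P[n])
        expected = 2.0 / (2 * n + 1) if m == n else 0.0
        assert abs(val - expected) < 1e-10
print("three-term recurrence reproduces the Legendre orthogonality relations")
```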

Binary-valued functions

Binary-valued orthogonal functions are a class of functions that take only the values +1 and -1, making them particularly suitable for digital systems and discrete signal processing. Unlike continuous orthogonal systems such as trigonometric functions or polynomials, these functions are discontinuous and piecewise constant, often resembling square waves. The foundational examples are the Rademacher functions, introduced by Hans Rademacher in 1922 as part of his study on series of general orthogonal functions. The Rademacher system is defined on the interval [0,1] by r_n(t) = \operatorname{sgn}(\sin(2^n \pi t)), \quad n = 1, 2, 3, \dots, where \operatorname{sgn} denotes the sign function. These functions form an orthogonal set over [0,1] with respect to the standard L^2 inner product, satisfying \int_0^1 r_m(t) r_n(t) \, dt = \delta_{mn}, though the system is incomplete in L^2[0,1].

The Walsh functions extend the Rademacher system to a complete orthonormal basis for L^2[0,1], as established by Joseph L. Walsh in 1923. Walsh functions, denoted \mathrm{wal}_k(t) for k = 0, 1, 2, \dots, are constructed as products of Rademacher functions: specifically, if the binary representation of k is k = \sum_{i=0}^m a_i 2^i with a_i \in \{0,1\}, then \mathrm{wal}_k(t) = \prod_{i=0}^m r_{i+1}(t)^{a_i}, with \mathrm{wal}_0(t) = 1. This product structure generates square-wave-like functions that switch between +1 and -1 at dyadic rationals. The Walsh system is orthonormal, meaning \langle \mathrm{wal}_p, \mathrm{wal}_q \rangle = \int_0^1 \mathrm{wal}_p(t) \mathrm{wal}_q(t) \, dt = \delta_{pq}. Various orderings exist, such as the natural (Hadamard) order or the sequency order, the latter arranging the functions by the number of sign changes.

In the discrete setting, Walsh functions are closely connected to Hadamard matrices, which serve as finite analogs for orthogonal expansions. Hadamard matrices of order 2^n have entries \pm 1 and orthogonal rows (up to scaling), with the rows corresponding exactly to sampled Walsh functions in natural order. This link enables efficient computation via the fast Walsh-Hadamard transform, analogous to the fast Fourier transform but with binary operations suited to digital hardware. Walsh functions find applications in digital signal processing, where their binary nature facilitates efficient representation and manipulation of discrete signals. For instance, they enable compact expansions for compression by concentrating energy in low-sequency components, reducing storage needs in systems for image coding and digital communication.
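The product construction is simple to reproduce numerically; the sketch below (an added illustration, using the dyadic/Paley ordering and midpoint samples as assumptions) builds the first 2^m sampled Walsh functions from Rademacher functions and confirms that the resulting \pm 1 rows are orthonormal under the discrete mean inner product, as the connection to Hadamard matrices suggests.

```python
import numpy as np

# Build the first 2^m Walsh functions (dyadic/Paley ordering) as products of
# Rademacher functions sampled on a dyadic grid, and check that they are
# orthonormal under the discrete (mean) inner product.
m = 4                                      # 2^m = 16 functions and samples
t = (np.arange(2 ** m) + 0.5) / 2 ** m     # midpoints of the dyadic intervals

def rademacher(n, t):
    # r_n(t) = sgn(sin(2^n * pi * t)), n = 1, 2, ...
    return np.sign(np.sin(2 ** n * np.pi * t))

def walsh(k, t):
    # wal_k = product of r_{i+1}^{a_i}, where k = sum_i a_i 2^i in binary
    w = np.ones_like(t)
    i = 0
    while k:
        if k & 1:
            w = w * rademacher(i + 1, t)
        k >>= 1
        i += 1
    return w

W = np.array([walsh(k, t) for k in range(2 ** m)])   # rows = sampled Walsh functions
G = W @ W.T / 2 ** m                                  # discrete Gram matrix
assert np.allclose(G, np.eye(2 ** m))
print("sampled Walsh functions form orthonormal rows of a +/-1 matrix")
```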

Advanced Examples and Applications

Rational functions

Orthogonal rational functions generalize orthogonal polynomials by allowing prescribed poles, typically defined as functions of the form \Phi_n(z) = \frac{p_n(z)}{\pi_n(z)}, where p_n(z) is a polynomial of degree at most n and \pi_n(z) = \prod_{k=1}^n (z - \alpha_k) incorporates the fixed poles \alpha_k outside the domain of orthogonality, ensuring the functions are proper rational (degree of numerator \leq degree of denominator). These functions form an orthogonal basis of the space L_n = \{ p_n(z)/\pi_n(z) : p_n \in \mathcal{P}_n \} with respect to a suitable inner product, often involving a positive measure \mu whose support avoids the poles to guarantee square integrability. The orthogonality condition \langle \Phi_m, \Phi_n \rangle_\mu = 0 for m \neq n holds, where the inner product incorporates the poles implicitly through the denominator structure.

A classical example arises on the interval [-1, 1] with poles at the endpoints \pm 1, constructed using Szegő polynomials via the Joukowski transformation x = (z + z^{-1})/2, which maps the unit circle to [-1, 1]. Here, the poles \beta_k on the real line relate to the poles \tilde{\alpha}_k on the unit circle through the same transformation, \beta_k = (\tilde{\alpha}_k + \tilde{\alpha}_k^{-1})/2, yielding orthogonal rational functions that extend the theory of Szegő polynomials (originally orthogonal on the unit circle with respect to Lebesgue measure) to this interval setting. The inner product is typically \langle f, g \rangle_u = \int_{-1}^1 f(x) g(x) u(x) \, dx for a weight u(x) > 0 on [-1, 1], ensuring the functions are square integrable despite the endpoint poles. For instance, the uniform weight u \equiv 1 recovers connections to Chebyshev polynomials as a special case.

Construction of orthogonal rational functions proceeds via recurrence relations analogous to those for polynomials, often incorporating reflection coefficients L_n. For proper rational functions on the unit circle, the recurrence is given by \begin{bmatrix} \Phi_n(z) \\ \Phi_n^*(z) \end{bmatrix} = e_n \begin{bmatrix} z - \alpha_{n-1} & 0 \\ 0 & z - \alpha_{n-1} \end{bmatrix} \begin{bmatrix} 1 & L_n \\ L_n & 1 \end{bmatrix} \begin{bmatrix} \zeta_{n-1}(z) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \Phi_{n-1}(z) \\ \Phi_{n-1}^*(z) \end{bmatrix}, with initial \Phi_0 = 1, normalizing constants e_n^2 = \frac{1 - |\alpha_n|^2}{1 - |\alpha_{n-1}|^2} \cdot \frac{1}{1 - |L_n|^2}, and \zeta_{n-1}(z) = \frac{z - \alpha_{n-1}}{1 - \overline{\alpha_{n-1}} z}. Alternatively, they can be built from orthogonal polynomials by inversion techniques or by Gram-Schmidt orthogonalization of Blaschke products b_n(x) = \prod_{k=0}^n Z_k(x) with Z_k(x) = x / (1 - x/\beta_k). A Christoffel-Darboux-like formula adapts for the reproducing kernel: k_{n-1}(z, w) = \frac{\Phi_n^*(z) \overline{\Phi_n^*(w)} - \Phi_n(z) \overline{\Phi_n(w)}}{1 - \zeta_n(z) \overline{\zeta_n(w)}}, facilitating interpolation and quadrature applications. For complex cases on the unit disk, inner products may vary, such as discrete forms \langle f, g \rangle_\mu = \sum_{k=1}^N |\lambda_k|^2 f(e^{i\omega_k}) \overline{g(e^{i\omega_k})} for system identification.

These functions are unique up to scaling by a unimodular constant \epsilon_n (with |\epsilon_n| = 1) or up to a normalization ensuring positive leading coefficients, and they satisfy a three-term recurrence similar to that of orthogonal polynomials, enabling efficient computation and recursive evaluation. This uniqueness stems from the Gram-Schmidt process applied to the basis of rational functions with fixed poles, guaranteeing a monic or normalized sequence.
Recent advances include algorithms based on orthogonal rational functions for system identification (2022) and new procedures for generating sequences of orthogonal rational functions (2021).
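As a minimal computational sketch of the basic idea (an added example that uses plain Gram-Schmidt with a uniform weight and arbitrarily chosen poles, rather than the recurrence above), one can orthogonalize the partial-fraction basis \{1, 1/(x - \beta_1), 1/(x - \beta_2), \dots\} on [-1, 1]:

```python
import numpy as np
from scipy.integrate import quad

# Gram-Schmidt orthogonalization of rational functions with prescribed poles
# beta_k outside [-1, 1], using the L^2 inner product with uniform weight.
betas = [2.0, -3.0, 1.5]                         # assumed poles, all outside [-1, 1]
basis = [lambda x: x * 0 + 1.0] + [lambda x, b=b: 1.0 / (x - b) for b in betas]

inner = lambda f, g: quad(lambda x: f(x) * g(x), -1.0, 1.0)[0]

ortho = []
for phi in basis:
    coeffs = [inner(phi, q) for q in ortho]      # projections onto earlier functions
    prev = list(ortho)
    tilde = lambda x, phi=phi, c=coeffs, prev=prev: (
        phi(x) - sum(cj * q(x) for cj, q in zip(c, prev)))
    nrm = np.sqrt(inner(tilde, tilde))
    ortho.append(lambda x, t=tilde, n=nrm: t(x) / n)

# The resulting rational functions are pairwise orthonormal on [-1, 1]:
G = np.array([[inner(p, q) for q in ortho] for p in ortho])
print(np.round(G, 8))                            # numerically the identity matrix
```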

Role in differential equations

Orthogonal functions play a central role in the solution of boundary value problems for linear second-order differential equations through Sturm-Liouville theory. In this framework, the Sturm-Liouville operator is defined as L y = -\frac{d}{dx} \left( p(x) \frac{dy}{dx} \right) + q(x) y, leading to the eigenvalue problem L y = \lambda w(x) y, where p(x) > 0, w(x) > 0, and q(x) are given functions on an interval [a, b], with appropriate boundary conditions. The eigenfunctions \{\phi_n(x)\} corresponding to distinct eigenvalues \{\lambda_n\} are orthogonal with respect to the weight function w(x) in the inner product \langle f, g \rangle = \int_a^b f(x) g(x) w(x) \, dx.

Classic examples of such eigenfunctions include the Legendre polynomials, which solve the Legendre equation on the interval [-1, 1] with p(x) = 1 - x^2, q(x) = 0, and w(x) = 1, arising in problems with spherical symmetry. Bessel functions emerge as solutions to the radial part of the Laplace operator in polar coordinates, forming orthogonal eigenfunctions on [0, a] for the Bessel equation in Sturm-Liouville form with p(r) = r, q(r) = 0, and w(r) = r. Similarly, the Hermite polynomials arise from the Hermite equation on (-\infty, \infty), whose Sturm-Liouville form has p(x) = e^{-x^2}, q(x) = 0, and w(x) = e^{-x^2}, and the closely related Hermite functions serve as eigenfunctions of the quantum harmonic oscillator.

For a general nonhomogeneous equation L y = w(x) f(x) with homogeneous boundary conditions, the solution can be expanded as y(x) = \sum_{n=1}^\infty c_n \phi_n(x); projecting the equation onto each eigenfunction with the weighted inner product gives c_n = \frac{1}{\lambda_n} \cdot \frac{\langle f, \phi_n \rangle}{\langle \phi_n, \phi_n \rangle} (provided no \lambda_n vanishes), leveraging the orthogonality inherited from the Sturm-Liouville problem. In regular Sturm-Liouville problems, where p(x) and w(x) are positive and continuous on a finite interval with separated boundary conditions, the eigenfunctions form a complete orthogonal set in the weighted L^2 space with norm \| y \|^2 = \int_a^b |y(x)|^2 w(x) \, dx. This completeness ensures that any sufficiently smooth function satisfying the boundary conditions can be uniquely represented by its eigenfunction expansion.

The theoretical foundations trace back to David Hilbert's work on integral equations from 1904 to 1910, where he developed expansions in orthogonal systems of eigenfunctions to solve Fredholm equations, paving the way for modern spectral theory in Hilbert spaces.
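The expansion method is easiest to see on the simplest regular Sturm-Liouville problem, -y'' = f on [0, \pi] with y(0) = y(\pi) = 0, whose eigenfunctions are \sin(nx) with \lambda_n = n^2 and w = 1. The sketch below (an added example; the constant forcing f = 1 and the truncation N = 40 are assumptions) reproduces the closed-form solution x(\pi - x)/2 from the coefficient formula.

```python
import numpy as np
from scipy.integrate import quad

# Eigenfunction-expansion solution of -y'' = f(x) on [0, pi], y(0) = y(pi) = 0.
# Eigenfunctions phi_n = sin(n x), eigenvalues lambda_n = n^2, weight w = 1;
# the expansion coefficients are c_n = (1/lambda_n) * <f, phi_n> / <phi_n, phi_n>.
f = lambda x: 1.0 + 0.0 * x              # constant forcing (assumed test case)
exact = lambda x: x * (np.pi - x) / 2    # closed-form solution for comparison

N = 40
c = []
for n in range(1, N + 1):
    num = quad(lambda x: f(x) * np.sin(n * x), 0, np.pi, limit=200)[0]
    den = quad(lambda x: np.sin(n * x) ** 2, 0, np.pi, limit=200)[0]   # = pi/2
    c.append(num / den / n ** 2)

def y(x):
    return sum(cn * np.sin((n + 1) * x) for n, cn in enumerate(c))

xs = np.linspace(0, np.pi, 9)
print(np.max(np.abs(y(xs) - exact(xs))))   # small truncation error, shrinks with N
```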

Use in Fourier analysis

Orthogonal functions are fundamental to Fourier analysis, enabling the decomposition of signals into frequency components through expansions in bases like complex exponentials. The Fourier transform arises as the continuous limit of trigonometric Fourier series, where periodic functions are expanded on finite intervals using orthogonal terms, and the interval length extends to infinity. In this framework, a function f(x) in L^2(\mathbb{R}) is represented via the transform \hat{f}(\omega) = \int_{-\infty}^{\infty} f(x) e^{-i \omega x} \, dx, with the inverse transform recovering f(x) almost everywhere. The complex exponentials e^{i \omega x} serve as a continuous orthogonal "basis" in the sense of distributions, allowing unique decomposition without overlap between frequencies. The Plancherel theorem underscores this orthogonality by establishing that the Fourier transform is a unitary operator on L^2(\mathbb{R}), preserving the L^2 norm: \|f\|_{L^2} = \|\hat{f}\|_{L^2}. This isometry ensures energy conservation in signal processing applications, such as filtering, where transformations maintain the total power of the signal.

In discrete settings, the discrete Fourier transform (DFT) employs a finite set of complex exponentials as basis functions on grids of length N: specifically, the vectors \{ e^{2\pi i k n / N} \mid n = 0, \dots, N-1 \} for each k = 0, \dots, N-1, which are mutually orthogonal and form a basis for \mathbb{C}^N. This facilitates efficient computation via algorithms like the fast Fourier transform (FFT), which exploit the structure to reduce complexity from O(N^2) to O(N \log N), enabling real-time spectral analysis in digital signal processing.

For multiresolution analysis, wavelets extend Fourier methods by providing orthogonal bases localized in both time and frequency, addressing limitations of global exponentials for non-stationary signals. The Haar wavelet, constructed from binary step functions (the mother wavelet \psi(x) = 1 for 0 \leq x < 1/2 and -1 for 1/2 \leq x < 1, zero elsewhere), generates an orthonormal basis through dilations and translations, offering a simple multiscale decomposition.

The convolution theorem leverages this orthogonality: the Fourier transform converts convolution in the original domain, \int f(\tau) g(x - \tau) \, d\tau, into multiplication of transforms, \hat{f}(\omega) \hat{g}(\omega), simplifying operations like filtering or system response computation. This property holds analogously for discrete transforms and Fourier series expansions, reducing complex integrals to efficient multiplications in the transform domain. Post-2000 developments, including time-stretch dispersive Fourier transforms, have advanced real-time spectral measurement of ultrafast optical signals by integrating photonic hardware for higher resolution and speed, with applications in ultrafast measurement continuing as of 2025.
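Two of the statements above, the orthogonality of the DFT basis vectors and the convolution theorem, can be verified in a few lines with numpy.fft; the snippet below is an added illustration with an arbitrary length N = 64 and random test signals.

```python
import numpy as np

# (1) The DFT basis vectors e_k[n] = exp(2j*pi*k*n/N) are mutually orthogonal
#     in C^N, and (2) the DFT of a circular convolution equals the pointwise
#     product of the DFTs (the convolution theorem).
N = 64
n = np.arange(N)
E = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N)   # rows = basis vectors
assert np.allclose(E @ E.conj().T, N * np.eye(N))         # <e_k, e_l> = N * delta_kl

rng = np.random.default_rng(1)
x, h = rng.standard_normal(N), rng.standard_normal(N)
circ_conv = np.array([np.sum(x * np.roll(h[::-1], k + 1)) for k in range(N)])
assert np.allclose(np.fft.fft(circ_conv), np.fft.fft(x) * np.fft.fft(h))
print("DFT basis is orthogonal and the convolution theorem holds")
```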
