
Orthogonal basis

In linear algebra, an orthogonal basis for a subspace of an inner product space is a basis consisting of nonzero vectors that are pairwise orthogonal, meaning the inner product of any two distinct vectors in the set is zero. Such bases are particularly valuable because they simplify the representation of vectors within the subspace as linear combinations of the basis vectors, with coefficients directly computable via inner products. Orthogonal bases extend naturally to orthonormal bases by normalizing the vectors to have unit length, which further streamlines computations such as orthogonal projections onto the subspace. A key property is that any orthogonal set of nonzero vectors is linearly independent, ensuring that an orthogonal basis with n vectors in an n-dimensional space fully spans it without redundancy. The Gram-Schmidt process provides a systematic method to construct an orthogonal basis from any given basis, making these structures accessible for practical applications such as solving linear systems and least-squares problems. These concepts are foundational in areas like signal processing and quantum mechanics, where orthogonality reflects physical independence, such as mutually perpendicular directions or non-interfering wave functions. In finite-dimensional Euclidean spaces, orthogonal bases facilitate efficient matrix diagonalization and least-squares approximations, underscoring their role in numerical methods and optimization.

Fundamentals

Definition

In an inner product space, which is a vector space over the real numbers \mathbb{R} or complex numbers \mathbb{C} equipped with an inner product \langle \cdot, \cdot \rangle—a positive-definite form (sesquilinear over \mathbb{C}, bilinear over \mathbb{R}) that satisfies \langle v, v \rangle \geq 0 for all v with equality if and only if v = 0—the inner product generalizes the dot product and induces a norm \|v\| = \sqrt{\langle v, v \rangle}. A basis for such a vector space V is a set of vectors \{v_1, \dots, v_n\} that is linearly independent (no nontrivial linear combination equals zero) and spans V (every vector in V is a unique linear combination of them). An orthogonal basis for an inner product space V is a basis \{v_1, \dots, v_n\} such that the vectors are pairwise orthogonal, meaning \langle v_i, v_j \rangle = 0 for all i \neq j, with each v_i \neq 0. This orthogonality simplifies representations in V, as it ensures no overlap in the directions of the basis vectors under the inner product. An orthogonal basis differs from an orthonormal basis, where in addition \|v_i\| = 1 (or \langle v_i, v_i \rangle = 1) for all i; to obtain an orthonormal basis from an orthogonal one, normalize each vector by dividing by its norm: \hat{v}_i = v_i / \|v_i\|. For example, consider \mathbb{R}^2 as an inner product space with the standard dot product \langle (x_1, y_1), (x_2, y_2) \rangle = x_1 x_2 + y_1 y_2. The standard basis \{(1,0), (0,1)\} is orthogonal, since \langle (1,0), (0,1) \rangle = 0, and in fact orthonormal, as each vector has norm 1. The concept of orthogonal bases originated in 19th-century developments in geometry and analysis, with Augustin-Louis Cauchy establishing key properties via the Cauchy-Schwarz inequality around 1821 and Bernhard Riemann advancing orthogonal curvilinear coordinate systems in his 1854 work on the foundations of geometry.
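The normalization step can be checked directly in code. The following NumPy sketch (the vectors are arbitrary illustrative choices, not taken from the text) verifies pairwise orthogonality under the standard dot product and rescales each vector to unit norm:

```python
import numpy as np

# Illustrative orthogonal (but not orthonormal) basis of R^2 under the dot product.
basis = [np.array([3.0, 0.0]), np.array([0.0, -2.0])]

# Pairwise orthogonality: <v_i, v_j> = 0 for i != j.
assert abs(np.dot(basis[0], basis[1])) < 1e-12

# Normalize: v_i / ||v_i|| yields an orthonormal basis.
orthonormal = [v / np.linalg.norm(v) for v in basis]
for v in orthonormal:
    print(v, np.linalg.norm(v))  # each printed norm is 1.0
```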

Properties

In an inner product space, a key property of an orthogonal basis \{v_1, v_2, \dots, v_n\} is that any vector v in the span can be uniquely expressed as v = \sum_{i=1}^n \frac{\langle v, v_i \rangle}{\langle v_i, v_i \rangle} v_i, where the coefficients \frac{\langle v, v_i \rangle}{\|v_i\|^2} are known as the Fourier coefficients relative to the basis. This expansion arises because orthogonality ensures that the projection of v onto each basis vector v_i is independent of the others, simplifying the decomposition. To see the uniqueness of these coefficients, suppose v = \sum c_i v_i = \sum d_i v_i. Then \sum (c_i - d_i) v_i = 0. Taking the inner product with v_j yields (c_j - d_j) \|v_j\|^2 = 0, so c_j = d_j since \|v_j\|^2 > 0. This follows from the linearity of the inner product and the condition \langle v_i, v_j \rangle = 0 for i \neq j. A significant consequence is Parseval's identity, which states that for any v in the span, \|v\|^2 = \sum_{i=1}^n \frac{|\langle v, v_i \rangle|^2}{\|v_i\|^2}. This identity reflects the Pythagorean theorem generalized to orthogonal decompositions, quantifying how the energy or norm of v distributes across the basis vectors. In finite-dimensional inner product spaces, any orthogonal set of nonzero vectors is linearly independent, as the coefficient uniqueness argument above implies that no nontrivial linear combination sums to zero. Thus, if an orthogonal set spans the space, it forms a basis; completeness in this finite-dimensional setting simply requires spanning the entire space. Orthogonal bases are preserved under transformations that preserve the inner product: in real inner product spaces, if \{v_i\} is orthogonal and U is an orthogonal matrix (satisfying U^T U = I), then \{U v_i\} is also orthogonal because \langle U v_i, U v_j \rangle = \langle v_i, v_j \rangle. In complex inner product spaces, the analogous transformations are unitary matrices satisfying U^* U = I, where U^* is the conjugate transpose, and \langle U v_i, U v_j \rangle = \langle v_i, U^* U v_j \rangle = \langle v_i, v_j \rangle. This preservation stems from such matrices maintaining inner products. While orthogonal bases are not unique—any rescaling of the vectors or reordering yields another orthogonal basis—they provide canonical decompositions via the Fourier coefficients, offering a standardized way to represent vectors in the space. Orthonormal bases, where each \|v_i\| = 1, represent a normalized special case that simplifies these coefficients to \langle v, v_i \rangle.
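These formulas are straightforward to verify numerically. The NumPy sketch below (the orthogonal basis of \mathbb{R}^3 and the test vector are arbitrary illustrative choices) computes the Fourier coefficients \langle v, v_i \rangle / \langle v_i, v_i \rangle, reconstructs v, and checks Parseval's identity:

```python
import numpy as np

# An arbitrary pairwise-orthogonal basis of R^3 (nonzero, not normalized).
V = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, -1.0, 0.0]),
     np.array([0.0, 0.0, 2.0])]
v = np.array([3.0, 5.0, -2.0])

# Fourier coefficients c_i = <v, v_i> / <v_i, v_i>.
coeffs = [np.dot(v, vi) / np.dot(vi, vi) for vi in V]
reconstruction = sum(c * vi for c, vi in zip(coeffs, V))
assert np.allclose(reconstruction, v)          # unique expansion recovers v

# Parseval's identity: ||v||^2 = sum |<v, v_i>|^2 / ||v_i||^2.
lhs = np.dot(v, v)
rhs = sum(np.dot(v, vi) ** 2 / np.dot(vi, vi) for vi in V)
assert np.isclose(lhs, rhs)
print(coeffs, lhs, rhs)
```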

Finite-Dimensional Inner Product Spaces

Construction Methods

In finite-dimensional inner product spaces, an orthogonal basis can be constructed from a given linearly independent set of vectors through successive orthogonalization, where each subsequent vector is adjusted by subtracting its projection onto the span of the previous orthogonal vectors. Specifically, for a linearly independent list \{u_1, u_2, \dots, u_n\}, define v_1 = u_1 and for k = 2, \dots, n, set v_k = u_k - \sum_{j=1}^{k-1} \proj_{v_j} u_k, yielding an orthogonal set \{v_1, v_2, \dots, v_n\} that spans the same subspace. This process relies on the orthogonal projection \proj_v u = \frac{\langle u, v \rangle}{\langle v, v \rangle} v, which ensures that u - \proj_v u is orthogonal to v. To illustrate, consider constructing an orthogonal basis for \mathbb{R}^3 with the standard dot product from the set \{(1,0,0), (1,1,0), (1,1,1)\}. Set v_1 = (1,0,0). Then v_2 = (1,1,0) - \proj_{v_1} (1,1,0) = (1,1,0) - \frac{(1,1,0) \cdot (1,0,0)}{(1,0,0) \cdot (1,0,0)} (1,0,0) = (1,1,0) - (1,0,0) = (0,1,0). Finally, v_3 = (1,1,1) - \proj_{v_1} (1,1,1) - \proj_{v_2} (1,1,1) = (1,1,1) - (1,0,0) - (0,1,0) = (0,0,1), resulting in the orthogonal basis \{(1,0,0), (0,1,0), (0,0,1)\}. The existence of such an orthogonal basis is guaranteed for any finite-dimensional inner product space: given any basis, the successive orthogonalization procedure produces an orthogonal basis for the space. This result follows from the linear independence of orthogonal sets of nonzero vectors and the spanning properties preserved in the construction. While theoretically robust, numerical implementations of this construction, particularly the classical variant, can encounter stability issues due to rounding errors leading to loss of orthogonality, especially for ill-conditioned bases; modified approaches or reorthogonalization mitigate these in practice, though the focus here remains on the theoretical method.
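A direct implementation of this successive orthogonalization (without normalization) reproduces the \mathbb{R}^3 example above; the helper function below is an illustrative sketch, not a standard library routine:

```python
import numpy as np

def orthogonalize(vectors):
    """Return a pairwise-orthogonal list spanning the same subspace (no normalization)."""
    ortho = []
    for u in vectors:
        w = u.astype(float)
        for v in ortho:
            # Subtract the projection of u onto the already-constructed vector v.
            w = w - (np.dot(u, v) / np.dot(v, v)) * v
        ortho.append(w)
    return ortho

U = [np.array([1, 0, 0]), np.array([1, 1, 0]), np.array([1, 1, 1])]
for v in orthogonalize(U):
    print(v)   # (1,0,0), (0,1,0), (0,0,1), as in the worked example
```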

Orthonormalization Processes

The Gram-Schmidt process is a fundamental algorithm for constructing an orthonormal basis from a given linearly independent set of vectors in a finite-dimensional inner product space. It achieves this by iteratively orthogonalizing each subsequent vector against the previously constructed orthogonal set and then normalizing. The process was initially developed by Jørgen Pedersen Gram in his 1883 paper on least-squares approximations and further formalized by Erhard Schmidt in 1907, who presented it as a method for orthogonalizing systems of functions, establishing its classical form. The classical Gram-Schmidt algorithm proceeds as follows for a linearly independent set \{u_1, u_2, \dots, u_n\}:
  1. Set v_1 = u_1 / \|u_1\|, where \|\cdot\| denotes the norm induced by the inner product.
  2. For k = 2 to n, compute the orthogonal component by subtracting the projections onto the previous orthonormal vectors: first form the remainder w_k = u_k - \sum_{i=1}^{k-1} \langle u_k, v_i \rangle v_i, where \langle \cdot, \cdot \rangle is the inner product, and then normalize v_k = w_k / \|w_k\|.
This yields an orthonormal set \{v_1, v_2, \dots, v_n\} that spans the same subspace as the original set. The explicit recursive formula for each orthonormal vector is v_k = \frac{u_k - \sum_{i=1}^{k-1} \langle u_k, v_i \rangle v_i}{\left\| u_k - \sum_{i=1}^{k-1} \langle u_k, v_i \rangle v_i \right\|}, \quad k = 2, \dots, n. For example, consider the basis \{(1,1), (1,0)\} of \mathbb{R}^2 with the standard inner product. First, v_1 = (1,1) / \sqrt{2} = (1/\sqrt{2}, 1/\sqrt{2}). Then, \langle (1,0), v_1 \rangle = 1/\sqrt{2}, so the projection is (1/\sqrt{2}) v_1 = (1/2, 1/2), and the remainder is (1,0) - (1/2, 1/2) = (1/2, -1/2). Normalizing gives v_2 = (1/2, -1/2) / (1/\sqrt{2}) = (1/\sqrt{2}, -1/\sqrt{2}). Thus, the orthonormal basis is \{ (1/\sqrt{2}, 1/\sqrt{2}), (1/\sqrt{2}, -1/\sqrt{2}) \}. A variant known as the modified Gram-Schmidt algorithm improves numerical stability by altering the order of operations to reduce error accumulation in finite-precision arithmetic. In the classical version, all inner products \langle u_k, v_i \rangle for i < k are computed using the original u_k before subtraction, which can propagate rounding errors. The modified version instead subtracts projections sequentially: for each j = 1 to k-1, the current remainder w is updated by subtracting \langle w, v_j \rangle v_j. This forward-substitution-like approach ensures that errors in earlier orthogonalizations are less amplified in later steps, making it more robust for ill-conditioned matrices. The process assumes the input vectors are linearly independent; if they are not, it fails when a remainder has zero norm during normalization, which serves as a detection mechanism for linear dependence.
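The following sketch contrasts the two variants on the \mathbb{R}^2 example above; the function names and the tolerance used to detect (numerical) linear dependence are arbitrary illustrative choices:

```python
import numpy as np

def classical_gram_schmidt(vectors, tol=1e-12):
    Q = []
    for u in vectors:
        # All projections are computed from the original u_k.
        w = u - sum(np.dot(u, q) * q for q in Q)
        norm = np.linalg.norm(w)
        if norm < tol:
            raise ValueError("vectors are (numerically) linearly dependent")
        Q.append(w / norm)
    return Q

def modified_gram_schmidt(vectors, tol=1e-12):
    Q = []
    for u in vectors:
        w = u.astype(float)
        for q in Q:
            # Each projection uses the updated remainder w, not the original u_k.
            w = w - np.dot(w, q) * q
        norm = np.linalg.norm(w)
        if norm < tol:
            raise ValueError("vectors are (numerically) linearly dependent")
        Q.append(w / norm)
    return Q

vs = [np.array([1.0, 1.0]), np.array([1.0, 0.0])]
print(classical_gram_schmidt(vs))   # [(1/sqrt 2, 1/sqrt 2), (1/sqrt 2, -1/sqrt 2)]
print(modified_gram_schmidt(vs))    # same result here; MGS differs only in rounding behavior
```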

Coordinates and Applications

Coordinate Representations

In an inner product space, an orthogonal basis simplifies the representation of vectors by decoupling their coordinates. For a vector \mathbf{v} expressed as \mathbf{v} = \sum_{i=1}^n c_i \mathbf{v}_i with respect to an orthogonal basis \{\mathbf{v}_1, \dots, \mathbf{v}_n\}, the coefficients are given by c_i = \frac{\langle \mathbf{v}, \mathbf{v}_i \rangle}{\langle \mathbf{v}_i, \mathbf{v}_i \rangle}, allowing each c_i to be computed independently via inner products without solving a linear system, unlike in a general basis where coordinates require inverting the basis matrix. The change-of-basis matrix from an arbitrary basis to an orthogonal one obtained via the Gram-Schmidt process is upper triangular. This arises because the Gram-Schmidt algorithm sequentially orthogonalizes vectors while preserving the span of initial segments, resulting in a matrix R with zeros below the diagonal when expressing the original basis in terms of the new orthogonal one, as in the QR factorization A = QR where Q has orthogonal columns and R is upper triangular. In \mathbb{R}^n equipped with the standard dot product, the standard basis vectors \mathbf{e}_1 = (1,0,\dots,0), \dots, \mathbf{e}_n = (0,\dots,0,1) form an orthogonal basis, and the coordinates of any vector \mathbf{v} = (v_1, \dots, v_n) are simply its components c_i = v_i, which are the orthogonal projections onto these axes. Orthogonal bases facilitate efficient computations of geometric quantities, such as the squared norm \|\mathbf{v}\|^2 = \sum_{i=1}^n |c_i|^2 \|\mathbf{v}_i\|^2, which decomposes the energy of \mathbf{v} additively across basis directions without cross terms. For orthonormal bases, where \|\mathbf{v}_i\| = 1, this simplifies further to \|\mathbf{v}\|^2 = \sum_{i=1}^n |c_i|^2 with c_i = \langle \mathbf{v}, \mathbf{v}_i \rangle. In finite dimensions, orthogonal bases serve as a discrete analog to Fourier series expansions, where the discrete Fourier transform decomposes signals into coefficients over an orthogonal basis of complex exponentials, mirroring the projection onto trigonometric functions in the continuous case.
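These relationships can be checked with a QR factorization, which packages the Gram-Schmidt output: Q holds an orthonormal basis and R the upper-triangular change-of-basis matrix. The matrix and test vector below are arbitrary illustrative choices:

```python
import numpy as np

# Columns of A are the basis (1,0,0), (1,1,0), (1,1,1) from the construction example.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

Q, R = np.linalg.qr(A)            # Q has orthonormal columns, R is upper triangular
print(R)                          # zeros below the diagonal

v = np.array([2.0, -1.0, 3.0])
coords = Q.T @ v                  # c_i = <v, q_i>, since the columns q_i are orthonormal
assert np.allclose(Q @ coords, v)                        # expansion recovers v
assert np.isclose(np.dot(v, v), np.sum(coords ** 2))     # norm decomposes without cross terms
```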

Diagonalization of Operators

In finite-dimensional inner product spaces, the spectral theorem for self-adjoint operators asserts that every self-adjoint linear operator admits an orthonormal basis of eigenvectors, allowing the operator to be diagonalized with respect to this basis. A linear operator A on such a space is self-adjoint if it satisfies \langle A \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{u}, A \mathbf{v} \rangle for all vectors \mathbf{u}, \mathbf{v} in the space, with respect to the given inner product. This property ensures that the eigenvalues are real and that eigenvectors corresponding to distinct eigenvalues are orthogonal, enabling the construction of an orthonormal basis via processes like Gram-Schmidt orthogonalization applied within the eigenspaces. The diagonalization takes the form A = Q D Q^*, where Q is a unitary matrix whose columns form an orthonormal basis of eigenvectors, D is a diagonal matrix containing the real eigenvalues, and Q^* is the conjugate transpose of Q. An outline of the proof proceeds by induction on the dimension: for the base case, the operator has a real eigenvalue with an eigenvector; subsequent steps restrict to orthogonal complements of eigenspaces and apply Gram-Schmidt to orthogonalize bases within degenerate eigenspaces, yielding the full orthonormal eigenbasis. A representative example arises in spectral graph theory, where the Laplacian matrix L = D - A of an undirected graph—with D the degree matrix and A the adjacency matrix—is symmetric and thus self-adjoint with respect to the standard inner product. This allows L = Q \Lambda Q^T, where Q provides an orthogonal eigenbasis that reveals graph connectivity and partitioning properties through the eigenvalues in \Lambda. This framework finds key applications in quantum mechanics, where self-adjoint operators represent observables, and the spectral theorem yields an orthonormal basis of eigenvectors corresponding to possible measurement outcomes. In statistics, principal component analysis employs the eigendecomposition of the symmetric covariance matrix—self-adjoint under the standard inner product—to identify orthogonal directions of maximum variance for data dimensionality reduction.
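A small numerical instance of this diagonalization uses the graph Laplacian of a path graph (an arbitrary illustrative choice) together with NumPy's symmetric eigensolver:

```python
import numpy as np

# Adjacency matrix of the path graph 1-2-3-4 (undirected, so A is symmetric).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))        # degree matrix
L = D - A                          # graph Laplacian, symmetric hence self-adjoint

w, Q = np.linalg.eigh(L)           # real eigenvalues w, orthonormal eigenvectors in Q
assert np.allclose(Q.T @ Q, np.eye(4))         # columns of Q are orthonormal
assert np.allclose(Q @ np.diag(w) @ Q.T, L)    # L = Q Lambda Q^T
print(w)                           # smallest eigenvalue is 0 for a connected graph
```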

Infinite-Dimensional Settings

Hilbert Spaces

In a separable Hilbert space H, an orthonormal basis is defined as a countable orthonormal set \{e_n\}_{n=1}^\infty such that every vector v \in H admits the representation v = \sum_{n=1}^\infty \langle v, e_n \rangle e_n, where the infinite sum converges in the norm topology of H. This extends the finite-dimensional notion, where convergence is immediate, to the infinite-dimensional setting by requiring norm convergence rather than mere algebraic spanning. Such bases, also known as complete orthonormal sets or orthonormal Schauder bases, ensure that the closed linear span of \{e_n\} equals H. A key property is Bessel's inequality, which states that for any vector v \in H and any orthonormal set \{e_n\} in H, \sum_{n=1}^\infty |\langle v, e_n \rangle|^2 \leq \|v\|^2, with equality holding for all v if and only if \{e_n\} is a complete orthonormal basis; this equality case is Parseval's identity. The Riesz-Fischer theorem characterizes completeness by affirming that the closed subspace spanned by an orthonormal set \{e_n\} coincides with the entire space H if and only if the set is complete, meaning Parseval's identity holds universally. In separable Hilbert spaces, every orthonormal basis is countable, reflecting the space's second-countable topology. A canonical example is the space \ell^2 of square-summable real or complex sequences, where the standard orthonormal basis consists of the sequences e_n with a 1 in the n-th position and zeros elsewhere; any element (a_k) \in \ell^2 satisfies (a_k) = \sum_{n=1}^\infty a_n e_n with \sum |a_n|^2 < \infty, ensuring convergence. While Hamel bases—algebraic bases using finite linear combinations—exist for any vector space via the axiom of choice, they are uncountable and pathological in infinite dimensions, lacking useful convergence properties and practical utility; orthonormal Schauder bases, by contrast, leverage the completeness of the space for convergent expansions. In pre-Hilbert spaces (inner product spaces that are not necessarily norm-complete), an orthogonal basis can be orthonormalized and extended to form an orthonormal basis for the completion, preserving the expansion property in the limit.
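Norm convergence and Bessel's inequality can be illustrated with a finite truncation of \ell^2; the sequence a_n = 1/n and the truncation length are arbitrary illustrative choices:

```python
import numpy as np

# Truncated model of l^2: the square-summable sequence a_n = 1/n, n = 1..N.
N = 2000
n = np.arange(1, N + 1)
a = 1.0 / n
norm_sq = np.sum(a ** 2)          # plays the role of ||a||^2 in this truncated model

# Squared norm of the tail ||a - sum_{n<=k} a_n e_n||^2 shrinks as k grows: norm convergence.
for k in (10, 100, 1000, N):
    tail_sq = norm_sq - np.sum(a[:k] ** 2)
    print(k, tail_sq)

# Bessel's inequality for the incomplete orthonormal set {e_1, ..., e_10}.
assert np.sum(a[:10] ** 2) <= norm_sq
```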

Functional Analysis Contexts

In functional analysis, orthogonal bases are essential for decomposing functions in specific L² spaces, enabling expansions that facilitate analysis and computation. One prominent example is the Fourier basis, where the set of functions \left\{ \frac{e^{i n x}}{\sqrt{2\pi}} \mid n \in \mathbb{Z} \right\} forms an orthonormal basis for L^2[0, 2\pi]. This basis allows any square-integrable function on the interval to be represented as an infinite series, with completeness established through the properties of the Dirichlet kernel, which governs the partial sum operators and ensures dense spanning in the space. Another classical orthogonal basis appears in polynomial approximations on bounded intervals. The Legendre polynomials \{P_n(x)\}_{n=0}^\infty, defined via Rodrigues' formula or recursively, form an orthogonal basis for L^2[-1,1] with respect to the constant weight function 1, meaning \int_{-1}^1 P_m(x) P_n(x) \, dx = 0 for m \neq n. For a function f \in L^2[-1,1], the orthogonal expansion is given by f(x) = \sum_{n=0}^\infty a_n P_n(x), where the coefficients are a_n = \frac{\int_{-1}^1 f(x) P_n(x) \, dx}{\int_{-1}^1 P_n^2(x) \, dx} = \frac{2n+1}{2} \int_{-1}^1 f(x) P_n(x) \, dx, since the normalization integral equals 2/(2n+1). This expansion is particularly useful in spectral methods for solving differential equations on symmetric domains. For unbounded domains, wavelet bases provide localized orthogonal decompositions. In L^2(\mathbb{R}), orthogonal wavelet bases emerge from multiresolution analyses, where a scaling function generates nested subspaces approximating the space at varying resolutions. Ingrid Daubechies constructed families of compactly supported orthonormal wavelets that satisfy these properties, allowing efficient representations of functions with both frequency and spatial localization. These bases, such as the Daubechies wavelets of order N, are complete in L^2(\mathbb{R}) and support applications like signal compression and numerical PDE solvers. Orthogonal bases also underpin solutions to partial differential equations in Sobolev spaces, which incorporate smoothness and boundary constraints. For instance, in the Sobolev space H^1_0(0,1) consisting of functions vanishing at the endpoints, the set \left\{ \sqrt{2} \sin(n \pi x) \mid n \in \mathbb{N} \right\} forms an orthonormal basis with respect to the L^2 inner product. This sine basis diagonalizes the Laplacian operator under Dirichlet boundary conditions, enabling eigenfunction expansions for problems like the heat equation or wave equation, where solutions are expanded as series in these eigenfunctions. Convergence of these orthogonal expansions varies by context. In L^2 spaces, the series converge in the norm sense due to the completeness of the basis, akin to Parseval's identity relating coefficients to the function's energy. However, pointwise convergence requires stronger conditions on the function, such as piecewise smoothness; otherwise, failures occur, as in the Gibbs phenomenon for Fourier series near jump discontinuities, where partial sums exhibit persistent overshoot of about 9% of the jump height regardless of truncation level. This highlights the distinction between L^2 and pointwise behaviors in functional analytic settings.
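As a computational illustration of the Legendre expansion above, the sketch below approximates f(x) = e^x (an arbitrary illustrative choice) by the truncated series \sum_{n=0}^{8} a_n P_n(x), computing a_n = \frac{2n+1}{2}\int_{-1}^{1} f(x) P_n(x)\,dx by Gauss-Legendre quadrature via NumPy's polynomial module:

```python
import numpy as np
from numpy.polynomial import legendre as leg

f = np.exp
x, w = leg.leggauss(50)                    # quadrature nodes and weights on [-1, 1]

degree = 8
coeffs = []
for n in range(degree + 1):
    Pn = leg.Legendre.basis(n)             # the n-th Legendre polynomial P_n
    a_n = (2 * n + 1) / 2 * np.sum(w * f(x) * Pn(x))
    coeffs.append(a_n)

approx = leg.Legendre(coeffs)              # the truncated expansion sum a_n P_n(x)
grid = np.linspace(-1, 1, 5)
print(np.max(np.abs(f(grid) - approx(grid))))   # small truncation error
```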

Generalizations

Symmetric Bilinear Forms

In the context of a vector space V equipped with a symmetric bilinear form B: V \times V \to F, where B(u, v) = B(v, u) for all u, v \in V and F is a field of characteristic not equal to 2, an orthogonal basis is a basis \{v_1, \dots, v_n\} such that B(v_i, v_j) = 0 for all i \neq j. This generalizes the notion from inner product spaces, allowing for forms that may not be positive definite. With respect to such a basis, the matrix representation of B is diagonal, with diagonal entries B(v_i, v_i). Every symmetric bilinear form on a finite-dimensional vector space over such a field admits an orthogonal basis, which diagonalizes the associated matrix. This reveals the structure of the form, particularly through Sylvester's law of inertia, which states that for a symmetric bilinear form over the real numbers, the signature—defined as the triple (p, q, r) where p is the number of positive diagonal entries, q the number of negative ones, and r = n - p - q the nullity—is invariant under change of basis. An orthogonal basis thus explicitly displays this signature, classifying the form up to congruence. For non-degenerate symmetric bilinear forms, where the only vector orthogonal to all of V is the zero vector (i.e., the radical is trivial), the orthogonal complement W^\perp of any subspace W \subseteq V satisfies \dim W + \dim W^\perp = \dim V, and V = W \oplus W^\perp if the form restricted to W is also non-degenerate. This property ensures that orthogonal decompositions preserve the overall structure of the space. Consider the form on \mathbb{R}^2 given by B((x_1, y_1), (x_2, y_2)) = x_1 x_2 - y_1 y_2, which is symmetric but indefinite. The standard basis \{(1,0), (0,1)\} is orthogonal for B, as B((1,0), (0,1)) = 0, and the matrix of the form is \operatorname{diag}(1, -1), revealing signature (1,1,0). The study of orthogonal bases for symmetric bilinear forms traces back to the 18th century, linked to Lagrange's investigations of quadratic forms in number theory, where he explored their representations and reductions. This framework was further developed in the 19th century, culminating in Sylvester's 1852 formulation of the law of inertia.
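The signature can be read off numerically from any orthogonal basis for the form. The sketch below uses an orthogonal eigenbasis of the Gram matrix of the indefinite form B((x_1, y_1), (x_2, y_2)) = x_1 x_2 - y_1 y_2 from the example; the sign tolerance is an arbitrary choice:

```python
import numpy as np

# Gram matrix of the indefinite form B((x1,y1),(x2,y2)) = x1*x2 - y1*y2.
B = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# An orthonormal eigenbasis of a symmetric Gram matrix is also B-orthogonal,
# since B(q_i, q_j) = q_i^T B q_j = lambda_j (q_i . q_j) = 0 for i != j.
eigvals, Q = np.linalg.eigh(B)
diag = np.array([Q[:, i] @ B @ Q[:, i] for i in range(Q.shape[1])])

tol = 1e-12
p = int(np.sum(diag > tol))        # positive diagonal entries
q = int(np.sum(diag < -tol))       # negative diagonal entries
r = diag.size - p - q              # nullity
print(p, q, r)                     # signature (1, 1, 0)
```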

Quadratic Forms

A quadratic form on a real vector space is defined as Q(\mathbf{v}) = B(\mathbf{v}, \mathbf{v}), where B is a symmetric bilinear form. In matrix terms, if A is the symmetric matrix representing B with respect to some basis, then Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}. With respect to an orthogonal basis consisting of the eigenvectors of A, the quadratic form simplifies to Q(\mathbf{v}) = \sum \lambda_i c_i^2, where the \lambda_i are the eigenvalues of A and the c_i are the coordinates of \mathbf{v} in that basis. The diagonalization theorem for quadratic forms states that any real symmetric quadratic form admits an orthonormal basis in which it takes the diagonal form \sum \lambda_i x_i^2, with the \lambda_i real. This follows from the spectral theorem for symmetric matrices, which guarantees that A is orthogonally diagonalizable. The principal axes of the quadratic form are the lines spanned by these orthonormal eigenvectors, providing a coordinate system aligned with the form's "natural" directions where cross terms vanish. For example, consider the quadratic form Q(x, y) = x^2 + 2xy + y^2, which equals (x + y)^2 and corresponds to the symmetric matrix A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. The eigenvalues are \lambda_1 = 2 and \lambda_2 = 0, with corresponding orthonormal eigenvectors \mathbf{u}_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} and \mathbf{u}_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. In the orthogonal basis \{\mathbf{u}_1, \mathbf{u}_2\}, after a rotation of axes, Q becomes 2u^2, eliminating the cross term. In applications, such as classifying conic sections defined by equations like ax^2 + bxy + cy^2 + dx + ey + f = 0, the signature of the quadratic part—determined by the number of positive, negative, and zero eigenvalues of the associated symmetric matrix—distinguishes ellipses (all positive or all negative), hyperbolas (mixed signs), parabolas (one zero), and degenerate cases. For instance, diagonalizing x^2 + 8xy - 5y^2 = t yields 3u^2 - 7v^2 = t, confirming a hyperbola for t \neq 0 based on the opposite signs of the eigenvalues. In the complex case, Hermitian forms Q(\mathbf{v}) = \langle \mathbf{v}, A \mathbf{v} \rangle with A Hermitian are analogously diagonalized using a unitary basis of eigenvectors, yielding real \lambda_i and the form \sum \lambda_i |z_i|^2.
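The x^2 + 8xy - 5y^2 example can be checked numerically; the test point below is an arbitrary illustrative choice:

```python
import numpy as np

# Symmetric matrix of Q(x, y) = x^2 + 8xy - 5y^2, so that Q(x) = x^T A x.
A = np.array([[1.0, 4.0],
              [4.0, -5.0]])

eigvals, U = np.linalg.eigh(A)     # orthonormal eigenvectors = principal axes
print(eigvals)                     # -7 and 3: in the eigenbasis Q becomes 3u^2 - 7v^2

# Check at a sample point: Q(x) equals sum lambda_i c_i^2 with c = U^T x.
x = np.array([2.0, -1.0])
c = U.T @ x
assert np.isclose(x @ A @ x, np.sum(eigvals * c ** 2))
# Opposite-sign eigenvalues => the level set Q(x, y) = t is a hyperbola for t != 0.
```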
