Orthonormality

Orthonormality is a property of a set of vectors in an inner product space where the vectors are pairwise orthogonal—meaning their inner product is zero for distinct vectors—and each vector has unit length, or norm one. This concept generalizes the idea of perpendicular unit vectors from Euclidean geometry to abstract vector spaces equipped with an inner product. In linear algebra, an orthonormal set \{ \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n \} satisfies \langle \mathbf{v}_i, \mathbf{v}_j \rangle = \delta_{ij}, where \delta_{ij} is the Kronecker delta (1 if i = j, 0 otherwise). If this set spans the entire space, it forms an orthonormal basis, which is linearly independent and simplifies representations of vectors, as the coordinates of any vector \mathbf{x} in this basis are simply the inner products \langle \mathbf{x}, \mathbf{v}_i \rangle. Orthonormal bases also lead to orthogonal matrices, where the columns (or rows) form such a set, preserving norms and angles under transformation since P^T P = I for an orthogonal matrix P. Orthonormality is essential for theoretical results like the spectral theorem, which guarantees that every symmetric matrix (or self-adjoint operator in finite dimensions) has an orthonormal basis of eigenvectors, allowing diagonalization via A = QDQ^T where Q is orthogonal and D is diagonal. In applications, orthonormal bases facilitate efficient computations in areas such as least-squares problems, QR decompositions for solving linear systems, and projections onto subspaces. For instance, in signal processing and harmonic analysis, the normalized Fourier system of sines and cosines (or complex exponentials) provides an orthonormal basis for L^2 spaces, enabling the decomposition of functions or signals into frequency components via coefficients that are straightforward inner products. Similarly, in quantum mechanics, the eigenstates of observables form orthonormal bases, underscoring the concept's role in physical modeling.

Overview

Intuitive Explanation

Orthonormality draws a direct analogy to the perpendicular directions we encounter in everyday physical space, such as the x- and y-axes on a standard graph or map, where these axes intersect at right angles and serve as reference lines of equal, standardized scale. Just as these axes allow us to locate points without bias toward any particular direction, an orthonormal set in mathematics consists of directions (or vectors) that are mutually perpendicular and each scaled to a uniform "unit" length, providing a clean, balanced framework for describing positions and movements. At its core, orthogonality captures the idea of "no overlap" in direction—much like how north and east on a compass point independently without favoring one over the other—ensuring that components along each direction do not interfere or project onto one another. Orthonormality builds on this by enforcing that each such direction has exactly unit length, akin to using rulers of identical size along those perpendicular paths, which prevents any stretching or shrinking that could complicate measurements. This combination makes the system inherently fair and efficient, much as perpendicular shelves in a room store items without the wasted space that misalignment would cause. The practical appeal of orthonormality lies in how it streamlines coordinate-based calculations, similar to rotating a map while keeping all distances and angles intact—no distortion occurs because the reference directions remain perpendicular and uniformly scaled. This preservation of structure, rooted in the geometric properties of perpendicular unit directions, facilitates easier transformations and projections in various applications, from engineering designs to data analysis, by avoiding the need for compensatory adjustments.

Simple Example

A simple example of an orthonormal set occurs in the Euclidean plane \mathbb{R}^2 using the standard basis vectors \mathbf{v}_1 = (1, 0) and \mathbf{v}_2 = (0, 1). To verify orthonormality, compute the inner products (dot products) under the standard Euclidean inner product. First, \langle \mathbf{v}_1, \mathbf{v}_1 \rangle = 1 \cdot 1 + 0 \cdot 0 = 1, confirming \mathbf{v}_1 has unit length. Similarly, \langle \mathbf{v}_2, \mathbf{v}_2 \rangle = 0 \cdot 0 + 1 \cdot 1 = 1, so \mathbf{v}_2 also has unit length. The cross inner product is \langle \mathbf{v}_1, \mathbf{v}_2 \rangle = 1 \cdot 0 + 0 \cdot 1 = 0, showing orthogonality (zero inner product between distinct vectors). This set \{ \mathbf{v}_1, \mathbf{v}_2 \} is orthonormal because the inner products satisfy \langle \mathbf{v}_i, \mathbf{v}_j \rangle = \delta_{ij}, where \delta_{ij} is the Kronecker delta (equal to 1 if i = j and 0 otherwise). These vectors form a foundational "ruler and compass" for measuring in the plane, enabling precise coordinates and projections without scaling issues, as they align directly with the Euclidean metric.
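
This check is mechanical enough to script. Below is a minimal sketch (assuming Python with NumPy, a choice made here for illustration rather than anything the example prescribes) that verifies the three inner products computed above and repeats the test for a rotated copy of the basis:

```python
import numpy as np

# Standard basis vectors of R^2 from the example above.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# Unit norms: <v1, v1> = <v2, v2> = 1.
assert np.isclose(v1 @ v1, 1.0) and np.isclose(v2 @ v2, 1.0)

# Orthogonality: <v1, v2> = 0.
assert np.isclose(v1 @ v2, 0.0)

# Any rotation of this pair is again orthonormal, e.g. for angle t:
t = 0.3
u1 = np.array([np.cos(t), np.sin(t)])
u2 = np.array([-np.sin(t), np.cos(t)])
assert np.isclose(u1 @ u1, 1.0) and np.isclose(u2 @ u2, 1.0)
assert np.isclose(u1 @ u2, 0.0)
print("both sets are orthonormal")
```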

Formal Definition

In Inner Product Spaces

An inner product space, also known as a pre-Hilbert space, is a vector space V over the real numbers \mathbb{R} or complex numbers \mathbb{C} equipped with an inner product \langle \cdot, \cdot \rangle: V \times V \to \mathbb{F}, where \mathbb{F} is the underlying field, satisfying three key axioms for all vectors u, v, w \in V and scalars \alpha, \beta \in \mathbb{F}:
  • Linearity in the first argument: \langle \alpha u + \beta v, w \rangle = \alpha \langle u, w \rangle + \beta \langle v, w \rangle.
  • Conjugate symmetry: \langle u, v \rangle = \overline{\langle v, u \rangle}, where the bar denotes complex conjugation (this reduces to symmetry \langle u, v \rangle = \langle v, u \rangle over \mathbb{R}).
  • Positive-definiteness: \langle v, v \rangle \geq 0, with equality if and only if v = 0.
These axioms ensure the inner product behaves analogously to the standard dot product in Euclidean space. The inner product induces a norm on V defined by \|v\| = \sqrt{\langle v, v \rangle} for all v \in V, turning V into a normed vector space where the norm satisfies the properties of positivity, homogeneity, and the triangle inequality. Additionally, two vectors u, v \in V are orthogonal if \langle u, v \rangle = 0, providing a geometric interpretation of perpendicularity generalized beyond finite-dimensional Euclidean spaces. Inner product spaces generalize the Euclidean dot product from \mathbb{R}^n to arbitrary dimensions and were first conceptualized as vector spaces with such a structure by Giuseppe Peano in 1898. The framework was significantly advanced in the early 1900s through David Hilbert's work on integral equations, leading to the development of complete inner product spaces known as Hilbert spaces. This structure underpins the definition of orthonormality for sets of vectors.
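
To make the axioms concrete, the sketch below (a Python/NumPy illustration; the helper names inner and norm are chosen here for exposition, not taken from any library) implements the standard inner product on \mathbb{C}^2 and its induced norm, and spot-checks conjugate symmetry and positive-definiteness:

```python
import numpy as np

def inner(u: np.ndarray, v: np.ndarray) -> complex:
    """Standard inner product on C^n: <u, v> = sum_i u_i conj(v_i).
    Linear in the first argument, conjugate-symmetric."""
    return complex(np.vdot(v, u))  # np.vdot conjugates its *first* argument

def norm(v: np.ndarray) -> float:
    """Norm induced by the inner product: ||v|| = sqrt(<v, v>)."""
    return float(np.sqrt(inner(v, v).real))

u = np.array([1 + 1j, 2.0])
v = np.array([2.0, 1 - 1j])

# Conjugate symmetry: <u, v> = conj(<v, u>).
assert np.isclose(inner(u, v), np.conjugate(inner(v, u)))

# Positive-definiteness in action: <u, u> > 0 for u != 0.
assert inner(u, u).real > 0
print(norm(u))  # sqrt(|1+i|^2 + |2|^2) = sqrt(6)
```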

Orthonormal Sets

In an inner product space, an orthonormal set is a collection of vectors \{v_i\}_{i \in I} such that the inner product satisfies \langle v_i, v_j \rangle = \delta_{ij} for all indices i, j \in I, where \delta_{ij} denotes the Kronecker delta, which equals 1 if i = j and 0 otherwise. This condition ensures that each vector is orthogonal to every other distinct vector in the set and has unit norm. An orthonormal set is thus equivalent to an orthogonal set—meaning \langle v_i, v_j \rangle = 0 for all i \neq j—in which every vector additionally has norm \|v_i\| = 1. The normalization to unit length distinguishes orthonormality from mere orthogonality, providing a standardized basis for computations in the space. While finite orthonormal sets are common in finite-dimensional spaces, general inner product spaces admit infinite orthonormal sets; in a separable space any orthonormal set is at most countable, though non-separable spaces allow uncountable ones.
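
The Kronecker-delta condition can be tested in one step: stacking the vectors as the columns of a matrix V, the set is orthonormal exactly when the Gram matrix V^* V equals the identity. A minimal sketch (assuming NumPy; the helper is_orthonormal is illustrative, not a library routine):

```python
import numpy as np

def is_orthonormal(vectors, tol=1e-10):
    """Test <v_i, v_j> = delta_ij by forming the Gram matrix V* V,
    which equals the identity exactly when the set is orthonormal."""
    V = np.column_stack(vectors)
    gram = V.conj().T @ V
    return bool(np.allclose(gram, np.eye(V.shape[1]), atol=tol))

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w1, w2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])

print(is_orthonormal([e1, e2]))  # True
print(is_orthonormal([w1, w2]))  # False: orthogonal, but norms are sqrt(2)
print(is_orthonormal([w1 / np.sqrt(2), w2 / np.sqrt(2)]))  # True after normalization
```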

Properties and Theorems

Basic Properties

Orthonormal sets exhibit several fundamental algebraic properties that arise directly from their defining characteristics. A key property is linear independence: if \{v_1, \dots, v_n\} is an orthonormal set in an inner product space and \sum_{k=1}^n c_k v_k = 0 for scalars c_k, then each c_k = 0. To see this, take the inner product of both sides with v_j: \left\langle \sum_{k=1}^n c_k v_k, v_j \right\rangle = \langle 0, v_j \rangle = 0. By orthonormality, this simplifies to c_j \langle v_j, v_j \rangle = c_j = 0 for each j, since \langle v_j, v_j \rangle = 1. Another essential property concerns the expansion of vectors within the span of an orthonormal set. For any vector v in the span of \{v_1, \dots, v_n\}, it can be uniquely expressed as v = \sum_{j=1}^n c_j v_j, where the coefficients are given by c_j = \langle v, v_j \rangle. This follows from substituting the expansion into the inner product with v_i: \langle v, v_i \rangle = \left\langle \sum_{j=1}^n c_j v_j, v_i \right\rangle = \sum_{j=1}^n c_j \langle v_j, v_i \rangle = c_i, since \langle v_j, v_i \rangle = \delta_{ij}, the Kronecker delta (which equals 1 if i=j and 0 otherwise). In the context of matrix representations, the change-of-basis matrix from a standard basis to an orthonormal basis (or between two orthonormal bases) is unitary. Such a matrix U satisfies U^* U = I, where U^* is the conjugate transpose, and it preserves inner products: \langle U x, U y \rangle = \langle x, y \rangle for all vectors x, y. This preservation ensures that distances and angles remain unchanged under the transformation, reflecting the geometric invariance of orthonormality.
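
The expansion and preservation properties are easy to confirm numerically. The sketch below (assuming NumPy; the orthonormal basis is generated via a QR factorization purely for convenience) recovers a vector from its inner-product coefficients and checks that the orthogonal matrix preserves inner products and norms:

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal basis of R^4: QR factorization of a random matrix
# yields Q with orthonormal columns.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

x = rng.standard_normal(4)
y = rng.standard_normal(4)

# Expansion coefficients are plain inner products: c_j = <x, q_j>.
c = Q.T @ x
assert np.allclose(Q @ c, x)  # x = sum_j c_j q_j

# The change-of-basis matrix preserves inner products and norms:
# <Qx, Qy> = <x, y> and ||Qx|| = ||x||.
assert np.isclose((Q @ x) @ (Q @ y), x @ y)
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```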

Existence of Orthonormal Bases

Every finite-dimensional inner product space possesses an orthonormal basis. This existence is established by applying the Gram-Schmidt process to any basis of the space. The Gram-Schmidt process provides an explicit construction of an orthonormal basis from a linearly independent set \{v_1, v_2, \dots, v_n\}. It proceeds iteratively: for each k = 1, 2, \dots, n, compute the orthogonal component u_k = v_k - \sum_{j=1}^{k-1} \frac{\langle v_k, u_j \rangle}{\|u_j\|^2} u_j, and then normalize to obtain e_k = u_k / \|u_k\|, yielding the orthonormal set \{e_1, e_2, \dots, e_n\}. This algorithm was formalized by Erhard Schmidt in a 1907 paper on the theory of integral equations. In the infinite-dimensional setting, every Hilbert space admits an orthonormal basis. The proof uses Zorn's lemma to select a maximal orthonormal set, which spans a dense subspace whose closure is the entire space. For separable Hilbert spaces, such a basis can be chosen to be countable.
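
A direct translation of this iteration into code might look as follows (a Python/NumPy sketch; it applies the projections in the numerically more stable "modified" order, which is algebraically equivalent to the formula above):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors.
    Each v is orthogonalized against the already-normalized e_j, so the
    projection coefficient <v, e_j> matches the text's
    <v_k, u_j> / ||u_j||^2 with ||e_j|| = 1."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u = u - (u @ e) * e  # subtract the projection onto e
        n = np.linalg.norm(u)
        if n < 1e-12:
            raise ValueError("vectors are linearly dependent")
        basis.append(u / n)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
E = np.column_stack(gram_schmidt(vs))
assert np.allclose(E.T @ E, np.eye(3))  # result is orthonormal
```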

Examples and Applications

Finite-Dimensional Spaces

In finite-dimensional real Euclidean spaces such as \mathbb{R}^n equipped with the standard dot product, the standard basis \{e_1, e_2, \dots, e_n\} provides a fundamental example of an orthonormal set. Each basis vector e_i is defined as the column vector with a 1 in the i-th position and 0s elsewhere, ensuring that the inner product \langle e_i, e_j \rangle = \delta_{ij}, where \delta_{ij} is the Kronecker delta (equal to 1 if i = j and 0 otherwise). This orthonormality simplifies coordinate representations, as any vector x = (x_1, \dots, x_n)^T expands as x = \sum_{i=1}^n x_i e_i, with coefficients directly given by the inner products x_i = \langle x, e_i \rangle. More generally, any orthonormal basis in \mathbb{R}^n can be obtained by rotating or reflecting the standard basis, corresponding to the columns of an orthogonal matrix Q. An n \times n matrix Q is orthogonal if its columns form an orthonormal set, satisfying Q^T Q = I, where I is the identity matrix; equivalently, the rows are also orthonormal. Such matrices preserve the Euclidean norm and inner product, as \|Qx\| = \|x\| and \langle Qx, Qy \rangle = \langle x, y \rangle for all vectors x, y. Orthonormal bases can be constructed from arbitrary bases using the Gram-Schmidt process. A key application of orthonormal bases in finite dimensions arises in solving linear systems Ax = b, particularly when A is symmetric and thus orthogonally diagonalizable as A = QDQ^T with Q orthogonal and D diagonal containing the eigenvalues. Substituting yields QD(Q^T x) = b, or letting y = Q^T x, the system simplifies to Dy = Q^T b, which is solved componentwise by division since D has nonzero entries on the diagonal (assuming A is invertible). The solution is then x = Qy, leveraging the orthonormality of Q's columns for efficient computation. This transformation reduces the problem to scalar divisions, highlighting the computational advantages of orthonormal coordinates.
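
The following sketch (assuming NumPy, whose eigh routine returns orthonormal eigenvectors for a symmetric input matrix) walks through exactly this reduction and checks the result against a general-purpose solver:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M + M.T + 8 * np.eye(4)  # symmetric; shifted by 8I to keep the
                             # spectrum away from zero (a convenience)
b = rng.standard_normal(4)

# Orthogonal diagonalization A = Q D Q^T.
d, Q = np.linalg.eigh(A)

# In the eigenbasis the system decouples: D y = Q^T b, with y = Q^T x.
y = (Q.T @ b) / d
x = Q @ y

assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.solve(A, b))
```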

Infinite-Dimensional Spaces

In infinite-dimensional settings, the concept of orthonormality is generalized within Hilbert spaces, which are complete inner product spaces where every Cauchy sequence converges to an element in the space. Completeness ensures that orthonormal sets can be extended to bases that span the space in a suitable sense, distinguishing these spaces from mere inner product spaces. An orthonormal basis in a Hilbert space H is a maximal orthonormal set \{ e_n \}_{n \in I}, where the index set I is countable when H is separable, such that the closed linear span of \{ e_n \} is all of H. This means that every element f \in H can be represented as an infinite linear combination f = \sum_{n \in I} \langle f, e_n \rangle e_n, with convergence in the norm topology. Every Hilbert space admits such an orthonormal basis, a result that relies on Zorn's lemma applied to partially ordered orthonormal sets, or on the Gram-Schmidt process applied to a countable dense subset in the separable case. A concrete example is the space L^2[0,1] of real-valued square-integrable functions on the interval [0,1], equipped with the inner product \langle f, g \rangle = \int_0^1 f(x) g(x) \, dx. This is a separable Hilbert space; one concrete orthonormal basis is the trigonometric system \{1, \sqrt{2}\cos(2\pi n x), \sqrt{2}\sin(2\pi n x) \mid n = 1, 2, \dots\}, whose finite linear combinations are dense in L^2[0,1] with respect to the L^2-norm. In contrast to general Banach spaces, where a Schauder basis provides unique expansions but may not be orthonormal, every separable Hilbert space possesses an orthonormal Schauder basis due to the inner product structure allowing orthogonalization. Not all Schauder bases in Hilbert spaces are orthonormal, but the existence of an orthonormal one simplifies expansions and preserves the inner product via Parseval's theorem, which states that \|f\|^2 = \sum_{n} |\langle f, e_n \rangle|^2 for any f and orthonormal basis \{e_n\}.
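
Orthonormality in L^2[0,1] can be checked approximately by quadrature. The sketch below (assuming NumPy, with a simple midpoint rule standing in for the integral) computes the Gram matrix of the first few trigonometric basis functions named above and confirms it is numerically the identity:

```python
import numpy as np

# Midpoint rule on [0, 1]: the mean of samples approximates the integral.
n = 20000
x = (np.arange(n) + 0.5) / n

def ip(f, g):
    """Approximate the L^2[0,1] inner product <f, g> = integral of f*g dx."""
    return float(np.mean(f(x) * g(x)))

# First few members of the trigonometric orthonormal system in L^2[0,1]:
# 1, sqrt(2) cos(2 pi n x), sqrt(2) sin(2 pi n x).
funcs = [
    lambda t: np.ones_like(t),
    lambda t: np.sqrt(2) * np.cos(2 * np.pi * t),
    lambda t: np.sqrt(2) * np.sin(2 * np.pi * t),
    lambda t: np.sqrt(2) * np.cos(4 * np.pi * t),
]

gram = np.array([[ip(f, g) for g in funcs] for f in funcs])
assert np.allclose(gram, np.eye(len(funcs)), atol=1e-6)
print(np.round(gram, 8))  # numerically the identity matrix
```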

Fourier Analysis

In Fourier analysis, the functions \left\{ \frac{1}{\sqrt{2\pi}}, \frac{\cos(nx)}{\sqrt{\pi}}, \frac{\sin(nx)}{\sqrt{\pi}} \mid n = 1, 2, \dots \right\} form an orthonormal basis for the Hilbert space L^2[-\pi, \pi] equipped with the inner product \langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x) \, dx. Orthonormality is verified through integral identities: for distinct basis functions \phi_m and \phi_n, \langle \phi_m, \phi_n \rangle = 0, while \langle \phi_n, \phi_n \rangle = 1 for each n, as the integrals of \cos(nx) \cos(mx) and \sin(nx) \sin(mx) over [-\pi, \pi] equal \pi \delta_{mn}, the mixed products \cos(nx) \sin(mx) integrate to zero, and the squared constant function integrates to 2\pi, all before normalization. Any square-integrable function f \in L^2[-\pi, \pi] admits an expansion f(x) = \sum c_n \phi_n(x), where the Fourier coefficients are given by c_n = \langle f, \phi_n \rangle = \int_{-\pi}^{\pi} f(x) \phi_n(x) \, dx. This series converges to f in the L^2 norm, decomposing the function into its frequency components. Parseval's identity establishes energy conservation in this representation: \|f\|^2 = \int_{-\pi}^{\pi} |f(x)|^2 \, dx = \sum |c_n|^2, linking the total energy of the signal to the sum of squared coefficient magnitudes. In modern digital signal processing, the discrete Fourier transform (DFT) extends this framework to finite sequences, where the columns of the unitary DFT matrix (with entries F_{jk} = \omega^{jk}/\sqrt{N}, \omega = e^{-2\pi i / N}) form an orthonormal basis for \mathbb{C}^N, enabling efficient frequency-domain analysis while preserving energy via unitarity.
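
The unitarity of the normalized DFT matrix, and with it the discrete form of Parseval's identity, can be verified directly (a NumPy sketch; the comparison at the end uses np.fft.fft with its built-in "ortho" normalization):

```python
import numpy as np

N = 8
n = np.arange(N)
# Unitary DFT matrix: entries omega^(jk) / sqrt(N), omega = exp(-2*pi*i/N).
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# Columns form an orthonormal basis of C^N: F* F = I.
assert np.allclose(F.conj().T @ F, np.eye(N))

rng = np.random.default_rng(2)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Unitarity is Parseval's identity in this setting: ||F x|| = ||x||.
assert np.isclose(np.linalg.norm(F @ x), np.linalg.norm(x))

# F matches NumPy's FFT under the orthonormal ("ortho") normalization.
assert np.allclose(F @ x, np.fft.fft(x, norm="ortho"))
```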
