
Coordinate vector

In linear algebra, a coordinate vector is the ordered list of scalars that represent a vector in a finite-dimensional vector space as a linear combination of the vectors in a chosen ordered basis. If B = \{\mathbf{b}_1, \mathbf{b}_2, \dots, \mathbf{b}_n\} is an ordered basis for the vector space V and \mathbf{v} \in V satisfies \mathbf{v} = c_1 \mathbf{b}_1 + c_2 \mathbf{b}_2 + \dots + c_n \mathbf{b}_n, then the coordinate vector of \mathbf{v} relative to B, denoted [\mathbf{v}]_B, is the column vector \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}. By the unique representation theorem, these coefficients are unique for any basis, ensuring that every vector in V has exactly one such representation. This coordinatization is fundamental because it establishes an isomorphism between the abstract vector space V and the coordinate space \mathbb{R}^n, via the coordinate mapping that sends each vector to its coordinate vector; this mapping is a bijective linear map, allowing computations in V to be performed equivalently in the more concrete setting of \mathbb{R}^n using matrix operations. Coordinate vectors thus enable the matrix representation of linear transformations, where the matrix of a transformation relative to bases B for the domain and C for the codomain has columns that are the coordinate vectors of the images of the basis vectors in B under the transformation. Changes between different bases are handled by invertible change-of-basis matrices, which transform coordinate vectors via multiplication, preserving the underlying linear structure. In applications, coordinate vectors are essential for solving systems of linear equations, diagonalizing matrices, and understanding eigenvalues, as they bridge geometric intuitions with algebraic manipulations across various fields such as physics and engineering.

Basic Concepts

Definition

In linear algebra, a vector space V over a field F (such as the real numbers \mathbb{R} or complex numbers \mathbb{C}) is a set equipped with operations of vector addition and scalar multiplication satisfying certain axioms, including closure, associativity, commutativity, and the existence of additive identities and inverses. A basis B = \{b_1, \dots, b_n\} for a finite-dimensional vector space V is a linearly independent set of vectors that spans V, meaning every vector in V can be uniquely expressed as a finite linear combination of the basis vectors. Given such a basis B and a vector v \in V, the coordinate vector of v with respect to B, denoted [v]_B, is the unique ordered n-tuple (c_1, \dots, c_n) \in F^n of scalars satisfying the equation v = c_1 b_1 + \dots + c_n b_n = \sum_{i=1}^n c_i b_i. This representation allows vectors in abstract spaces to be identified with tuples of coordinates in a concrete space like F^n, facilitating computations and analysis. The uniqueness of the coordinates c_1, \dots, c_n follows directly from the linear independence of the basis B: if v admitted two distinct representations \sum c_i b_i = \sum d_i b_i, then \sum (c_i - d_i) b_i = 0 with not all coefficients zero, contradicting independence. Thus, the mapping from vectors to their coordinate tuples is a linear isomorphism between V and F^n. The concept of coordinate vectors emerged in the mid-19th century as part of foundational developments in linear algebra, particularly through William Rowan Hamilton's introduction of quaternions and vector methods in 1843–1844, and Hermann Grassmann's "Die lineale Ausdehnungslehre" (1844), which formalized linear combinations and n-dimensional extensions of vector spaces, initially emphasizing finite dimensions.
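The dependence of uniqueness on linear independence can be illustrated numerically. The sketch below (an illustrative example with arbitrarily chosen vectors, assuming NumPy is available) contrasts a basis of \mathbb{R}^2, for which the coefficient system has exactly one solution, with a spanning but linearly dependent set, for which infinitely many coefficient tuples represent the same vector.

```python
import numpy as np

v = np.array([3.0, 4.0])

# A basis of R^2: the square matrix is invertible, so the coordinates are unique.
B = np.column_stack([[1.0, 0.0], [0.0, 1.0]])
print(np.linalg.solve(B, v))                  # [3. 4.]  (the unique coordinates)

# A spanning but linearly dependent set: the system is underdetermined.
A = np.column_stack([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
c1 = np.linalg.lstsq(A, v, rcond=None)[0]     # one solution (minimum norm)
c2 = c1 + np.array([1.0, 1.0, -1.0])          # (1, 1, -1) lies in the null space of A
print(np.allclose(A @ c1, v), np.allclose(A @ c2, v))   # True True
```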

Coordinate Representation

In linear algebra, the coordinate representation of a vector v with respect to an ordered basis B = \{b_1, b_2, \dots, b_n\} for a finite-dimensional vector space V over a field F is denoted by [v]_B, which is an n \times 1 column vector in F^n. This notation captures the unique scalars c_1, c_2, \dots, c_n \in F such that v = c_1 b_1 + c_2 b_2 + \dots + c_n b_n, with [v]_B = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}. The coordinate map \phi: V \to F^n defined by \phi(v) = [v]_B establishes an isomorphism between V and F^n, meaning it is a bijective linear map. Explicitly, if [v]_B = (c_1, c_2, \dots, c_n)^T, then \phi(v) = \sum_{i=1}^n c_i e_i, where \{e_1, e_2, \dots, e_n\} is the standard basis for F^n with e_i having a 1 in the i-th position and 0s elsewhere. This representation preserves the vector space structure through linearity: for any v, w \in V and scalars \alpha, \beta \in F, \phi(\alpha v + \beta w) = \alpha \phi(v) + \beta \phi(w), ensuring that addition and scalar multiplication in V correspond directly to those in F^n. The use of column vectors distinguishes this practical form from abstract coordinate tuples by facilitating matrix operations, such as multiplication by transformation matrices, which align naturally with the column-oriented computations in F^n.
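When V is a coordinate space such as \mathbb{R}^n and the basis vectors are given in standard coordinates, the coordinate map can be evaluated by solving a linear system. The following is a minimal sketch under that assumption, using an arbitrarily chosen basis of \mathbb{R}^3; it computes \phi(v) = [v]_B with NumPy and checks the linearity property numerically.

```python
import numpy as np

# Basis vectors of R^3, chosen arbitrarily for illustration (linearly independent).
b1, b2, b3 = np.array([1., 0., 1.]), np.array([1., 1., 0.]), np.array([0., 1., 1.])
B = np.column_stack([b1, b2, b3])      # columns are the basis vectors

def coord(v):
    """Coordinate map phi: returns [v]_B by solving B c = v."""
    return np.linalg.solve(B, v)

v = np.array([2., 3., 4.])
w = np.array([-1., 0., 5.])
alpha, beta = 2.0, -3.0

# Linearity: phi(alpha*v + beta*w) == alpha*phi(v) + beta*phi(w)
print(np.allclose(coord(alpha * v + beta * w),
                  alpha * coord(v) + beta * coord(w)))   # True

# Reconstruction: v == c1*b1 + c2*b2 + c3*b3
print(np.allclose(B @ coord(v), v))                      # True
```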

Finite-Dimensional Examples

Euclidean Space Example

In the Euclidean space \mathbb{R}^2, the standard basis is given by the vectors \mathbf{e}_1 = (1, 0) and \mathbf{e}_2 = (0, 1), which align with the conventional Cartesian coordinate axes. For a vector \mathbf{v} = (3, 4), the coordinate vector with respect to this basis is \begin{pmatrix} 3 \\ 4 \end{pmatrix}, since \mathbf{v} = 3\mathbf{e}_1 + 4\mathbf{e}_2. This representation directly corresponds to the components of \mathbf{v} in the usual x-y plane. To illustrate coordinate vectors with respect to a non-standard basis, consider the basis B = \{(1, 1), (1, -1)\}. For the same \mathbf{v} = (3, 4), the coordinates [\mathbf{v}]_B are found by solving the vector equation c_1 (1, 1) + c_2 (1, -1) = (3, 4), which yields the linear system: \begin{cases} c_1 + c_2 = 3 \\ c_1 - c_2 = 4 \end{cases} Adding the equations gives 2c_1 = 7, so c_1 = 3.5; subtracting them gives 2c_2 = -1, so c_2 = -0.5. Thus, [\mathbf{v}]_B = \begin{pmatrix} 3.5 \\ -0.5 \end{pmatrix}. This computation can be generalized by forming the matrix V whose columns are the basis vectors: V = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, and solving the matrix equation V \mathbf{c} = \mathbf{v} for the coordinate vector \mathbf{c}, where \mathbf{v} = \begin{pmatrix} 3 \\ 4 \end{pmatrix}. The solution is obtained via matrix inversion, yielding V^{-1} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, or through row reduction of the augmented matrix [V \mid \mathbf{v}]. Both methods confirm \mathbf{c} = \begin{pmatrix} 3.5 \\ -0.5 \end{pmatrix}. Geometrically, the coordinates in a non-standard basis like B represent the scalings needed to reach \mathbf{v} by traveling along the directions defined by the basis vectors, similar to navigating a coordinate grid rotated by 45 degrees, such as in a diagonal street layout where movements are measured along slanted axes rather than horizontal and vertical ones. This provides an alternative way to decompose \mathbf{v} into components aligned with the chosen basis directions.
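For readers who want to reproduce this computation, here is a minimal NumPy sketch of the matrix approach described above; np.linalg.solve plays the role of the row reduction or inversion step.

```python
import numpy as np

V = np.array([[1.0,  1.0],
              [1.0, -1.0]])       # columns are the basis vectors (1, 1) and (1, -1)
v = np.array([3.0, 4.0])

c = np.linalg.solve(V, v)         # solve V c = v (no explicit inverse needed)
print(c)                          # [ 3.5 -0.5]

# Cross-check with the explicit inverse V^{-1} = (1/2) [[1, 1], [1, -1]]
print(np.linalg.inv(V) @ v)       # [ 3.5 -0.5]
```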

Polynomial Vector Space Example

The vector space P_2 consists of all polynomials of degree at most 2 over the real numbers, with addition and scalar multiplication defined pointwise: for p(x) = a x^2 + b x + c and q(x) = d x^2 + e x + f, (p + q)(x) = (a + d) x^2 + (b + e) x + (c + f) and (\alpha p)(x) = (\alpha a) x^2 + (\alpha b) x + (\alpha c). This forms a 3-dimensional vector space. A standard basis for P_2 is B = \{1, x, x^2\}, which is ordered and linearly independent, spanning the space since any p(x) = a x^2 + b x + c can be uniquely expressed as p(x) = c \cdot 1 + b \cdot x + a \cdot x^2. The coordinate vector of p with respect to B, denoted [p]_B, is the column vector of coefficients in this expansion, ordered by the basis: [p]_B = \begin{pmatrix} c \\ b \\ a \end{pmatrix}. For example, consider p(x) = 2x^2 + 3x - 1. Here, a = 2, b = 3, c = -1, so [p]_B = \begin{pmatrix} -1 \\ 3 \\ 2 \end{pmatrix}. This representation allows algebraic operations on polynomials to correspond directly to vector addition and scalar multiplication in \mathbb{R}^3. To illustrate coordinates with respect to a nonstandard basis, consider the ordered basis B' = \{x^2 - x, x + 1, 1\} for P_2, which is also linearly independent and spans the space. For the same p(x) = 2x^2 + 3x - 1, the coordinates [\alpha, \beta, \gamma]^T satisfy p(x) = \alpha (x^2 - x) + \beta (x + 1) + \gamma (1). Expanding the right side gives \alpha x^2 + (-\alpha + \beta) x + (\beta + \gamma). Equating coefficients with p(x) yields the linear system: \alpha = 2, \quad -\alpha + \beta = 3, \quad \beta + \gamma = -1. Solving step-by-step: \alpha = 2, then \beta = 3 + \alpha = 5, and \gamma = -1 - \beta = -6. Thus, [p]_{B'} = \begin{pmatrix} 2 \\ 5 \\ -6 \end{pmatrix}. This demonstrates how coordinates depend on the chosen basis, requiring the solution of a linear system for non-monomial bases. Coordinate representations in polynomial spaces are particularly relevant in interpolation problems, where one seeks a polynomial passing through given points; the coefficients, or coordinates with respect to the monomial basis, solve a Vandermonde system derived from the interpolation conditions.
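The same change to a non-monomial basis can be carried out numerically by encoding each polynomial as its coordinate vector in the monomial basis \{1, x, x^2\} and solving the resulting 3 \times 3 system. The following sketch is one way to do this with NumPy; the encoding convention (constant term first) follows the ordering used above.

```python
import numpy as np

# Columns: [x^2 - x]_B, [x + 1]_B, [1]_B, each written in the order (constant, x, x^2).
M = np.array([[ 0.0, 1.0, 1.0],
              [-1.0, 1.0, 0.0],
              [ 1.0, 0.0, 0.0]])

p = np.array([-1.0, 3.0, 2.0])     # [p]_B for p(x) = 2x^2 + 3x - 1

coords = np.linalg.solve(M, p)     # solve M [alpha, beta, gamma]^T = [p]_B
print(coords)                      # [ 2.  5. -6.]  ->  [p]_{B'} = (2, 5, -6)
```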

Change of Basis

Transition Matrix

In a finite-dimensional vector space V over a field F, the transition matrix provides a means to convert coordinate vectors between two distinct bases, ensuring consistent representation of vectors regardless of the chosen basis. For ordered bases \mathcal{B} = \{\mathbf{b}_1, \dots, \mathbf{b}_n\} and \mathcal{C} = \{\mathbf{c}_1, \dots, \mathbf{c}_n\} of V, the transition matrix P_{\mathcal{C} \to \mathcal{B}} is the unique n \times n matrix whose columns are the coordinate vectors of the \mathcal{C}-basis vectors with respect to \mathcal{B}. That is, the i-th column of P_{\mathcal{C} \to \mathcal{B}} is [\mathbf{c}_i]_{\mathcal{B}}, the column vector of coefficients expressing \mathbf{c}_i as a linear combination of the \mathcal{B}-basis vectors. The role of this matrix in coordinate transformation is captured by the equation [\mathbf{v}]_{\mathcal{B}} = P_{\mathcal{C} \to \mathcal{B}} [\mathbf{v}]_{\mathcal{C}} for any vector \mathbf{v} \in V, where [\mathbf{v}]_{\mathcal{B}} and [\mathbf{v}]_{\mathcal{C}} denote the coordinate vectors of \mathbf{v} in the respective bases. This linear relation arises because the coordinate vectors satisfy \mathbf{v} = P_{\mathcal{B}} [\mathbf{v}]_{\mathcal{B}} = P_{\mathcal{C}} [\mathbf{v}]_{\mathcal{C}}, where P_{\mathcal{B}} and P_{\mathcal{C}} are the matrices whose columns are the basis vectors of \mathcal{B} and \mathcal{C} expressed in some fixed basis; solving yields the matrix product form above. To construct P_{\mathcal{C} \to \mathcal{B}}, express each \mathbf{c}_i as \mathbf{c}_i = \sum_{j=1}^n p_{ji} \mathbf{b}_j, where the coefficients p_{ji} form the entries of the i-th column [\mathbf{c}_i]_{\mathcal{B}}. This process leverages the spanning property of \mathcal{B} and the linear independence of both bases to ensure the representation is unique. Since \mathcal{B} and \mathcal{C} are bases, P_{\mathcal{C} \to \mathcal{B}} is an invertible matrix, with its inverse given by P_{\mathcal{B} \to \mathcal{C}} = P_{\mathcal{C} \to \mathcal{B}}^{-1}, which similarly has columns [\mathbf{b}_j]_{\mathcal{C}}. This invertibility guarantees a bijective correspondence between the coordinate systems.
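When both bases are specified by the standard coordinates of their vectors, the columns [\mathbf{c}_i]_{\mathcal{B}} can be computed by solving P_{\mathcal{B}} x = \mathbf{c}_i for each i, or all at once as P_{\mathcal{B}}^{-1} P_{\mathcal{C}}. The sketch below illustrates this in \mathbb{R}^2 with arbitrarily chosen bases and checks the defining relation [\mathbf{v}]_{\mathcal{B}} = P_{\mathcal{C} \to \mathcal{B}} [\mathbf{v}]_{\mathcal{C}} and the invertibility property.

```python
import numpy as np

# Bases given by their standard coordinates (as columns); chosen arbitrarily.
P_B = np.column_stack([[1.0, 1.0], [1.0, -1.0]])   # B = {(1, 1), (1, -1)}
P_C = np.column_stack([[2.0, 0.0], [1.0, 3.0]])    # C = {(2, 0), (1, 3)}

# i-th column of P_{C->B} is [c_i]_B, i.e. the solution of P_B x = c_i.
P_CtoB = np.linalg.solve(P_B, P_C)

v = np.array([5.0, -1.0])
v_B = np.linalg.solve(P_B, v)                      # [v]_B
v_C = np.linalg.solve(P_C, v)                      # [v]_C

print(np.allclose(P_CtoB @ v_C, v_B))              # True: [v]_B = P_{C->B} [v]_C
print(np.allclose(np.linalg.inv(P_CtoB),
                  np.linalg.solve(P_C, P_B)))      # True: inverse equals P_{B->C}
```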

Change of Coordinates Formula

The change of coordinates formula arises from the equality of vector representations in different bases. Consider a vector \mathbf{v} in a finite-dimensional vector space V, expressed with respect to basis B = \{\mathbf{b}_1, \dots, \mathbf{b}_n\} as \mathbf{v} = B [\mathbf{v}]_B, where B is the matrix whose columns are the basis vectors \mathbf{b}_i (assuming coordinates in a standard basis), and [\mathbf{v}]_B is the coordinate column vector. Similarly, with respect to basis C = \{\mathbf{c}_1, \dots, \mathbf{c}_n\}, \mathbf{v} = C [\mathbf{v}]_C. Equating these gives B [\mathbf{v}]_B = C [\mathbf{v}]_C, so [\mathbf{v}]_C = C^{-1} B [\mathbf{v}]_B. The change of basis matrix P_{C \leftarrow B} = C^{-1} B thus transforms coordinates from B to C: [\mathbf{v}]_C = P_{C \leftarrow B} [\mathbf{v}]_B. Inversely, [\mathbf{v}]_B = P_{B \leftarrow C} [\mathbf{v}]_C, where P_{B \leftarrow C} = B^{-1} C = (P_{C \leftarrow B})^{-1}, confirming the transformation is invertible since bases are linearly independent. This formula preserves fundamental properties: linear independence of sets (as coordinate transformations are bijective), and the dimension of subspaces (unchanged under basis change). For inner product spaces with orthonormal bases, if the change matrix P is orthogonal (P^T P = I), it preserves inner products: \langle [\mathbf{v}]_B, [\mathbf{w}]_B \rangle = \langle [\mathbf{v}]_C, [\mathbf{w}]_C \rangle, maintaining geometric structure like angles and lengths. A key corollary concerns linear operators: the matrix A_B of an operator T: V \to V with respect to basis B relates to A_C by the similarity transformation A_B = P^{-1} A_C P, where P = P_{C \leftarrow B}, ensuring that eigenvalues and the characteristic polynomial are basis-independent. For numerical verification, consider n=2, B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, C = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \mathbf{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, so [\mathbf{v}]_B = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. Then P = C^{-1} B = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}, and [\mathbf{v}]_C = P [\mathbf{v}]_B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, recoverable by the inverse as P^{-1} [\mathbf{v}]_C = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, confirming the formula.
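A short NumPy sketch reproducing this n = 2 verification (not an independent derivation):

```python
import numpy as np

B = np.eye(2)                            # standard basis as columns
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # basis C as columns
v = np.array([1.0, 1.0])

v_B = np.linalg.solve(B, v)              # [v]_B = (1, 1)
P = np.linalg.solve(C, B)                # P_{C<-B} = C^{-1} B
v_C = P @ v_B
print(v_C)                               # [0. 1.]
print(np.linalg.solve(P, v_C))           # [1. 1.]  (recovered via P^{-1})
```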

Infinite-Dimensional Extensions

Hilbert Space Example

In a separable Hilbert space H, every vector v \in H admits a unique representation with respect to a countable orthonormal basis \{e_n\}_{n=1}^\infty, given by the infinite series v = \sum_{n=1}^\infty \langle v, e_n \rangle e_n, where \langle \cdot, \cdot \rangle denotes the inner product and the series converges in the norm topology of H. The "coordinate vector" of v is then the infinite sequence (\langle v, e_1 \rangle, \langle v, e_2 \rangle, \dots ) \in \ell^2, which belongs to the space of square-summable sequences by Bessel's inequality. This sequence fully determines v via the basis expansion, extending the finite-dimensional notion of coordinates to infinite dimensions, though it is represented not as a finite column vector but as a formal series or infinite sequence. Parseval's identity provides the key relation for these coordinates, stating that \|v\|_H^2 = \sum_{n=1}^\infty |\langle v, e_n \rangle|^2 whenever \{e_n\} is a complete orthonormal basis, ensuring the coordinate sequence captures the entire norm of v. For any orthonormal set (not necessarily complete), Bessel's inequality holds: \sum_{n=1}^\infty |\langle v, e_n \rangle|^2 \leq \|v\|_H^2, with equality for every v if and only if the set is complete; completeness likewise guarantees convergence of the partial sums of the expansion to v in the Hilbert space norm. Completeness of the basis thus ensures that the coordinate representation is both unique and exhaustive. A concrete example arises in the Hilbert space L^2[-\pi, \pi] of square-integrable functions on [-\pi, \pi] with respect to Lebesgue measure, equipped with the inner product \langle f, g \rangle = \int_{-\pi}^\pi f(x) \overline{g(x)} \, dx. The set \{e_n(x) = \frac{1}{\sqrt{2\pi}} e^{i n x} \}_{n=-\infty}^\infty forms a complete orthonormal basis, and for any f \in L^2[-\pi, \pi], the coordinates are the Fourier coefficients c_n = \langle f, e_n \rangle = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^\pi f(x) e^{-i n x} \, dx, yielding the expansion f(x) = \sum_{n=-\infty}^\infty c_n e_n(x) in the L^2 norm. These coefficients satisfy Parseval's identity, \int_{-\pi}^\pi |f(x)|^2 \, dx = \sum_{n=-\infty}^\infty |c_n|^2, and Bessel's inequality applies to partial sums of the series. This framework illustrates how coordinate vectors in Hilbert spaces facilitate analysis of continuous functions through discrete sequences.
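A numerical sketch of these coordinates, under the simplifying assumptions that f(x) = x is used as a test function and that the integrals are approximated by a trapezoid rule on a fine grid, is shown below; the partial sums of |c_n|^2 approach \|f\|^2 = 2\pi^3/3, consistent with Parseval's identity.

```python
import numpy as np

# Grid and test function f(x) = x on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]
f = x

def integrate(y):
    """Composite trapezoid rule on the fixed grid."""
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def c(n):
    """Fourier coefficient c_n = <f, e_n> with e_n(x) = exp(i n x) / sqrt(2 pi)."""
    e_n = np.exp(1j * n * x) / np.sqrt(2 * np.pi)
    return integrate(f * np.conj(e_n))

norm_sq = integrate(np.abs(f) ** 2)                    # ||f||^2 = 2 pi^3 / 3 ~ 20.67
partial = sum(abs(c(n)) ** 2 for n in range(-200, 201))
print(norm_sq)
print(partial)                                         # approaches ||f||^2 as more terms are added
```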

Sequence Space Example

The space \ell^2 is the Hilbert space of square-summable complex sequences, equipped with the inner product \langle x, y \rangle = \sum_{n=1}^\infty x_n \overline{y_n} for x = (x_1, x_2, \dots) and y = (y_1, y_2, \dots), where the norm is defined by \|x\|^2 = \sum_{n=1}^\infty |x_n|^2 < \infty. It admits a standard orthonormal basis \{e_n\}_{n=1}^\infty, where each e_n is the sequence with 1 in the nth position and 0 elsewhere, satisfying \langle e_m, e_n \rangle = \delta_{mn} (the Kronecker delta). Any vector v \in \ell^2 has a unique coordinate representation with respect to this basis, given by v = \sum_{n=1}^\infty v_n e_n, where the coordinates are the sequence components (v_1, v_2, \dots) themselves, and the series converges in the \ell^2 norm if and only if \sum_{n=1}^\infty |v_n|^2 < \infty. For instance, consider v = \sum_{n=1}^\infty \frac{1}{n^2} e_n; here, the coordinates are v_n = \frac{1}{n^2} for n \geq 1, and v \in \ell^2 since \sum_{n=1}^\infty \left| \frac{1}{n^2} \right|^2 = \sum_{n=1}^\infty \frac{1}{n^4} = \frac{\pi^4}{90} < \infty, with \|v\|^2 = \frac{\pi^4}{90}. The coefficients satisfy Parseval's identity, \|v\|^2 = \sum_{n=1}^\infty |v_n|^2, ensuring the expansion captures the full norm of v. Changes of basis in \ell^2 preserve the Hilbert space structure when induced by unitary operators, which map orthonormal bases to orthonormal bases. For example, on \ell^2(\mathbb{Z}) (the bi-infinite analog), the discrete-time Fourier transform acts as a unitary operator from \ell^2(\mathbb{Z}) to L^2([0,1]), mapping the standard basis \{e_n\}_{n \in \mathbb{Z}} to the exponential orthonormal basis \{e^{2\pi i n t}\}_{n \in \mathbb{Z}}. The uniqueness of the coordinate expansion in \ell^2 mirrors the finite-dimensional case but requires the additional condition that the coordinate sequence lies in \ell^2 for convergence, as guaranteed by the completeness of the space \ell^2.
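A small numerical check of this example (a sketch, assuming NumPy): truncating the coordinate sequence at N terms gives partial sums of \sum |v_n|^2 that converge to \pi^4/90.

```python
import numpy as np

N = 100_000
coords = 1.0 / np.arange(1, N + 1) ** 2       # v_n = 1 / n^2
partial_norm_sq = np.sum(coords ** 2)         # sum_{n <= N} |v_n|^2

print(partial_norm_sq)                        # ~1.0823232337
print(np.pi ** 4 / 90)                        # 1.0823232337...
```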
