
Transpose

In linear algebra, the transpose of a matrix A, denoted A^T, is the matrix obtained by interchanging the rows and columns of A, effectively reflecting the matrix over its main diagonal. For an m \times n matrix A = (a_{ij}), the transpose A^T is an n \times m matrix where each entry (A^T)_{ji} = a_{ij}. This operation preserves the structure of linear transformations and is fundamental to concepts such as inner products and norms in vector spaces.

The transpose exhibits several key properties that underpin its utility in both theory and applications. Notably, the transpose of a transpose recovers the original matrix: (A^T)^T = A. It interacts predictably with other matrix operations; for instance, the transpose of a product is the product of the transposes in reverse order: (AB)^T = B^T A^T. Additionally, the transpose of a sum is the sum of the transposes: (A + B)^T = A^T + B^T. These properties ensure that the transpose operation is an involution and a linear map on the space of matrices.

In the context of real matrices, the transpose is sufficient for many applications, but for complex matrices, the conjugate transpose (or Hermitian transpose), denoted A^* or A^H, extends the concept by also taking the complex conjugate of each entry: (A^*)_{ji} = \overline{a_{ij}}. This variant is crucial in areas like quantum mechanics and functional analysis, where it defines the self-adjoint operators corresponding to observable quantities. Symmetric matrices, which satisfy A = A^T, and orthogonal matrices, satisfying A^T A = I, exemplify the transpose's role in preserving symmetries and isometries. Beyond pure mathematics, the transpose finds applications in machine learning for efficient data representation, such as in algorithms where it facilitates operations on feature vectors, and in physics for formulating tensor equations. Its computational implementation is straightforward, often requiring O(mn) time for an m \times n matrix, making it a basic primitive in numerical libraries like NumPy.

Transpose of a Matrix

Definition

In linear algebra, the transpose of a matrix is an operation that interchanges its rows and columns, effectively reflecting the matrix over its main diagonal. For an m \times n matrix A, the transpose, denoted A^T, is the n \times m matrix obtained by interchanging the rows of A with its columns, preserving the total number of elements while altering the dimensions unless A is square. The entries of the transpose are defined such that the entry in the i-th row and j-th column of A^T equals the entry in the j-th row and i-th column of A: (A^T)_{ij} = A_{ji} for all indices i = 1, \dots, n and j = 1, \dots, m. This notation using the superscript T is standard and applies to both square matrices, where A and A^T share the same dimensions, and rectangular matrices, where transposition changes the shape from m \times n to n \times m. This matrix-specific operation generalizes to the transpose of linear maps between vector spaces, providing a foundation for more abstract algebraic structures.
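
The entry-wise definition translates directly into code. The following pure-Python sketch (the helper name `transpose` is illustrative, not a library routine) builds A^T from a list-of-rows representation:

```python
def transpose(A):
    """Return the transpose of a matrix given as a list of rows.

    For an m x n matrix A, the result B is n x m with B[i][j] = A[j][i],
    matching the definition (A^T)_{ij} = A_{ji}.
    """
    m, n = len(A), len(A[0])
    return [[A[j][i] for j in range(m)] for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]           # a 2 x 3 matrix
print(transpose(A))       # [[1, 4], [2, 5], [3, 6]], a 3 x 2 matrix
```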

Examples

The transpose operation can be illustrated through straightforward examples that demonstrate how rows and columns are interchanged, providing intuition for its effect on various matrix types. Consider a row vector, which is a 1×3 matrix \begin{pmatrix} 1 & 2 & 3 \end{pmatrix}. Its transpose is the column vector \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, effectively switching the single row into a single column.

For a square 2×2 matrix A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, the transpose A^T is obtained by interchanging the off-diagonal elements, yielding A^T = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}. This example highlights how the element in position (1,2) moves to (2,1) and vice versa.

A non-square matrix further clarifies the dimensional swap. Take the 2×3 matrix B = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}; its transpose B^T becomes the 3×2 matrix \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}, where the first row of B forms the first column of B^T, the second row forms the second column, and so on.

Special cases include the zero matrix and the identity matrix. The transpose of any zero matrix, such as the 2×2 zero matrix \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, remains the zero matrix itself, as all entries are unchanged under row-column interchange. Similarly, the 2×2 identity matrix I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} is symmetric, so its transpose I^T = I.

Visually, the transpose reflects the matrix over its main diagonal, transforming rows into columns and columns into rows; for instance, in the 2×2 example above, the structure pivots such that horizontal elements become vertical and vice versa.
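
These examples can be checked numerically. A brief sketch using NumPy (assuming it is installed) reproduces the row-vector, square, rectangular, and identity cases:

```python
import numpy as np

row = np.array([[1, 2, 3]])              # 1 x 3 row vector
A   = np.array([[1, 2], [3, 4]])         # 2 x 2 square matrix
B   = np.array([[1, 2, 3], [4, 5, 6]])   # 2 x 3 rectangular matrix

print(row.T)    # column vector of shape (3, 1)
print(A.T)      # [[1 3], [2 4]]: off-diagonal entries swapped
print(B.T)      # shape (3, 2): rows of B become columns of B^T
print(np.array_equal(np.eye(2).T, np.eye(2)))  # True: the identity is symmetric
```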

Properties

The transpose operation exhibits several fundamental algebraic properties that underpin its utility in linear algebra. One key property is that the transpose is an involution, meaning applying it twice returns the original matrix: for any matrix A, (A^T)^T = A. This follows directly from the definition in terms of entries; if A = (a_{ij}), then (A^T)_{ij} = a_{ji}, so ((A^T)^T)_{ij} = (A^T)_{ji} = a_{ij}.

The transpose is also linear with respect to matrix addition and scalar multiplication. Specifically, for matrices A and B of compatible dimensions and scalars a, b, (aA + bB)^T = aA^T + bB^T. To see this, consider the entry-wise definition: the (i,j)-entry of the left side is a a_{ji} + b b_{ji}, while the right side is a (A^T)_{ij} + b (B^T)_{ij} = a a_{ji} + b b_{ji}, matching by the definitions of transpose, matrix addition, and scalar multiplication.

A square matrix A is symmetric if A = A^T, which characterizes matrices equal to their own transpose. This property identifies a class of matrices invariant under transposition, with the diagonal elements unchanged and off-diagonal elements mirrored across the main diagonal.

For square matrices, the trace (the sum of the diagonal elements) is invariant under transposition: \operatorname{tr}(A^T) = \operatorname{tr}(A). This holds because the diagonal entries of A^T are the same as those of A, as (A^T)_{ii} = a_{ii} for each i.

The determinant of a square matrix is likewise preserved: \det(A^T) = \det(A). A proof using the Leibniz formula shows that the determinant is a sum over permutations weighted by their signs, and transposing the matrix replaces each permutation by its inverse, which has the same sign, so the overall value is unchanged. Alternatively, since any invertible matrix is a product of elementary matrices, each satisfying \det(E^T) = \det(E), and transposition reverses their order without altering the product of determinants, the equality follows.

Finally, the rank of a matrix equals the rank of its transpose: \operatorname{rank}(A^T) = \operatorname{rank}(A). This is because the column space of A has the same dimension as the row space of A, and the column space of A^T is precisely the row space of A; thus, the dimensions match.
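
Each of these identities can be verified numerically on random matrices. The following NumPy sketch checks the involution, linearity, trace, determinant, and rank properties (floating-point comparisons use tolerances via allclose/isclose):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
a, b = 2.0, -3.0

print(np.allclose(A.T.T, A))                              # involution: (A^T)^T = A
print(np.allclose((a*A + b*B).T, a*A.T + b*B.T))          # linearity
print(np.isclose(np.trace(A.T), np.trace(A)))             # trace invariance
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))   # determinant invariance
print(np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A))  # rank equality
```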

Transpose of Products

One fundamental property of the matrix transpose concerns its interaction with matrix multiplication: for matrices A (of size m \times n) and B (of size n \times p) that are compatible for multiplication, the transpose of their product satisfies (AB)^T = B^T A^T. Transposition thus reverses the order of the factors, reflecting how the rows and columns are interchanged in the multiplication process. To establish this using the definition of matrix multiplication and transpose, consider the (i,j)-entry of (AB)^T, which equals the (j,i)-entry of AB: [(AB)^T]_{ij} = [AB]_{ji} = \sum_{k=1}^n a_{jk} b_{ki}. The corresponding entry of B^T A^T is [B^T A^T]_{ij} = \sum_{k=1}^n [B^T]_{ik} [A^T]_{kj} = \sum_{k=1}^n b_{ki} a_{jk}, which matches after reindexing, confirming the equality.

This property extends to products of more than two matrices by repeated application: for compatible matrices A, B, and C, (ABC)^T = C^T B^T A^T. In general, the transpose of a product of k matrices reverses the order of the transposed factors.

For invertible matrices, the transpose interacts with inversion as follows: if A is invertible, then (A^{-1})^T = (A^T)^{-1}. This follows from transposing both sides of A A^{-1} = I, yielding (A^{-1})^T A^T = I^T = I; the resulting matrix is often written A^{-T}, the inverse transpose.

A special case arises with vectors, where the dot product x^T y (a scalar) equals y^T x, since the transpose of a scalar is itself and the order reverses under the product rule. In applications such as quadratic forms, the expression x^T A x is a scalar satisfying (x^T A x)^T = x^T A^T x, so when A is symmetric (A = A^T) the form is unchanged by transposition. This symmetry ensures the form is well-defined for real vectors and underpins its use in optimization and physics.
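
A short NumPy sketch illustrates the reversal rule, the inverse-transpose identity, and the symmetry of the dot product (the shift by 3I below is only a convenience to keep the example matrix safely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
print(np.allclose((A @ B).T, B.T @ A.T))      # (AB)^T = B^T A^T

C = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # shifted to be invertible
print(np.allclose(np.linalg.inv(C).T,
                  np.linalg.inv(C.T)))        # (C^{-1})^T = (C^T)^{-1}

x, y = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(x @ y, y @ x))               # x^T y = y^T x
```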

Computer Implementation

Computing the transpose of a matrix in programming environments involves algorithms that rearrange elements while considering efficiency, storage formats, and hardware constraints. For square matrices of size n \times n, an in-place transposition algorithm swaps elements across the main diagonal, iterating over the upper triangle and exchanging A_{ij} with A_{ji} for i < j, achieving O(n^2) time with approximately n^2/2 swaps and O(1) extra space. This approach avoids allocating additional memory but requires careful indexing, as the transposition corresponds to a permutation of indices. For non-square matrices of size m \times n, transposition typically requires an out-of-place approach, creating a new array and copying elements by swapping indices, such that the new matrix satisfies B_{ji} = A_{ij} for all i, j, resulting in O(mn) time and O(mn) space. In-place methods for rectangular matrices are more complex, often involving block decompositions or temporary storage to resolve overlapping cycles, but they are less common due to the space asymmetry.

A key challenge in matrix transposition is cache efficiency. In row-major storage formats, common in languages like C and Python, accessing columns during transposition leads to poor spatial locality and frequent cache misses; in column-major formats, such as Fortran, row access suffers similarly. Optimized algorithms mitigate this by using blocking or tiling to improve data reuse, transposing sub-blocks that fit in cache to achieve near-optimal bandwidth.

Major numerical libraries provide built-in functions for transposition. In NumPy, np.transpose(a) returns a view with permuted axes for multidimensional arrays, equivalent to the matrix transpose in the 2D case, without copying data unless necessary. MATLAB's transpose operator A.' computes the nonconjugate transpose, interchanging rows and columns while preserving complex elements. The BLAS standard lacks a dedicated transposition routine but supports transpose operations via flags in level-3 routines such as GEMM (e.g., TRANS='T' for transposed input), with implementations often using optimized copies for explicit transposes.

Special cases require tailored approaches. For sparse matrices in compressed formats like CSR, transposition involves swapping row and column indices and resorting the index arrays, effectively converting CSR to CSC, in O(NNZ + m + n) time, where NNZ is the number of nonzeros. On GPUs, transposition kernels use tiling in shared memory to coalesce global memory accesses, achieving up to 80-90% of peak bandwidth for large matrices by avoiding bank conflicts and warp divergence.

Historically, early implementations in the 1960s-1970s stored matrices in column-major order, making transposes straightforward via index swaps but prompting the development of BLAS in the 1970s for portable, efficient linear algebra; the level-3 routines introduced in the late 1980s incorporated transpose handling directly in operations such as matrix multiplication.
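
The in-place and blocked strategies described above can be sketched briefly. The following Python/NumPy code is an illustrative outline under the stated assumptions, not a tuned library implementation (the block size of 64 is an arbitrary choice):

```python
import numpy as np

def transpose_inplace_square(A):
    """In-place transpose of a square n x n array: swap entries across the
    main diagonal, visiting only the upper triangle (about n^2/2 swaps,
    O(1) extra space)."""
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j], A[j, i] = A[j, i], A[i, j]

def transpose_blocked(A, block=64):
    """Out-of-place transpose of an m x n array using square tiles, so each
    tile of A and of the result fits in cache (better spatial locality than
    a naive column-by-column walk). NumPy slicing clips at array bounds, so
    edge tiles are handled automatically."""
    m, n = A.shape
    B = np.empty((n, m), dtype=A.dtype)
    for i in range(0, m, block):
        for j in range(0, n, block):
            B[j:j+block, i:i+block] = A[i:i+block, j:j+block].T
    return B

M = np.arange(12.0).reshape(3, 4)
print(np.allclose(transpose_blocked(M, block=2), M.T))   # True

S = np.arange(9.0).reshape(3, 3)
transpose_inplace_square(S)
print(np.allclose(S, np.arange(9.0).reshape(3, 3).T))    # True
```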

Transpose in Linear Algebra

Transpose of Linear Maps

In the context of linear algebra over vector spaces, the transpose of a linear map provides a natural extension of the matrix transpose concept to mappings between arbitrary vector spaces, leveraging the structure of dual spaces. Given vector spaces V and W over a field F, and a linear map T: V \to W, the transpose (or dual map) T^*: W^* \to V^* is defined by the relation \langle T^* \phi, v \rangle = \langle \phi, T v \rangle for all linear functionals \phi \in W^* and vectors v \in V, where \langle \cdot, \cdot \rangle denotes the duality pairing between a space and its dual. This construction ensures that T^* is the unique linear map that preserves the bilinear pairings induced by T, making the definition independent of any choice of bases and applicable to both finite- and infinite-dimensional settings.

In the finite-dimensional case, suppose \dim V = n and \dim W = m. Selecting bases \{v_1, \dots, v_n\} for V and \{w_1, \dots, w_m\} for W, the matrix of T with respect to these bases has entries determined by the coordinates of T v_j in the w-basis. The matrix of T^* with respect to the corresponding dual bases \{v^1, \dots, v^n\} for V^* (where v^i(v_j) = \delta_{ij}) and \{w^1, \dots, w^m\} for W^* is precisely the transpose of the matrix of T. This correspondence highlights how the transpose operation on matrices arises as a special instance of this more general construction when V = F^n and W = F^m with the standard bases. Moreover, properties such as \operatorname{rank}(T) = \operatorname{rank}(T^*) hold, reflecting the preservation of kernel and image dimensions under duality.

A concrete example arises in the space of polynomials. Consider the differentiation operator D: \mathcal{P}(\mathbb{R}) \to \mathcal{P}(\mathbb{R}), where \mathcal{P}(\mathbb{R}) is the vector space of all polynomials with real coefficients and D p = p'. The transpose D^*: \mathcal{P}(\mathbb{R})^* \to \mathcal{P}(\mathbb{R})^* acts on functionals; for instance, if \phi(p) = p(a) is evaluation at a point a, then D^* \phi (p) = p'(a), while for the functional \phi(p) = \int_0^1 p(t) \, dt, we have D^* \phi (p) = p(1) - p(0). In a suitable basis for the dual space, such as the monomial basis adjusted by factorials (e.g., \{x^k / k!\}), the action of D^* corresponds to multiplication by x up to scaling factors, demonstrating how the transpose shifts the "degree" in the dual representation.

For advanced readers familiar with category theory, the assignment T \mapsto T^* exemplifies a contravariant functor from the category of vector spaces over F (with linear maps as morphisms) to itself, reversing arrows while preserving the duality structure.
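
The finite-dimensional correspondence can be made concrete for this example: restricting D to polynomials of degree at most 3 (a space D maps to itself), its matrix in the monomial basis \{1, x, x^2, x^3\} can be written down, and the matrix of D^* in the dual basis is then its transpose. A small NumPy sketch, with the dimension 4 hard-coded for illustration:

```python
import numpy as np

# Matrix of differentiation D on polynomials of degree <= 3, in the
# monomial basis {1, x, x^2, x^3}: D x^k = k x^(k-1), so column k has
# entry k in row k-1.
D = np.zeros((4, 4))
for k in range(1, 4):
    D[k - 1, k] = k

p = np.array([5.0, 3.0, 0.0, 2.0])   # p(x) = 5 + 3x + 2x^3
print(D @ p)                          # [3, 0, 6, 0]: p'(x) = 3 + 6x^2

# In the corresponding dual basis, the matrix of the transpose map D* is D.T.
print(D.T)
```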

Transpose of Bilinear Forms

In linear algebra, a bilinear form B: V \times W \to F on vector spaces V and W over a field F is a map that is linear in each argument separately. The transpose of B, denoted B^*, is the bilinear form B^*: W \times V \to F defined by B^*(w, v) = B(v, w) for all v \in V and w \in W.

When V and W are finite-dimensional, choose bases \{e_i\} for V and \{f_j\} for W. The matrix representation of B is the matrix A with entries A_{ij} = B(e_i, f_j), so that B(v, w) = [v]^T A [w] in coordinates, where [v] and [w] denote coordinate vectors. The transpose B^* then has matrix representation A^T, since B^*(w, v) = [w]^T A^T [v].

If V = W, then B is called symmetric if B = B^*, or equivalently, B(v, w) = B(w, v) for all v, w \in V. In this case, the matrix A satisfies A = A^T. Symmetric bilinear forms on V are fundamental in the study of quadratic forms. For such a B, the associated quadratic form is Q(v) = B(v, v) for v \in V. In fields of characteristic not 2, every quadratic form Q determines a unique symmetric bilinear form via the polarization identity B(v, w) = \frac{1}{4} \bigl( Q(v + w) - Q(v - w) \bigr).

A canonical example is the standard dot product on \mathbb{R}^n, given by B(v, w) = v^T w. This is symmetric since B(v, w) = B(w, v), so B^* = B, with matrix A = I_n. The associated quadratic form is Q(v) = \|v\|^2.
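A numerical sketch of these definitions (NumPy, with a randomly chosen matrix standing in for the form's matrix A) checks the transpose rule and the polarization identity for the symmetric part of A:

```python
import numpy as np

rng = np.random.default_rng(1)
A  = rng.standard_normal((3, 3))      # matrix of a bilinear form B on R^3
B  = lambda v, w: v @ A @ w           # B(v, w)  = [v]^T A [w]
Bs = lambda w, v: w @ A.T @ v         # B*(w, v) = [w]^T A^T [v]

v, w = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(B(v, w), Bs(w, v)))  # True: B*(w, v) = B(v, w)

# Polarization identity for the symmetric part S = (A + A^T)/2:
S = (A + A.T) / 2
Q = lambda u: u @ S @ u               # associated quadratic form
print(np.isclose(v @ S @ w, (Q(v + w) - Q(v - w)) / 4))  # True
```
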
Adjoint Operators

In Hilbert spaces, the adjoint operator provides a generalization of the transpose that accounts for the inner product structure and complex conjugation. For a bounded linear operator T: H \to H on a Hilbert space H, the adjoint T^* is the unique operator satisfying \langle T u, v \rangle = \langle u, T^* v \rangle for all u, v \in H, where \langle \cdot, \cdot \rangle denotes the inner product. Over real Hilbert spaces, the adjoint coincides with the transpose of the operator when represented in an orthonormal basis. In complex Hilbert spaces, the adjoint corresponds to the conjugate transpose, denoted A^* = \overline{A}^T, which involves transposing the matrix representation and taking the complex conjugate of each entry.

A self-adjoint operator satisfies T = T^*, and in the finite-dimensional case, these correspond to Hermitian matrices, which have exclusively real eigenvalues. Unitary operators, defined by U^* U = I where I is the identity, preserve the inner product and thus the norm of vectors, i.e., \|U x\| = \|x\| for all x \in H.

The spectral theorem states that every self-adjoint operator on a separable Hilbert space is unitarily equivalent to a multiplication operator: there exists a unitary operator U and multiplication M_\lambda by a real-valued function such that T = U M_\lambda U^*; when the spectrum is discrete, this yields an orthonormal basis of eigenvectors. In infinite-dimensional settings, such as quantum mechanics, the momentum operator p = -i \hbar \frac{d}{dx} on L^2(\mathbb{R}) is essentially self-adjoint when defined on the dense domain of smooth functions with compact support, ensuring a real spectrum and observables consistent with physical measurements.
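
In the finite-dimensional case, these statements can be checked directly with NumPy: building a Hermitian matrix, confirming it equals its conjugate transpose, and recovering the unitary diagonalization of the spectral theorem (eigh is NumPy's eigensolver for Hermitian matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2              # Hermitian: H equals its conjugate transpose

print(np.allclose(H, H.conj().T))                   # True: H = H*
print(np.allclose(np.linalg.eigvalsh(H).imag, 0))   # eigenvalues are real

# Unitary diagonalization H = U diag(w) U* (finite-dimensional spectral theorem)
w, U = np.linalg.eigh(H)
print(np.allclose(U @ np.diag(w) @ U.conj().T, H))  # True
print(np.allclose(U.conj().T @ U, np.eye(4)))       # True: U is unitary
```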

References

  1. [1]
    3.1: The Matrix Transpose - Mathematics LibreTexts
    Sep 17, 2022 · The transpose of a matrix is an operator that flips a matrix over its diagonal. Transposing a matrix essentially switches the row and column ...
  2. [2]
    transpose - PlanetMath.org
    Mar 22, 2013 · The transpose of a matrix A is the matrix formed by “flipping” A about the diagonal line from the upper left corner.
  3. [3]
    2.5: The Transpose - Mathematics LibreTexts
    Sep 16, 2022 · For a matrix A, we denote the transpose of A by A^T. Before formally defining the transpose, we explore this operation on the following matrix ...
  4. [4]
    Conjugate Transpose -- from Wolfram MathWorld
    The conjugate transpose of an m×n matrix A is the n×m matrix defined by where A^(T) denotes the transpose of the matrix A and A^_ denotes the conjugate matrix.
  5. [5]
    Hermitian Matrix -- from Wolfram MathWorld
    A square matrix is called Hermitian if it is self-adjoint. Therefore, a Hermitian matrix is defined as one for which. (1) where denotes the conjugate transpose.
  6. [6]
    Symmetric Matrix -- from Wolfram MathWorld
    A symmetric matrix is a square matrix that satisfies A^(T)=A, (1) where A^(T) denotes the transpose, so a_(ij)=a_(ji). This also implies A^(-1)A^(T)=I, ...
  7. [7]
    Orthogonal Matrix -- from Wolfram MathWorld
    A n×n matrix A is an orthogonal matrix if AA^(T)=I, where A^(T) is the transpose of A and I is the identity matrix.
  8. [8]
    6.2: Orthogonal Complements and the Matrix Tranpose
    Jun 18, 2024 · The transpose is a simple algebraic operation performed on a matrix. The next activity explores some of its properties. Activity 6.2.4. In Sage, ...
  10. [10]
    The transpose of a matrix - Math Insight
    The transpose of a matrix is simply a flipped version of the original matrix. We can transpose a matrix by switching its rows with its columns.
  11. [11]
    Transpose of a matrix - an overview | ScienceDirect Topics
    The transpose of a matrix is defined as a new matrix obtained by turning its rows into columns. For example, if a matrix is of size m×n, its transpose will be ...
  12. [12]
    [PDF] Transpose & Dot Product Extended Example
    The transpose of an m x n matrix A is the n x m matrix A^T, where the columns of A^T are the rows of A. For example, if A = [1 2 3; 4 5 6], then A^T = [1 4; 2 5; 3 ...
  13. [13]
    [PDF] ICS 6N Computational Linear Algebra Matrix Algebra
    Feb 2, 2017 · The zero matrix is a matrix in which all entries are zero, written as 0. ... Given an mxn matrix A, the transpose of A is the nxm matrix.
  14. [14]
    [PDF] Math 2270 - Lecture 11: Transposes and Permutations
    The transpose of a matrix is the matrix you get when you switch the rows and the columns. For example, ...
  15. [15]
    [PDF] Interactive Linear Algebra
    Solve systems of linear equations using matrices, row reduction, and inverses. • Analyze systems of linear equations geometrically using the geometry of ...
  16. [16]
    [PDF] Linear Algebra - UC Davis Mathematics
    In broad terms, vectors are things you can add and linear functions are functions of vectors that respect vector addition. The goal of this text is to.
  17. [17]
    [PDF] Properties of Matrix Arithmetic
    Suppose A and B are matrices which are compatible for multiplication. Then. (AB)T = BT AT . Proof. I'll derive this using the matrix multiplication formula. (AB) ...
  18. [18]
    [PDF] the proofs for property (5) of matrix - Purdue Math
    Prove (AB)^T = B^T A^T. Here A is an m × n matrix and B is of size n × p. Proof. Let C = [c_ij] = (AB)^T. Then c_ij = (AB)_ji = \sum_{k=1}^n a_jk b_ki ...
  19. [19]
    [PDF] The Matrix Cookbook
    Nov 15, 2012 · If A is positive definite and B is symmetric, then A−tB is positive definite for sufficiently small t. 9.6.13 Hadamard inequality. If A is a ...
  20. [20]
    [PDF] Homework 7 - Linear algebra II Spring 2013 - OSU Math
    (a) Prove that (AB)t = BtAt and that Att = A. (b) Prove that if A is invertible then (A−1)t = (At)−1. Exercise 9. Prove that the inverse of an invertible ...
  21. [21]
    PfHP The dot product (inner product) - UT Computer Science
    The notation x^T y comes from the fact that the dot product also equals the result of multiplying the 1×n matrix x^T times the n×1 matrix y.
  22. [22]
    [PDF] Quadratic Forms Q(x) = xTAx where A is symmetric. - Example
    A symmetric matrix A is indefinite if and only if x^T A x has both positive and negative values, if and only if A has both positive and negative eigenvalues. A ...
  23. [23]
    [PDF] A Decomposition for In-place Matrix Transposition
    Feb 15, 2014 · In this paper, we show how the in-place transposition problem can be decomposed into independent row-wise and column-wise permutations. By ...
  24. [24]
    [PDF] Cache-Efficient Matrix Transposition - CSE, IIT Delhi
    There are two differences between this algorithm and the cache-oblivious one. First, the layout function of the matrix is Morton-ordered rather than row-major.
  25. [25]
    [PDF] Efficient Out-of-core and Out-of-place Rectangular Matrix ... - Hal-Inria
    Oct 8, 2020 · In this paper, we propose an efficient solution to perform in-memory or out-of-core rectangular matrix transposition and rotation by using an ...
  27. [27]
    Transpose vector or matrix - MATLAB - MathWorks
    This MATLAB function returns the nonconjugate transpose of A, that is, interchanges the row and column index for each element.
  28. [28]
    [PDF] Basic Linear Algebra Subprograms Technical (BLAST) Forum ...
    Aug 21, 2001 · The BLAST forum standard is a Basic Linear Algebra Subprograms Technical standard. It includes an introduction, motivation, and organization of ...
  29. [29]
    An Efficient Matrix Transpose in CUDA C/C++ | NVIDIA Technical Blog
    Feb 18, 2013 · I will optimize a matrix transpose to show how to use shared memory to reorder strided global memory accesses into coalesced accesses.
  30. [30]
    [PDF] basic linear algebra subprograms for fortran usage
    The BLAS package includes subprograms for dot products, elementary vector operations, Givens transformations, vector copy/swap, vector norms, scaling, and ...
  31. [31]
    [PDF] Duality, Bilinearity
    Pitfall: For a bilinear mapping B whose codomain is not R, the linear mapping corresponding to the switch B∼ is not the same as the transpose B⊤ of the linear ...
  32. [32]
    [PDF] Further linear algebra. Chapter V. Bilinear and quadratic forms.
    Definition 1.1 Let V be a vector space over k. A bilinear form on V is a function f : V × V → k such that. • f(u + λv, w) = f(u, w) + λf(v, w);.
  33. [33]
    [PDF] BILINEAR FORMS The geometry of Rn is controlled algebraically by ...
    A linear transformation L: V → W between two finite-dimensional vector spaces over F can be written as a matrix once we pick (ordered) bases for V and W. When ...
  34. [34]
    [PDF] functional analysis lecture notes: adjoints in hilbert spaces
    A complex n × n matrix A is self-adjoint if and only if it is Hermitian, i.e., if. A = AH. Exercise 1.13. Show that every self-adjoint operator is normal. Show ...
  35. [35]
    Hermitian Adjoint - BOOKS
    If all the elements of a matrix are real, its Hermitian adjoint and transpose are the same. In terms of components, (Aij)†=A∗ji.
  36. [36]
    [PDF] Notes on Chapter 5 1. If H is Hermitian (H = H ∗), all eigenvalues of ...
    If H is Hermitian (H = H∗), all eigenvalues of H are real. Proof: if Hz = λz, then λz∗z = z∗(λz) = z∗(Hz)=(H∗z)∗z = (Hz)∗z = (λz)∗z = ¯λz∗z.
  37. [37]
    [PDF] Spectral theory in Hilbert spaces (ETH Zürich, FS 09) E. Kowalski
    We can now prove Theorem 3.1 for self-adjoint operators. Theorem 3.17 (Spectral theorem for self-adjoint operators). Let H be a separable. Hilbert space and ...
  38. [38]
    [PDF] Lecture 1: Review of Quantum Mechanics
    Jan 1, 2020 · Every classical observable A has a QM counterpart CA that is a. Hermitian (self-adjoint) operator. Before discussing QM we discuss operators.