
Transpose of a linear map

In linear algebra, the transpose of a linear map, also referred to as the dual map or algebraic adjoint, is a construction that assigns to each linear transformation T: V \to W between vector spaces over a field K a corresponding linear transformation T^t: W^* \to V^* between their dual spaces, defined by (T^t \phi)(v) = \phi(T v) for all \phi \in W^* and v \in V, where V^* and W^* denote the spaces of linear functionals on V and W, respectively. The map T^t is itself linear: for any \phi_1, \phi_2 \in W^* and scalar c \in K, T^t(\phi_1 + c \phi_2) = T^t \phi_1 + c T^t \phi_2.

Key properties of the transpose highlight its role in preserving dimensions and relating kernels and images across dual spaces. Specifically, the null space of T^t is the annihilator of the range of T, denoted \ker T^t = (\operatorname{im} T)^0, and the range of T^t is the annihilator of the kernel of T, so \operatorname{im} T^t = (\ker T)^0. As a consequence, the rank of T equals the rank of T^t, i.e., \operatorname{rank} T = \operatorname{rank} T^t, which extends the well-known equality of row and column ranks for matrices. Additionally, the transpose reverses composition: for linear maps S: U \to V and T: V \to W, (T \circ S)^t = S^t \circ T^t, and it commutes with addition and scalar multiplication in an analogous manner.

When V and W are finite-dimensional, the transpose corresponds directly to the matrix transpose under a choice of bases. If T has matrix A with respect to bases for V and W, then T^t has matrix A^t (the transpose of A) with respect to the corresponding dual bases for W^* and V^*. This connection is fundamental in applications such as solving systems of linear equations, where the transpose arises in least-squares problems and orthogonal projections, and in more abstract settings such as functional analysis and category theory. In the context of inner product spaces, the transpose coincides with the adjoint operator when identifying a space with its dual via the inner product, though the pure transpose does not require an inner product structure.
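
A minimal numerical sketch (using numpy, with functionals on \mathbb{R}^n represented as arrays) can verify the defining identity (T^t \phi)(v) = \phi(T v) and the rank equality when T is realized by a matrix A:

```python
# Sketch: the dual map of a matrix A is realized by A.T, and the defining
# identity (T^t phi)(v) = phi(T v) becomes an associativity statement.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))      # T: R^4 -> R^3
v = rng.standard_normal(4)           # vector in V = R^4
phi = rng.standard_normal(3)         # functional on W = R^3

lhs = (A.T @ phi) @ v                # (T^t phi)(v): A.T maps W* -> V*
rhs = phi @ (A @ v)                  # phi(T v)
assert np.isclose(lhs, rhs)

# rank T = rank T^t, matching the row-rank/column-rank equality
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
```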

Definition and Foundations

Dual Spaces

The dual space of a vector space V over a field F, denoted V^*, is the set of all linear functionals on V, that is, all linear maps \phi: V \to F. These linear functionals, also called covectors, assign to each vector in V an element of the field F in a linear manner. The dual space V^* itself forms a vector space over F, with pointwise addition of functionals defined by (\phi + \psi)(v) = \phi(v) + \psi(v) for \phi, \psi \in V^* and v \in V, and scalar multiplication defined by (\alpha \phi)(v) = \alpha \phi(v) for \alpha \in F. The zero vector in V^* is the zero functional, which maps every vector in V to 0 \in F. By construction, V^* = \mathrm{Hom}_F(V, F) parametrizes all linear maps from V to F.

A key aspect is the evaluation map, which associates to each v \in V the functional \mathrm{ev}_v \in (V^*)^* defined by \mathrm{ev}_v(\phi) = \phi(v) for \phi \in V^*; for finite-dimensional V, this induces a natural isomorphism V \cong (V^*)^*. While the algebraic dual V^* is defined for any vector space without additional structure, the topological dual consists of continuous linear functionals and forms a subspace of V^* when V is equipped with a topology; the algebraic dual is the focus here unless otherwise specified. For example, if V = F^n, then V^* is isomorphic to the space of row vectors of length n over F, where each functional acts via the dot product: \phi(x_1, \dots, x_n) = a_1 x_1 + \dots + a_n x_n for coefficients a_i \in F.
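
A small numpy sketch, assuming V = \mathbb{R}^3 with the standard basis, illustrates a functional acting via the dot product and the dual-basis relation e_i^*(e_j) = \delta_{ij}:

```python
# Sketch: a functional phi on R^3 with coefficients a acts by the dot product,
# and the rows of the identity matrix serve as the dual basis.
import numpy as np

a = np.array([2.0, -1.0, 0.5])        # coefficients a_i defining phi
x = np.array([1.0, 3.0, 4.0])
phi_x = a @ x                         # phi(x) = a_1 x_1 + a_2 x_2 + a_3 x_3
assert np.isclose(phi_x, 2*1 - 1*3 + 0.5*4)

E = np.eye(3)                         # columns: basis e_j; rows: dual basis e_i*
# pairing every dual-basis row with every basis column gives delta_ij
assert np.allclose(E @ E, np.eye(3))
```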

Definition of the Transpose Map

Let V and W be vector spaces over a field F, and let T: V \to W be a linear map. The transpose (or dual map) of T, denoted T^*: W^* \to V^*, is the linear map between the dual spaces defined by (T^* \phi)(v) = \phi(T v) for all \phi \in W^* and v \in V, where V^* = \mathrm{Hom}_F(V, F) and W^* = \mathrm{Hom}_F(W, F). To verify that T^* is linear, consider \phi, \psi \in W^* and \alpha \in F. For any v \in V, (T^*(\phi + \psi))(v) = (\phi + \psi)(T v) = \phi(T v) + \psi(T v) = (T^* \phi)(v) + (T^* \psi)(v), and (T^*(\alpha \phi))(v) = (\alpha \phi)(T v) = \alpha \, \phi(T v) = \alpha (T^* \phi)(v). Thus, T^* preserves addition and scalar multiplication, confirming its linearity. The notation T^* is standard in many treatments, though alternatives such as T^t or T^\vee appear in some sources. This construction generalizes to the setting of modules over a commutative ring R: for an R-linear map T: V \to W between R-modules, the transpose T^\vee: W^\vee \to V^\vee is defined by pre-composition, T^\vee(\phi) = \phi \circ T for \phi \in W^\vee = \mathrm{Hom}_R(W, R). However, the primary focus here is on vector spaces. As an example, consider finite-dimensional spaces V = F^m and W = F^n, where elements of V^* and W^* may be identified with row vectors in F^{1 \times m} and F^{1 \times n}, respectively. Under this identification, T^* maps row vectors via pre-composition with T.
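
The pre-composition definition can be expressed directly with Python callables; in this sketch the map T and the helper transpose are illustrative names, not a standard API:

```python
# Sketch: T*(phi) = phi . T as literal function composition.
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])   # T: R^2 -> R^3

def T(v):
    return A @ v

def transpose(phi):
    # the transpose sends a functional phi on W to phi composed with T
    return lambda v: phi(T(v))

phi = lambda w: np.array([1.0, -2.0, 0.5]) @ w         # functional on R^3
Tt_phi = transpose(phi)
v = np.array([2.0, -1.0])
assert np.isclose(Tt_phi(v), phi(A @ v))               # defining identity
```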

Algebraic Properties

Linearity and Composition

The transpose of a linear map inherits linearity from the original map. Suppose T: V \to W is a linear map between vector spaces over the field \mathbb{F}, and T^*: W^* \to V^* denotes its transpose, defined by (T^* \psi)(v) = \psi(T v) for all \psi \in W^* and v \in V. To verify additivity, consider the sum of functionals: for \psi_1, \psi_2 \in W^*, (T^*(\psi_1 + \psi_2))(v) = (\psi_1 + \psi_2)(T v) = \psi_1(T v) + \psi_2(T v) = (T^* \psi_1)(v) + (T^* \psi_2)(v) = (T^* \psi_1 + T^* \psi_2)(v), which holds for all v \in V, so T^*(\psi_1 + \psi_2) = T^* \psi_1 + T^* \psi_2. Similarly, for scalar multiplication with c \in \mathbb{F}, (T^*(c \psi))(v) = (c \psi)(T v) = c \, \psi(T v) = c (T^* \psi)(v) = (c \, T^* \psi)(v), confirming T^* is linear.

The transpose reverses the order of composition. Let S: W \to U and T: V \to W be linear maps, with transposes S^*: U^* \to W^* and T^*: W^* \to V^*. The composition rule states (S \circ T)^* = T^* \circ S^*. To prove this, evaluate the defining action on an arbitrary \phi \in U^* and v \in V: ((S \circ T)^* \phi)(v) = \phi((S \circ T) v) = \phi(S (T v)) = (S^* \phi)(T v) = (T^* (S^* \phi))(v). Thus, (S \circ T)^* \phi = T^* (S^* \phi) for all \phi, establishing the equality of maps. This reversal arises naturally from the contravariant nature of the dualization functor.

The identity map's transpose is the identity on the dual space. For the identity \mathrm{Id}_V: V \to V, its transpose satisfies (\mathrm{Id}_V^* \psi)(v) = \psi(\mathrm{Id}_V v) = \psi(v) for all \psi \in V^* and v \in V, so \mathrm{Id}_V^* = \mathrm{Id}_{V^*}. This follows directly from the definition and holds over any field.

Invertibility is preserved under transposition. If T: V \to W is invertible (i.e., bijective), then T^*: W^* \to V^* is also invertible, with inverse (T^*)^{-1} = (T^{-1})^*. To see surjectivity of T^*, take any \psi \in V^* and define \phi = \psi \circ T^{-1} \in W^*; then (T^* \phi)(v) = \phi(T v) = \psi(T^{-1}(T v)) = \psi(v). For injectivity, if T^* \phi = 0, then \phi(T v) = 0 for all v \in V, so \phi(w) = 0 for all w \in W (since T is surjective), hence \phi = 0. The inverse relation follows by composing the definitions: ((T^{-1})^* T^* \psi)(w) = (T^* \psi)(T^{-1} w) = \psi(T(T^{-1} w)) = \psi(w) for \psi \in W^* and w \in W, and similarly for the other order. This property underscores the duality between a space and its dual.
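
The composition-reversal, identity, and inverse rules all reduce to familiar matrix identities in finite dimensions, as this numpy sketch checks:

```python
# Sketch: (S.T)^t = T^t S^t, Id^t = Id, and (T^{-1})^t = (T^t)^{-1}
# for maps represented by matrices.
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((2, 3))       # S: R^3 -> R^2 (playing W -> U)
T = rng.standard_normal((3, 4))       # T: R^4 -> R^3 (playing V -> W)

assert np.allclose((S @ T).T, T.T @ S.T)        # contravariance
assert np.allclose(np.eye(4).T, np.eye(4))      # identity is fixed

M = rng.standard_normal((4, 4)) + 4 * np.eye(4) # invertible with high probability
assert np.allclose(np.linalg.inv(M).T, np.linalg.inv(M.T))
```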

Dimension and Rank Relations

In finite-dimensional vector spaces V and W over a field K, the dual spaces V^* and W^* are isomorphic to V and W, respectively, so \dim V^* = \dim V and \dim W^* = \dim W. For a linear map T: V \to W, the transpose T^*: W^* \to V^* satisfies \dim \operatorname{Im} T^* = \dim \operatorname{Im} T and \dim \ker T^* = \dim W - \dim \operatorname{Im} T. This follows from the rank-nullity theorem applied to both T and T^*: \dim V = \operatorname{rank} T + \operatorname{nullity} T and \dim W^* = \operatorname{rank} T^* + \operatorname{nullity} T^*, combined with the equality of dual dimensions and \operatorname{rank} T^* = \operatorname{rank} T, yielding \operatorname{nullity} T^* = \dim W - \operatorname{rank} T.

The equality of ranks arises from explicit relations between the kernels and images via annihilators. The annihilator of a subspace S \subseteq U is the subspace S^0 = \{ f \in U^* \mid f(s) = 0 \ \forall s \in S \}. For the transpose, \ker T^* = (\operatorname{Im} T)^0 \subseteq W^* and \operatorname{Im} T^* = (\ker T)^0 \subseteq V^*. In finite dimensions, \dim (\operatorname{Im} T)^0 = \dim W - \dim \operatorname{Im} T, so \dim \ker T^* = \dim W - \dim \operatorname{Im} T; similarly, \dim (\ker T)^0 = \dim V - \dim \ker T = \operatorname{rank} T, so \dim \operatorname{Im} T^* = \operatorname{rank} T. These annihilator identifications hold in arbitrary dimensions, providing structural relations even when dimensions are infinite, though the numerical equalities rely on finite-dimensionality.

A related relation involves the cokernel, defined for T: V \to W as \operatorname{coker} T = W / \operatorname{Im} T. Dually, \operatorname{coker} T^* = V^* / \operatorname{Im} T^* = V^* / (\ker T)^0. The restriction map V^* \to (\ker T)^* is surjective (every functional on a subspace extends to the whole space) with kernel (\ker T)^0, yielding \operatorname{coker} T^* = V^* / (\ker T)^0 \cong (\ker T)^*. In the other direction, \ker T^* = (\operatorname{Im} T)^0 \cong (W / \operatorname{Im} T)^* = (\operatorname{coker} T)^*. These dualities preserve the structure of the original map in the dual setting and hold generally, without assuming finite dimensions.

In infinite dimensions, the rank equality \operatorname{rank} T^* = \operatorname{rank} T (understood as the dimension of the image) must be interpreted via the above isomorphisms: \operatorname{Im} T^* = (\ker T)^0 \cong (V / \ker T)^* \cong (\operatorname{Im} T)^*, so the image of T^* is dual to the image of T. While dimensions may differ (e.g., \dim (\operatorname{Im} T)^* \geq \dim \operatorname{Im} T, with strict inequality possible when \dim \operatorname{Im} T is infinite), the structural equivalence via dualization maintains the rank relation in this generalized sense.

As an illustrative example, consider a nilpotent linear map T: V \to V on a finite-dimensional space, meaning T^k = 0 for some minimal index k but T^{k-1} \neq 0. The transpose T^* is also nilpotent with the same index k, since the Jordan canonical form of T^* consists of the transposed Jordan blocks of T, preserving the sizes of the nilpotent blocks and thus the nilpotency index.
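
These dimension counts can be checked numerically; the following sketch uses scipy.linalg.null_space to compute kernel dimensions for a matrix of known rank:

```python
# Sketch: dim ker T^t = dim W - rank T and rank T = rank T^t,
# for a 5x4 matrix of rank 2 built by construction.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 2))
A = B @ rng.standard_normal((2, 4))   # T: R^4 -> R^5 with rank 2

r = np.linalg.matrix_rank(A)
assert r == np.linalg.matrix_rank(A.T)       # rank T = rank T^t
assert null_space(A.T).shape[1] == 5 - r     # dim ker T^t = dim W - rank T
assert null_space(A).shape[1] == 4 - r       # rank-nullity for T itself
```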

Geometric and Set-Theoretic Properties

Polars

In the context of dual vector spaces, the polar of a subset A \subseteq V of a vector space V over a field F is defined as the set
A^0 = \{ \phi \in V^* \mid \phi(a) = 0 \ \forall \, a \in A \} \subseteq V^*,
where V^* denotes the algebraic dual space of V, consisting of all linear functionals from V to F. This construction provides a dual analogue of the orthogonal complement, capturing the functionals that vanish on every element of A. The polar A^0 is always a subspace of V^*, and it depends only on the linear span of A, since A^0 = (\operatorname{span} A)^0.
The bipolar of A, denoted (A^0)^0, is the polar of A^0 taken in the double dual (V^*)^*. Algebraically, (A^0)^0 always contains \operatorname{span} A, since every element of \operatorname{span} A, viewed as an evaluation functional, vanishes on A^0. In more general settings, such as topological vector spaces, (A^0)^0 coincides with the closure of \operatorname{span} A under suitable topologies on V. However, when V is finite-dimensional, reflexivity ensures exact recovery: (A^0)^0 = \operatorname{span} A. This follows from the natural isomorphism between V and its double dual V^{**}, which identifies elements of V with evaluation functionals on V^*. For a linear map T: V \to W between vector spaces, the polar of the image \operatorname{Im} T relates directly to the transpose (or dual map) T^*: W^* \to V^*, defined by (T^* \psi)(v) = \psi(T v) for \psi \in W^* and v \in V. Specifically, the kernel of T^* is the polar of \operatorname{Im} T:
\ker T^* = (\operatorname{Im} T)^0 = \{ \psi \in W^* \mid \psi(T v) = 0 \ \forall \, v \in V \}.
This identity highlights how the transpose encodes information about the functionals vanishing on the range of T.
In the finite-dimensional case over \mathbb{R}, with V = \mathbb{R}^n and its standard dual V^* \cong \mathbb{R}^n via the pairing \langle x, y \rangle = x^T y (identifying functionals with row vectors), the polar of a subspace U \subseteq \mathbb{R}^n is precisely its orthogonal complement U^\perp = \{ y \in \mathbb{R}^n \mid x^T y = 0 \ \forall \, x \in U \}. For instance, if U is the xy-plane in \mathbb{R}^3, spanned by (1,0,0) and (0,1,0), then U^0 = U^\perp is the z-axis, spanned by (0,0,1). This identification bridges algebraic duality with Euclidean orthogonality.
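
The xy-plane example can be reproduced with a short numpy computation, representing functionals as vectors so that the polar becomes a null-space calculation:

```python
# Sketch: the polar of the xy-plane in R^3 is the z-axis,
# computed as the null space of the matrix whose rows span the plane.
import numpy as np
from scipy.linalg import null_space

U = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])       # rows span the xy-plane
polar = null_space(U)                 # vectors annihilated by every row of U
assert polar.shape == (3, 1)          # one-dimensional polar
assert np.allclose(np.abs(polar[:, 0]), [0.0, 0.0, 1.0])  # the z-axis (up to sign)
```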

Annihilators

In the context of dual spaces, the annihilator of a subspace U \subseteq V is defined as the set \operatorname{Ann}(U) = \{ \phi \in V^* \mid \phi(u) = 0 \ \forall \, u \in U \}, which coincides with the polar U^0 when U is a subspace. This set forms a subspace of the dual space V^*, and for finite-dimensional spaces over a field, its dimension satisfies \dim \operatorname{Ann}(U) = \dim V - \dim U.

A key relation connects annihilators to quotient spaces via duality: the restriction map V^* \to U^*, given by \phi \mapsto \phi|_U, has kernel \operatorname{Ann}(U) and is surjective in the finite-dimensional case, yielding a natural isomorphism V^* / \operatorname{Ann}(U) \cong U^* by the first isomorphism theorem. Equivalently, the dual of the quotient satisfies (V/U)^* \cong \operatorname{Ann}(U), identifying functionals on the quotient with those vanishing on U.

For a linear map T: V \to W, the transpose T^*: W^* \to V^* interacts with annihilators such that the image satisfies \operatorname{Im} T^* = \operatorname{Ann}(\ker T) \subseteq V^*. The kernel satisfies \ker T^* = \operatorname{Ann}(\operatorname{Im} T); in particular, if T is surjective, then \operatorname{Ann}(\operatorname{Im} T) = \{0\} and T^* is injective. The identity \operatorname{Im} T^* = \operatorname{Ann}(\ker T) highlights how the transpose produces exactly those functionals on V that factor through T.

Dualization preserves exactness in sequences but reverses the direction of the arrows. Specifically, a short exact sequence of vector spaces 0 \to U \to V \to W \to 0 induces a short exact sequence of dual spaces 0 \to W^* \to V^* \to U^* \to 0 via the transposed maps. As an example, consider chain complexes: applying the transpose to the boundary maps dualizes the spaces and reverses the arrows, transforming a chain complex into a cochain complex (or vice versa).
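
A brief sketch, representing a subspace U \subseteq \mathbb{R}^5 by a basis matrix and functionals as vectors, confirms the dimension formula \dim \operatorname{Ann}(U) = \dim V - \dim U:

```python
# Sketch: Ann(U) consists of functionals phi with phi(u) = 0 on U; with
# functionals as vectors this is the null space of the transposed basis matrix.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 2))       # columns form a basis of U in V = R^5
Ann = null_space(M.T)                 # phi with phi @ M = 0, i.e. phi vanishes on U
assert Ann.shape[1] == 5 - np.linalg.matrix_rank(M)   # dim Ann(U) = dim V - dim U
```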

Duals of Subspaces and Quotients

Consider a vector space V over a field F and a subspace U \subseteq V. The inclusion map i: U \to V induces a map i^*: V^* \to U^*, defined by i^*(\phi) = \phi \circ i for \phi \in V^*, which is the restriction of \phi to U. The kernel of i^* is the annihilator \operatorname{Ann}(U) = \{\phi \in V^* \mid \phi(u) = 0 \ \forall u \in U\}. By the first isomorphism theorem, this yields the isomorphism U^* \cong V^* / \operatorname{Ann}(U). Now consider the quotient space V/U and the canonical projection \pi: V \to V/U. The transpose \pi^*: (V/U)^* \to V^* is given by \pi^*(\psi) = \psi \circ \pi for \psi \in (V/U)^*. This map is injective, and its image is precisely \operatorname{Ann}(U), since \pi^*(\psi) vanishes on U. Thus, (V/U)^* \cong \operatorname{Ann}(U) \subseteq V^*. When V admits a direct sum decomposition V = U \oplus W with W \cong V/U, the dual space decomposes as V^* \cong U^* \oplus W^* \cong U^* \oplus (V/U)^*, where the summands consist of functionals vanishing on W and on U, respectively. In finite-dimensional settings, or when V carries an inner product, a complement W can be chosen so that the decomposition is orthogonal with respect to the induced duality.

Representations

Matrix Representation

In finite-dimensional vector spaces over a field \mathbb{F}, the transpose of a linear map T: \mathbb{F}^m \to \mathbb{F}^n admits a concrete matrix representation once bases are chosen for the domain and codomain. Specifically, if [T]_{B,C} denotes the matrix of T with respect to an ordered basis B = \{b_1, \dots, b_m\} for \mathbb{F}^m and C = \{c_1, \dots, c_n\} for \mathbb{F}^n, then the matrix of the transpose map T^*: (\mathbb{F}^n)^* \to (\mathbb{F}^m)^* with respect to the dual bases C^* = \{c_1^*, \dots, c_n^*\} and B^* = \{b_1^*, \dots, b_m^*\} (where c_i^*(c_j) = \delta_{ij} and similarly for B^*) is the transposed matrix [T]_{B,C}^t. Under the standard identification of \mathbb{F}^n with column vectors and (\mathbb{F}^n)^* with row vectors (via the dual basis to the standard basis \{e_i\}, where e_i^*(e_j) = \delta_{ij}), the action of T^* on a row vector \ell is given by \ell \cdot [T]_{B,C}: functionals transform by right multiplication with the same matrix that transforms vectors by left multiplication. When changing bases, suppose P is the invertible change-of-basis matrix for the domain (columns of P are the new basis vectors in old coordinates) and Q for the codomain, so that T has matrix Q^{-1} [T]_{B,C} P in the new bases. The matrix of T^* with respect to the induced dual bases is then the transpose P^t [T]_{B,C}^t (Q^{-1})^t. As an example, consider a rotation by angle \theta in \mathbb{R}^2 with respect to the standard basis, represented by the matrix \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}. The transpose is \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix}, which equals the inverse (and represents a rotation by -\theta), since rotation matrices are orthogonal. This matrix correspondence is directly implemented in numerical computing libraries, where the matrix transpose operation realizes the action of the dual map under the standard vector-functional identification.
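
The rotation example, and the action of the dual map on row vectors, can be checked directly in numpy:

```python
# Sketch: for an orthogonal (rotation) matrix R, the transpose equals the
# inverse, and the dual map acts on functionals by right multiplication.
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R.T, np.linalg.inv(R))        # R^t = R^{-1} (rotation by -theta)

ell = np.array([1.0, 2.0])            # a functional as a row vector
v = np.array([0.5, -1.5])
assert np.isclose((ell @ R) @ v, ell @ (R @ v))  # dual action: ell -> ell R
```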

Coordinate-Free Description

The transpose of a linear map between vector spaces over a field k, often denoted T^*, provides a basis-independent construction that defines a contravariant functor on the category \mathbf{Vect}_k of vector spaces over k; equivalently, a covariant functor from \mathbf{Vect}_k to the opposite category (\mathbf{Vect}_k)^{\mathrm{op}}. Specifically, for a linear map T: V \to W, the transpose T^*: W^* \to V^* is defined by (T^* \phi)(v) = \phi(T v) for all \phi \in W^* and v \in V, where V^* = \mathrm{Hom}_k(V, k) is the dual space of V. This assignment reverses the direction of morphisms, sending T: V \to W to T^*: W^* \to V^*.

The functoriality of dualization implies naturality: commutative diagrams in \mathbf{Vect}_k correspond to commutative diagrams with reversed arrows under the duality functor. For instance, given linear maps U \xrightarrow{f} V \xrightarrow{g} W, the transposed maps satisfy (g \circ f)^* = f^* \circ g^*, so the sequence W^* \xrightarrow{g^*} V^* \xrightarrow{f^*} U^* reverses the original chain U \to V \to W. This reversal preserves exactness, reflecting the contravariant nature of dualization without reliance on coordinates.

From a category-theoretic perspective, the functor (-)^* is contravariantly self-adjoint: there is a natural isomorphism \mathrm{Hom}_{\mathbf{Vect}_k}(V, W^*) \cong \mathrm{Hom}_{\mathbf{Vect}_k}(W, V^*), both sides being identified with the space of bilinear maps V \times W \to k. On finite-dimensional spaces, dualization is moreover an equivalence between the category of finite-dimensional vector spaces and its opposite, with \mathrm{Hom}(V, W) \cong \mathrm{Hom}(W^*, V^*). Algebraically, this underscores the universal property of the dual space as the representing object for linear functionals.

A key tensor compatibility arises under the canonical isomorphism (V \otimes U)^* \cong V^* \otimes U^* for finite-dimensional spaces: the transpose of the induced map T \otimes \mathrm{id}_U: V \otimes U \to W \otimes U is identified with T^* \otimes \mathrm{id}_{U^*}: W^* \otimes U^* \to V^* \otimes U^*, preserving the structure of tensor products functorially. As an illustrative example, the duality between the k-th exterior power \bigwedge^k V and \bigwedge^k (V^*) is realized via the transpose acting on spaces of alternating multilinear forms; specifically, a linear map T: V \to W induces \bigwedge^k T: \bigwedge^k V \to \bigwedge^k W whose transpose is the natural map \bigwedge^k (W^*) \to \bigwedge^k (V^*), establishing the contravariant functoriality on exterior algebras.

Relation to the Hermitian Adjoint

The Hermitian adjoint of a bounded linear operator T: H \to K between complex Hilbert spaces H and K is the unique bounded linear operator T^\dagger: K \to H satisfying \langle T^\dagger y, x \rangle_H = \langle y, T x \rangle_K for all x \in H, y \in K, where the inner product is sesquilinear (linear in the first argument and conjugate-linear in the second). The transpose T^t of a linear map is defined purely algebraically as the dual map T^t: K^* \to H^* between dual spaces, without reference to any inner product structure, via (T^t \phi)(x) = \phi(T x) for \phi \in K^*, x \in H. In the absence of an inner product, the transpose remains an algebraic construct on the dual spaces. However, when inner products are present on H and K, the Riesz representation theorem identifies each space with its dual via a conjugate-linear isomorphism \iota_H: H \to H^* given by \iota_H(x)(\cdot) = \langle \cdot, x \rangle_H, allowing the algebraic transpose to be related to the adjoint as T^t = \iota_H \circ T^\dagger \circ \iota_K^{-1}, where the conjugation arises from the sesquilinearity of the inner product. Over the real numbers, with the standard inner product (which is bilinear), the adjoint coincides with the transpose, as there is no complex conjugation involved and the Riesz map is linear. In the complex case, however, the matrix representation of the adjoint T^\dagger with respect to orthonormal bases is the conjugate transpose A^* of the matrix A of T, whereas the algebraic transpose corresponds to the plain transpose A^t. For example, consider a unitary operator U: H \to H on a complex Hilbert space, which satisfies U^\dagger U = I, implying U^\dagger = U^{-1}. In contrast, the algebraic transpose U^t generally does not correspond to U^{-1}, as it lacks the conjugation; over the reals, unitary operators are orthogonal and U^t = U^{-1}.
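
The distinction between the plain transpose and the Hermitian adjoint is visible already for a diagonal unitary matrix on \mathbb{C}^2, as in this numpy sketch:

```python
# Sketch: for a complex unitary matrix, the conjugate transpose inverts it,
# while the plain transpose does not.
import numpy as np

theta = 0.3
U = np.array([[np.exp(1j * theta), 0.0],
              [0.0, np.exp(-1j * theta)]])       # a unitary operator on C^2
assert np.allclose(U.conj().T @ U, np.eye(2))    # U^dagger U = I
assert np.allclose(U.conj().T, np.linalg.inv(U)) # U^dagger = U^{-1}
assert not np.allclose(U.T, np.linalg.inv(U))    # the plain transpose fails
```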

The Adjoint in Inner Product Spaces

In inner product spaces, particularly Hilbert spaces, the concept of the transpose of a linear map extends naturally to the adjoint operator through the structure of the inner product. The Riesz representation theorem provides the key identification of the dual of a Hilbert space with the space itself. Specifically, for a Hilbert space H over the real or complex numbers, every continuous linear functional \phi \in H^* can be uniquely represented as \phi(y) = \langle y, x \rangle for some x \in H, where \langle \cdot, \cdot \rangle denotes the inner product; this establishes an anti-linear bijection J: H \to H^* given by J_x(y) = \langle y, x \rangle. Given a bounded linear operator T: H \to K between Hilbert spaces H and K, the algebraic transpose T^t: K^* \to H^* is defined as before. Using the Riesz maps J_H: H \to H^* and J_K: K \to K^*, the adjoint T^\dagger: K \to H is then constructed as T^\dagger = J_H^{-1} \circ T^t \circ J_K, which ensures T^\dagger is also bounded. Equivalently, T^\dagger satisfies \langle T^\dagger y, x \rangle_H = \langle y, T x \rangle_K for all x \in H and y \in K.

An operator T: H \to H is self-adjoint if T = T^\dagger, and normal if T T^\dagger = T^\dagger T. These notions generalize symmetric and normal matrices to infinite dimensions. For bounded operators, the adjoint satisfies (S T)^\dagger = T^\dagger S^\dagger and (T^\dagger)^\dagger = T. A classic example is the differentiation operator T = \frac{d}{dx} on the Hilbert space L^2[a, b] with an appropriate domain, such as absolutely continuous functions vanishing at the endpoints. Its adjoint is T^\dagger = -\frac{d}{dx} on a suitable domain ensuring the boundary terms vanish in integration by parts, reflecting the formal computation via the inner product.
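
A discretized sketch makes the formal adjoint of d/dx concrete: on a uniform grid with zero boundary conditions (an assumption of this illustration), the transpose of a forward-difference matrix is minus a backward-difference matrix:

```python
# Sketch: discretizing d/dx by forward differences, the matrix transpose
# is -(backward difference), mirroring the adjoint -d/dx from integration
# by parts with vanishing boundary terms.
import numpy as np

n, h = 6, 0.1
D = (np.eye(n, k=1) - np.eye(n)) / h              # forward difference ~ d/dx
Db = (np.eye(n) - np.eye(n, k=-1)) / h            # backward difference
assert np.allclose(D.T, -Db)                      # D^T = -(backward difference)

f = np.sin(np.arange(n) * h)
g = np.cos(np.arange(n) * h)
assert np.isclose((D @ f) @ g, f @ (D.T @ g))     # <Df, g> = <f, D^T g>
```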

Applications

Functional Analysis

In the context of functional analysis, the transpose of a linear map extends naturally to infinite-dimensional Banach spaces, where continuity plays a central role. Consider Banach spaces X and Y, and a bounded linear operator T: X \to Y. The transpose T^*: Y^* \to X^* is defined by (T^* \lambda)(x) = \lambda(Tx) for all \lambda \in Y^* and x \in X, where Y^* and X^* denote the continuous dual spaces. This operator T^* is itself bounded, with operator norm satisfying \|T^*\| = \|T\|.

The transpose interacts significantly with the weak and weak* topologies on Banach spaces and their duals. Specifically, T^* is continuous when Y^* and X^* are equipped with their weak* topologies, the coarsest topologies making all evaluation maps \lambda \mapsto \lambda(y) continuous. This weak* continuity arises because the defining relation for T^* preserves the duality pairing under limits in the weak* sense. In Hilbert spaces, which are a special case of reflexive Banach spaces, the transpose coincides with the adjoint defined via the inner product.

Reflexivity further refines these properties. A Banach space X is reflexive if the canonical embedding \iota: X \to X^{**}, given by \iota(x)(\lambda) = \lambda(x) for \lambda \in X^*, is surjective (hence bijective, as it is always an isometric injection). In this case, for any bounded operator T: X \to Y, the double transpose satisfies T^{**} = T via the identifications X \cong X^{**} and Y \cong Y^{**}, so the structure of the original operator is preserved under bidualization.

The closed graph theorem provides implications for the transpose regarding closedness. A linear operator between Banach spaces has a closed graph if and only if it is bounded, and for densely defined operators, the transpose (or adjoint in Hilbert spaces) is always closed. Consequently, T is closed if and only if T^* is closed, with the graph of T^* determined by the graph of T through the duality pairing on the product of the dual spaces. Moreover, the image of T is closed if and only if the image of T^* is weak* closed in X^*.

A concrete example illustrates these concepts in the Hilbert space L^2(\mathbb{R}). The Fourier transform F: L^2(\mathbb{R}) \to L^2(\mathbb{R}), defined initially on Schwartz functions and extended by density, is a bounded unitary operator. Its transpose, which coincides with the Hilbert adjoint F^* under the Riesz identification, satisfies F^* = F^{-1} by the Plancherel theorem, with the inverse transform given explicitly by (F^{-1} f)(x) = (F f)(-x). This demonstrates how the transpose recovers the inverse in this reflexive setting, preserving the L^2 norm via \|Ff\|_2 = \|f\|_2 = \|F^* f\|_2.
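
In the discrete setting, the unitarity of the Fourier transform and the recovery of the inverse by the conjugate transpose can be verified with numpy's FFT:

```python
# Sketch: the normalized DFT matrix is unitary, so its conjugate transpose
# inverts it and the discrete Plancherel identity holds.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
n = x.size
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)    # unitary DFT matrix
assert np.allclose(F.conj().T @ F, np.eye(n))     # F^dagger F = I
assert np.allclose(F.conj().T @ (F @ x), x)       # the adjoint inverts F
assert np.isclose(np.linalg.norm(F @ x), np.linalg.norm(x))  # Plancherel
```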

Optimization and Duality

In linear programming, the transpose of the linear map represented by the constraint matrix A is central to the formulation of the dual problem. For the primal problem of minimizing \mathbf{c}^\top \mathbf{x} subject to A \mathbf{x} \geq \mathbf{b} and \mathbf{x} \geq \mathbf{0}, the dual is to maximize \mathbf{b}^\top \mathbf{y} subject to A^\top \mathbf{y} \leq \mathbf{c} and \mathbf{y} \geq \mathbf{0}, where A^\top is the transpose of A. This structure emerges from the Lagrangian L(\mathbf{x}, \mathbf{y}) = \mathbf{c}^\top \mathbf{x} + \mathbf{y}^\top (\mathbf{b} - A \mathbf{x}), with the dual variables \mathbf{y} acting as shadow prices that balance the constraint rows against the objective coefficients.

Strong duality ensures that the primal and dual optimal values are equal under suitable conditions, such as Slater's condition, which posits the existence of a strictly feasible point in the relative interior of the feasible set. For convex programs satisfying such a constraint qualification, the zero duality gap allows primal solutions to be recovered from dual optima via complementary slackness, with the transpose facilitating sensitivity analysis and economic interpretations of the constraints. This equality holds because the saddle-point theorem aligns the infimum over primal variables with the supremum over dual variables when the qualifications are met.

In convex analysis, the Fenchel conjugate f^*(\mathbf{y}) = \sup_{\mathbf{x}} \langle \mathbf{y}, \mathbf{x} \rangle - f(\mathbf{x}) extends duality, and the transpose arises in chain rules for subdifferentials of compositions with linear maps. Specifically, for a convex function f and linear map A, the subdifferential satisfies \partial (f \circ A)(\mathbf{x}) = A^\top \partial f(A \mathbf{x}) under suitable qualification conditions, enabling the propagation of subgradients through transposed operators in proximal algorithms and conjugate computations. This relation underpins Fenchel-Rockafellar duality, where minimizers of f(\mathbf{x}) + g(A \mathbf{x}) correspond to zeros of the sum of conjugates involving A^\top.

The Karush-Kuhn-Tucker (KKT) conditions for constrained optimization further emphasize the transpose in stationarity requirements. For minimizing f(\mathbf{x}) subject to A \mathbf{x} \leq \mathbf{b}, the stationarity condition is \nabla f(\mathbf{x}^*) + A^\top \boldsymbol{\lambda}^* = \mathbf{0}, where \boldsymbol{\lambda}^* \geq \mathbf{0} are multipliers enforcing primal feasibility and complementary slackness. These conditions are necessary for local optimality under constraint qualifications such as linear independence of the active constraint gradients, and the transposed term balances the objective gradient against the constraint directions.

A key application appears in support vector machines (SVMs), where the dual formulation exploits the symmetry of the kernel matrix built from inner products. The soft-margin SVM dual maximizes \sum_i \alpha_i - \frac{1}{2} \boldsymbol{\alpha}^\top Q \boldsymbol{\alpha} subject to 0 \leq \alpha_i \leq C and \sum_i \alpha_i y_i = 0, with Q_{ij} = y_i y_j K(\mathbf{x}_i, \mathbf{x}_j) and kernel K(\mathbf{u}, \mathbf{v}) = \phi(\mathbf{u})^\top \phi(\mathbf{v}); the symmetry K(\mathbf{u}, \mathbf{v}) = K(\mathbf{v}, \mathbf{u}) ensures Q is symmetric positive semidefinite, allowing efficient optimization via kernel tricks without explicit feature maps. This dual, solved in the space of Lagrange multipliers, recovers the hyperplane normal as \mathbf{w} = \sum_i \alpha_i y_i \phi(\mathbf{x}_i), highlighting the transpose's role in implicit high-dimensional computations.
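
A small scipy sketch (with illustrative data c, A, b chosen here, not taken from any source) exhibits strong duality for a linear program and its transposed dual:

```python
# Sketch: the dual LP is built from A.T, and at optimality the primal and
# dual objective values coincide (strong duality).
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([2.0, 3.0])

# Primal: min c^T x  s.t.  A x >= b, x >= 0   (linprog expects <= constraints)
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# Dual:   max b^T y  s.t.  A^T y <= c, y >= 0 (minimize -b^T y)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

assert primal.success and dual.success
assert np.isclose(primal.fun, -dual.fun)      # equal optimal values (here 4.5)
```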

Signal Processing and Control Theory

In signal processing, the transpose of a convolution operator, which represents filtering a signal with an impulse response h(t), is given by convolution with the time-reversed and complex-conjugated response h^*(-t). This property arises because the adjoint operator on the Hilbert space of square-integrable functions preserves the inner product structure, leading to the matched-filter interpretation in which the transpose maximizes the output signal-to-noise ratio for detection. For discrete-time signals, this manifests as the transpose of the lower-triangular Toeplitz matrix associated with causal convolution, which becomes upper-triangular after transposition, effectively reversing the direction of time.

A prominent example occurs in the discrete Fourier transform (DFT), where the transform is represented by the Vandermonde-like matrix F with entries F_{jk} = \omega^{(j-1)(k-1)} for \omega = e^{-2\pi i / n}. The inverse DFT is then F^{-1} = \frac{1}{n} F^H, where F^H denotes the conjugate transpose of F, reflecting the unitary nature of the transform up to scaling. This relation enables efficient computation of the inverse via conjugation and transposition followed by normalization, underpinning fast transform algorithms in digital signal processing.

In control theory, the transpose plays a key role in analyzing linear state-space systems of the form \dot{x}(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t), where the adjoint (transposed) system governs backward-time dynamics for estimation and optimal control. The adjoint system is \dot{p}(t) = -A^T(t) p(t) - C^T(t) v(t), with terminal condition p(t_f) = 0 and output z(t) = B^T(t) p(t) + D^T(t) v(t), where p(t) is the co-state and v(t) drives the adjoint dynamics; this formulation ensures \langle y, v \rangle = \langle u, z \rangle in the appropriate inner products, facilitating duality between control and estimation problems.

The transpose also establishes duality between controllability and observability: a system (A, B, C) is controllable if the rank of the controllability matrix [B \; AB \; \cdots \; A^{n-1}B] is full, and equivalently its dual system (\tilde{A}, \tilde{B}, \tilde{C}) = (A^T, C^T, B^T) is observable, since the observability matrix of the dual system is precisely the transpose of the original controllability matrix. For stable infinite-horizon systems, the controllability Gramian W_c = \int_0^\infty e^{At} B B^T e^{A^T t} \, dt satisfies the Lyapunov equation A W_c + W_c A^T + B B^T = 0, and the observability Gramian W_o = \int_0^\infty e^{A^T t} C^T C e^{A t} \, dt uses the transposed dynamics; W_c \succ 0 characterizes controllability exactly as W_o \succ 0 characterizes observability.

An illustrative application appears in Kalman smoothing for state estimation, where the fixed-interval smoother employs adjoint variables to refine estimates backward in time, incorporating future measurements to minimize variance. In recursive formulations, the adjoint (co-state) variable propagates innovations backward, solving a two-point boundary-value problem akin to optimal control, as seen in multi-link robot arm dynamics where spatial accelerations serve as adjoints to compute smoothed joint states from noisy observations.
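
The controllability-observability duality reduces to a transpose identity between the two test matrices, as this numpy sketch for a two-state system shows:

```python
# Sketch: the observability matrix of the dual system (A^T, C^T, B^T)
# is the transpose of the original controllability matrix.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

ctrb = np.hstack([B, A @ B])                     # [B, AB] for n = 2
obsv_dual = np.vstack([B.T, B.T @ A.T])          # dual output B^T, dynamics A^T
assert np.allclose(obsv_dual, ctrb.T)            # the two tests are transposes
assert np.linalg.matrix_rank(ctrb) == np.linalg.matrix_rank(obsv_dual) == 2
```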
