
Adjoint

In functional analysis, the adjoint (or Hermitian adjoint) of a bounded linear operator T: H \to K between Hilbert spaces H and K is defined as the unique bounded linear operator T^*: K \to H satisfying \langle T u, v \rangle_K = \langle u, T^* v \rangle_H for all u \in H and v \in K, where \langle \cdot, \cdot \rangle denotes the respective inner products. This concept generalizes the conjugate transpose for matrices in finite-dimensional complex vector spaces, where if A is the matrix representation of T, then A^* = \overline{A}^T, the matrix obtained by taking the complex conjugate of each entry and then transposing. The adjoint plays a fundamental role in spectral theory, enabling decompositions like the spectral theorem for compact operators and ensuring self-adjoint operators (where T = T^*) have real eigenvalues and orthogonal eigenspaces. The notion of the adjoint traces its origins to the 18th-century calculus of variations, with early formulations appearing in Lagrange's 1760 memoir on differential equations, and was formalized within operator theory in the early 20th century. In finite-dimensional linear algebra, a related but distinct concept is the adjugate (or classical adjoint) of a square matrix A, defined as the transpose of its cofactor matrix, satisfying A \cdot \operatorname{adj}(A) = \det(A) I, which is used to compute inverses via A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A). Beyond operators, the term "adjoint" denotes adjoint functors in category theory: for functors F: \mathcal{C} \to \mathcal{D} and G: \mathcal{D} \to \mathcal{C}, F is left adjoint to G if there is a natural isomorphism \operatorname{Hom}_\mathcal{D}(F(c), d) \cong \operatorname{Hom}_\mathcal{C}(c, G(d)) for all objects c \in \mathcal{C}, d \in \mathcal{D}, capturing universal constructions and appearing in examples like free groups and tensor-hom adjunctions. In Lie theory, the adjoint representation of a Lie algebra \mathfrak{g} is the map \operatorname{ad}_x: \mathfrak{g} \to \mathfrak{g} given by \operatorname{ad}_x(y) = [x, y], the Lie bracket, which encodes the structure of inner derivations and is crucial for the classification of semisimple Lie algebras.
Across the sciences and engineering, adjoints underpin applications such as backpropagation in neural networks (an analog of adjoint-state methods), stability analysis in differential equations, and regularization in inverse problems, unifying theoretical and computational frameworks.

In Linear Algebra

Adjugate Matrix

The adjugate, also known as the classical adjoint, of a square matrix A, denoted \operatorname{adj}(A), is defined as the transpose of the cofactor matrix of A. The cofactor matrix C has entries C_{ij} = (-1)^{i+j} \det(M_{ij}), where M_{ij} is the submatrix obtained by deleting the i-th row and j-th column of A. To compute the adjugate, first determine the cofactor for each entry of A: calculate the determinant of the submatrix formed by removing the corresponding row and column, then apply the sign factor (-1)^{i+j}. Arrange these cofactors into the cofactor matrix C, and finally take its transpose to obtain \operatorname{adj}(A). For a 2×2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, the cofactors are C_{11} = d, C_{12} = -c, C_{21} = -b, and C_{22} = a. The cofactor matrix is \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}, so \operatorname{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. For the 3×3 matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}, the cofactors include C_{11} = \det\begin{pmatrix} 1 & 4 \\ 6 & 0 \end{pmatrix} = -24, C_{12} = -\det\begin{pmatrix} 0 & 4 \\ 5 & 0 \end{pmatrix} = -(-20) = 20, and so on, yielding \operatorname{adj}(A) = \begin{pmatrix} -24 & 18 & 5 \\ 20 & -15 & -4 \\ -5 & 4 & 1 \end{pmatrix}. A fundamental property is that A \cdot \operatorname{adj}(A) = \operatorname{adj}(A) \cdot A = \det(A) \, I, where I is the identity matrix. To derive this, consider the (i,k)-entry of A \cdot \operatorname{adj}(A): \sum_{j=1}^n a_{ij} (\operatorname{adj}(A))_{jk} = \sum_{j=1}^n a_{ij} C_{kj}. By cofactor expansion, if i = k, this sum equals \det(A); if i \neq k, it equals the determinant of a matrix with two identical rows, which is zero. Thus, the product is \det(A) \, I. This property enables the computation of the matrix inverse for invertible matrices: if \det(A) \neq 0, then A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A). The adjugate was introduced in the 19th century as part of the theory of determinants.
The adjugate is defined only for square matrices. When \det(A) = 0, the matrix A is singular, and A \cdot \operatorname{adj}(A) = 0, though \operatorname{adj}(A) itself may be nonzero for n > 1.
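The computation described above can be sketched in pure Python (a minimal illustration, not an optimized routine; cofactor expansion runs in exponential time in the matrix size):

```python
from fractions import Fraction

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    """Transpose of the cofactor matrix: adj(A)[i][j] = C_ji."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
adjA = adjugate(A)          # [[-24, 18, 5], [20, -15, -4], [-5, 4, 1]]
d = det(A)                  # 1

# A . adj(A) = det(A) I, and A^{-1} = adj(A) / det(A)
prod = [[sum(A[i][k] * adjA[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[d, 0, 0], [0, d, 0], [0, 0, d]]
A_inv = [[Fraction(adjA[i][j], d) for j in range(3)] for i in range(3)]
```

Using `Fraction` keeps the inverse exact for integer matrices rather than introducing floating-point error.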

Hermitian Adjoint

The Hermitian adjoint of a complex matrix A, denoted A^*, is defined as the conjugate transpose of A. Specifically, if A = (a_{ij}) is an m \times n matrix with entries a_{ij} \in \mathbb{C}, then A^* = (b_{ij}) where b_{ij} = \overline{a_{ji}} for all i, j, with \overline{\cdot} denoting the complex conjugate. This operation first transposes the matrix and then takes the conjugate of each entry, distinguishing it from the ordinary transpose A^T, which omits the conjugation (the two coincide for real matrices). Alternative notations include A^H or A^\dagger, the latter particularly in physics contexts. Several key properties hold for the Hermitian adjoint, each verifiable entrywise. First, (A^*)^* = A: the (i,j)-entry of (A^*)^* is \overline{b_{ji}} = \overline{\overline{a_{ij}}} = a_{ij}, so the operation recovers A. Second, for matrices A and B of compatible dimensions, (AB)^* = B^* A^*: the (i,j)-entry of (AB)^* is \overline{(AB)_{ji}} = \overline{\sum_k a_{jk} b_{ki}} = \sum_k \overline{b_{ki}} \, \overline{a_{jk}} = \sum_k (B^*)_{ik} (A^*)_{kj}, which is the (i,j)-entry of B^* A^*. Third, (A + B)^* = A^* + B^*: the (i,j)-entry of (A + B)^* is \overline{a_{ji} + b_{ji}} = \overline{a_{ji}} + \overline{b_{ji}}, matching the sum of the adjoints. Finally, for a complex scalar c, (cA)^* = \bar{c} A^*: the (i,j)-entry is \overline{c \, a_{ji}} = \bar{c} \, \overline{a_{ji}}. In the context of finite-dimensional inner product spaces, the Hermitian adjoint relates directly to the inner product. Consider \mathbb{C}^n equipped with the standard Hermitian inner product \langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^* \mathbf{y} for column vectors \mathbf{x}, \mathbf{y} \in \mathbb{C}^n. For a matrix A \in M_n(\mathbb{C}) acting as a linear operator, \langle A \mathbf{x}, \mathbf{y} \rangle = (A \mathbf{x})^* \mathbf{y} = \mathbf{x}^* A^* \mathbf{y} = \langle \mathbf{x}, A^* \mathbf{y} \rangle, verifying that the adjoint satisfies this defining relation. This ensures the adjoint is unique in finite dimensions.
For example, consider the column vector A = \begin{pmatrix} 1 \\ i \\ -2i \end{pmatrix}; its adjoint is the row vector A^* = \begin{pmatrix} 1 & -i & 2i \end{pmatrix}, obtained by transposing and conjugating. Another example is the 2 \times 2 matrix B = \begin{pmatrix} 1 & i \\ -5i & i \end{pmatrix}, with B^* = \begin{pmatrix} 1 & 5i \\ -i & -i \end{pmatrix}. A significant application arises for Hermitian matrices (A = A^*), which are diagonalizable with real eigenvalues; for instance, the matrix \begin{pmatrix} 2 & 1-i \\ 1+i & 3 \end{pmatrix} has eigenvalues 1 and 4, both real.
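These properties are straightforward to check numerically. The sketch below, assuming NumPy is available, verifies the defining inner-product relation, the product rule, and the real spectrum of the example Hermitian matrix:

```python
import numpy as np

A = np.array([[1, 1j], [-5j, 1j]])
A_star = A.conj().T                      # Hermitian adjoint: transpose, then conjugate

# Defining relation <Ax, y> = <x, A* y>, with <u, v> = sum(conj(u) * v);
# np.vdot conjugates its first argument, matching this convention.
rng = np.random.default_rng(0)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
assert np.isclose(np.vdot(A @ x, y), np.vdot(x, A_star @ y))

# (AB)* = B* A* on a random complex matrix
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)

# A Hermitian matrix (H = H*) has real eigenvalues
H = np.array([[2, 1 - 1j], [1 + 1j, 3]])
assert np.allclose(H, H.conj().T)
assert np.allclose(np.linalg.eigvalsh(H), [1.0, 4.0])
```

Note that `np.vdot` conjugates its first argument, so the inner product here is conjugate-linear in the first slot, matching the convention \langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^* \mathbf{y} used in the text.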

In Functional Analysis

Adjoint Operator

In the context of functional analysis, Hilbert spaces provide a natural setting for extending the notion of the adjoint from finite-dimensional spaces to infinite dimensions. A Hilbert space is a complete inner product space, and bounded linear operators between Hilbert spaces H and K are continuous linear maps T: H \to K satisfying \|T\| = \sup_{\|x\| \leq 1} \|T x\| < \infty. The adjoint operator T^*: K \to H of a bounded linear operator T: H \to K is defined by the relation \langle T x, y \rangle_K = \langle x, T^* y \rangle_H for all x \in H and y \in K, where \langle \cdot, \cdot \rangle_H and \langle \cdot, \cdot \rangle_K denote the inner products on H and K, respectively; the inner products are sesquilinear, and this relation determines T^* as a bounded linear operator. The existence and uniqueness of T^* for bounded T are guaranteed by the Riesz representation theorem, which states that every continuous linear functional on a Hilbert space is uniquely represented as an inner product with some fixed vector. Specifically, for fixed y \in K, the map x \mapsto \langle T x, y \rangle_K is a continuous linear functional on H, so there exists a unique z \in H such that \langle T x, y \rangle_K = \langle x, z \rangle_H; setting T^* y = z defines T^*, and linearity of T^* follows from the uniqueness of the representing vectors. Explicit constructions of adjoints are available for certain classes of operators. For an integral operator T: L^2(\Omega) \to L^2(\Omega) defined by (T f)(x) = \int_\Omega k(x, y) f(y) \, dy with kernel k \in L^2(\Omega \times \Omega), the adjoint is the integral operator with kernel \overline{k(y, x)}, the conjugate of the transposed kernel, since \langle T f, g \rangle = \int_\Omega \left( \int_\Omega k(x, y) f(y) \, dy \right) \overline{g(x)} \, dx = \int_\Omega f(y) \, \overline{\left( \int_\Omega \overline{k(x, y)} \, g(x) \, dx \right)} \, dy = \langle f, T^* g \rangle, so that (T^* g)(y) = \int_\Omega \overline{k(x, y)} \, g(x) \, dx.
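The kernel formula can be illustrated by discretizing the integral operator on a uniform grid, where the operator becomes a matrix and its adjoint the conjugate transpose (a rough sketch assuming NumPy; the grid size, sample kernel e^{ixy}, and interval [0,1] are illustrative choices):

```python
import numpy as np

# Discretize (Tf)(x) = ∫ k(x,y) f(y) dy on [0,1] with a uniform grid:
# T becomes the matrix T[i,j] = k(x_i, x_j) * h, and its adjoint the
# conjugate transpose, matching the adjoint kernel conj(k(y,x)).
n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
T = np.exp(1j * np.outer(x, x)) * h      # sample kernel k(x,y) = e^{ixy}
T_adj = T.conj().T

rng = np.random.default_rng(1)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# <Tf, g> = <f, T* g> with the discretized inner product h * sum(u * conj(v))
lhs = h * np.sum((T @ f) * np.conj(g))
rhs = h * np.sum(f * np.conj(T_adj @ g))
assert np.isclose(lhs, rhs)
```

The identity holds exactly at the matrix level; discretization error only affects how well the matrix approximates the continuous operator.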
For differential operators, which are typically unbounded and defined on dense subspaces (e.g., C_c^\infty(\mathbb{R}) in L^2(\mathbb{R})), the adjoint is constructed via integration by parts on a domain where the boundary terms vanish, ensuring the domain of T^* is dense; for instance, the adjoint of d/dx on this domain acts as -d/dx. The adjoint operation satisfies several core properties, each provable from the defining inner product relation. First, (T^*)^* = T: for x \in H and y \in K, conjugate symmetry gives \langle T^* y, x \rangle_H = \overline{\langle x, T^* y \rangle_H} = \overline{\langle T x, y \rangle_K} = \langle y, T x \rangle_K, which is precisely the defining relation for (T^*)^*, so (T^*)^* x = T x by uniqueness. Second, for bounded operators T: H \to K and S: K \to L, (S T)^* = T^* S^*: \langle S T x, y \rangle_L = \langle T x, S^* y \rangle_K = \langle x, T^* S^* y \rangle_H. Third, (T + S)^* = T^* + S^* for T, S: H \to K follows from linearity: \langle (T + S) x, y \rangle_K = \langle T x, y \rangle_K + \langle S x, y \rangle_K = \langle x, T^* y \rangle_H + \langle x, S^* y \rangle_H = \langle x, (T^* + S^*) y \rangle_H. Additionally, for a scalar \alpha \in \mathbb{C}, (\alpha T)^* = \overline{\alpha} T^*. For bounded T, the norm equality \|T\| = \|T^*\| holds: \|T^*\| \leq \|T\| because |\langle x, T^* y \rangle_H| = |\langle T x, y \rangle_K| \leq \|T\| \|x\| \|y\|, so \|T^* y\| \leq \|T\| \|y\|; the reverse inequality follows by applying the same argument to T^{**} = T. Representative examples illustrate these concepts. On L^2(\Omega) with \Omega \subset \mathbb{R}^n measurable, the multiplication operator M_m f = m f by a bounded measurable function m: \Omega \to \mathbb{C} has adjoint M_{\overline{m}}, since \langle m f, g \rangle = \int_\Omega m f \overline{g} = \int_\Omega f \, \overline{\overline{m} g} = \langle f, \overline{m} g \rangle.
The Fourier transform \mathcal{F}: L^2(\mathbb{R}) \to L^2(\mathbb{R}), defined initially on Schwartz functions and extended unitarily, satisfies \mathcal{F}^* = \mathcal{F}^{-1}, making it unitary (hence \|\mathcal{F}\| = \|\mathcal{F}^*\| = 1); up to a convention-dependent normalization factor of 2\pi, \mathcal{F} preserves inner products by Plancherel's theorem. For unbounded operators like the momentum operator -i \, d/dx on L^2(\mathbb{R}), the domain of the adjoint is the set of functions whose distributional derivative lies in L^2(\mathbb{R}), which is dense in L^2(\mathbb{R}). A key spectral fact is that \sigma(T^*) = \{ \overline{\lambda} : \lambda \in \sigma(T) \}: the spectrum of the adjoint is the reflection of \sigma(T) across the real axis.
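The unitarity of the Fourier transform has a direct discrete analog: with the orthonormal normalization, the DFT's adjoint coincides with its inverse (a sketch assuming NumPy; `norm="ortho"` selects the unitary convention):

```python
import numpy as np

# With the unitary normalization, the DFT F satisfies F* = F^{-1}:
# <F x, y> = <x, F^{-1} y>  (np.vdot conjugates its first argument).
rng = np.random.default_rng(2)
n = 64
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

Fx = np.fft.fft(x, norm="ortho")
assert np.isclose(np.vdot(Fx, y), np.vdot(x, np.fft.ifft(y, norm="ortho")))

# Plancherel: the transform preserves the L2 norm
assert np.isclose(np.linalg.norm(Fx), np.linalg.norm(x))
```

With the default normalization (no `norm` argument) the same identities hold only up to the factor n, mirroring the 2\pi bookkeeping in the continuous case.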

Self-Adjoint Operator

In functional analysis, a densely defined linear operator T on a Hilbert space H is self-adjoint if it equals its adjoint, meaning T = T^* and the domain of T coincides with the domain of T^*, i.e., D(T) = D(T^*). This condition ensures that T is symmetric, satisfying \langle Tx, y \rangle = \langle x, Ty \rangle for all x, y \in D(T), and cannot be extended further while preserving this symmetry. Self-adjoint operators are central to quantum mechanics, where they represent observable quantities with real measurement outcomes. A symmetric operator T (satisfying T \subseteq T^*) is self-adjoint precisely when D(T) = D(T^*). Symmetric operators may fail to be self-adjoint if their domains are too restrictive, but many are essentially self-adjoint, meaning their closure \overline{T} is self-adjoint; this occurs exactly when the deficiency subspaces satisfy \ker(T^* \pm iI) = \{0\}, ensuring a unique self-adjoint extension. Essential self-adjointness simplifies spectral analysis by guaranteeing a canonical self-adjoint realization without ambiguity in boundary conditions. Self-adjoint operators exhibit key properties: their spectrum \sigma(T) is real and non-empty, eigenvalues are real, and eigenspaces for distinct eigenvalues are orthogonal. Bounded self-adjoint operators are in particular normal, facilitating diagonalization via the spectral theorem. These traits ensure stability in applications like quantum dynamics. The spectral theorem provides a canonical decomposition for self-adjoint operators. For a bounded self-adjoint operator T, there exists a unique projection-valued spectral measure E on \mathbb{R} such that T = \int_{\sigma(T)} \lambda \, dE(\lambda), where the integral is taken with respect to the spectral resolution of the identity.
This representation arises from the functional calculus: starting from the continuous functional calculus for normal operators, one extends to Borel functions f on \sigma(T) to define f(T) = \int f(\lambda) \, dE(\lambda), with the identity function yielding T itself; the construction uses the Riesz representation theorem to build E from the resolvent (T - zI)^{-1} for z \notin \mathbb{R}, ensuring strong convergence and orthogonality of the projections. For unbounded self-adjoint operators, the theorem extends via the same integral form, with domain D(T) = \{ x \in H : \int |\lambda|^2 \, d\|E(\lambda)x\|^2 < \infty \}. Prominent examples include the position operator Q on L^2(\mathbb{R}), defined by (Q\phi)(x) = x \phi(x); it is essentially self-adjoint on the Schwartz space \mathcal{S}(\mathbb{R}) or on C_c^\infty(\mathbb{R}), and its self-adjoint closure is unbounded with domain \{ \phi \in L^2(\mathbb{R}) : x\phi \in L^2(\mathbb{R}) \}. The momentum operator P = -i \frac{d}{dx} on the same space, with domain C_c^\infty(\mathbb{R}), is likewise essentially self-adjoint, requiring closure to achieve full self-adjointness. On bounded domains, the Laplacian -\Delta with Dirichlet boundary conditions (vanishing at the boundary) on L^2(\Omega) for bounded \Omega \subset \mathbb{R}^d is self-adjoint on the Sobolev domain H^2(\Omega) \cap H_0^1(\Omega), yielding a discrete spectrum of positive eigenvalues accumulating only at infinity. For symmetric operators lacking self-adjointness, von Neumann's theory characterizes extensions: if the deficiency indices n_\pm = \dim \ker(T^* \mp iI) are equal, the self-adjoint extensions are parametrized by unitary operators U from \ker(T^* - iI) to \ker(T^* + iI), with domains D(T_U) = D(\overline{T}) \oplus \{ \varphi + U\varphi : \varphi \in \ker(T^* - iI) \}, on which T_U acts as the restriction of T^*.
This framework, originally developed by von Neumann in his work on the uniqueness of the Schrödinger operators, allows selection of physically relevant extensions, such as the Friedrichs or Krein-von Neumann extensions for positive operators.
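In finite dimensions the spectral theorem is concrete: a Hermitian matrix decomposes into real eigenvalues and orthogonal spectral projections. A minimal NumPy sketch:

```python
import numpy as np

# Finite-dimensional spectral theorem: a Hermitian matrix T decomposes as
# T = sum_k lambda_k P_k, with real eigenvalues lambda_k and orthogonal
# rank-one projections P_k = v_k v_k*.
T = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])
eigvals, eigvecs = np.linalg.eigh(T)      # eigh assumes T is Hermitian

assert np.allclose(eigvals.imag, 0)        # spectrum is real
recon = sum(lam * np.outer(v, v.conj())
            for lam, v in zip(eigvals, eigvecs.T))
assert np.allclose(recon, T)               # T = sum of lambda_k P_k
```

The projection-valued measure of the infinite-dimensional theorem reduces here to the finite sum of the rank-one projectors onto the eigenspaces.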

In Category Theory

Adjoint Functors

In category theory, categories consist of objects and morphisms between them, with composition and identity morphisms satisfying associativity and unit axioms, while functors are structure-preserving mappings between categories. A pair of functors F: \mathcal{C} \to \mathcal{D} and G: \mathcal{D} \to \mathcal{C} form an adjunction, denoted F \dashv G, if there is a natural isomorphism \mathrm{Hom}_\mathcal{D}(F X, Y) \cong \mathrm{Hom}_\mathcal{C}(X, G Y) for all objects X in \mathcal{C} and Y in \mathcal{D}. Here, F is the left adjoint and G is the right adjoint, capturing a duality between constructions in the two categories. Equivalently, an adjunction is defined by natural transformations: the unit \eta: \mathrm{Id}_\mathcal{C} \to G F and the counit \epsilon: F G \to \mathrm{Id}_\mathcal{D}, which satisfy the triangle identities \epsilon_{FX} \circ F\eta_X = \mathrm{id}_{FX} for every object X in \mathcal{C}, and G\epsilon_Y \circ \eta_{GY} = \mathrm{id}_{GY} for every object Y in \mathcal{D}. These identities verify the bijection via the correspondence \phi: \mathrm{Hom}_\mathcal{D}(F X, Y) \to \mathrm{Hom}_\mathcal{C}(X, G Y) given by \phi(f) = G f \circ \eta_X, with inverse \psi(g) = \epsilon_Y \circ F g. Adjoint functors exhibit key preservation properties: left adjoints preserve colimits, while right adjoints preserve limits. Specifically, if F \dashv G, then F preserves all colimits that exist in \mathcal{C}, and G preserves all limits that exist in \mathcal{D}.
A classic example is the free group functor F: \mathbf{Set} \to \mathbf{Grp}, which sends a set to the free group on that set, left adjoint to the forgetful functor U: \mathbf{Grp} \to \mathbf{Set} that maps a group to its underlying set; the unit embeds a set of generators into the underlying set of the free group, and the counit evaluates formal words on a group's elements as actual products in that group. Another is the tensor product functor - \otimes_R M for a fixed R-module M over a commutative ring R, which is left adjoint to the Hom functor \mathrm{Hom}_R(M, -), with the isomorphism \mathrm{Hom}_R(N \otimes_R M, P) \cong \mathrm{Hom}_R(N, \mathrm{Hom}_R(M, P)) natural in N and P. In a simplicial setting, the fundamental groupoid functor from simplicial sets to groupoids, sending a simplicial set to the groupoid with its vertices as objects and homotopy classes of edge paths as morphisms, is left adjoint to the symmetric nerve functor in the opposite direction. The concept of adjoint functors was introduced by Daniel M. Kan in 1958, motivated by applications in homotopy theory.
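A small executable analog of the hom-set bijection is the product-hom (currying) adjunction in \mathbf{Set}, \mathrm{Hom}(X \times A, Y) \cong \mathrm{Hom}(X, Y^A). The Python sketch below is illustrative, with ordinary functions standing in for morphisms; it exhibits \phi and \psi as mutually inverse:

```python
# The product-hom adjunction in Set: Hom(X × A, Y) ≅ Hom(X, Y^A),
# realized by currying; phi and psi are the two directions of the bijection.
def phi(f):
    """Send f : X × A -> Y to its curried form X -> (A -> Y)."""
    return lambda x: lambda a: f(x, a)

def psi(g):
    """Inverse direction: uncurry g : X -> (A -> Y) to X × A -> Y."""
    return lambda x, a: g(x)(a)

add = lambda x, a: x + a
assert psi(phi(add))(2, 3) == add(2, 3) == 5          # psi . phi = id
assert phi(psi(lambda x: lambda a: x * a))(4)(5) == 20  # phi . psi = id
```

Here the functor - \times A plays the role of the left adjoint F and the exponential (-)^A the role of the right adjoint G, so currying is literally the adjunction isomorphism evaluated pointwise.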

Monads and Adjunctions

In category theory, given an adjunction F \dashv G where F: \mathcal{C} \to \mathcal{D} is left adjoint to G: \mathcal{D} \to \mathcal{C}, the composite endofunctor T = GF: \mathcal{C} \to \mathcal{C} forms a monad on \mathcal{C}, equipped with a unit natural transformation \eta: \mathrm{Id}_\mathcal{C} \to T (the unit of the adjunction) and a multiplication natural transformation \mu: T^2 \to T (defined by \mu = G \epsilon F, where \epsilon: FG \to \mathrm{Id}_\mathcal{D} is the counit). The associated Kleisli category \mathcal{C}_T has the same objects as \mathcal{C}, with morphisms from X to Y given by \mathcal{C}(X, TY), composed via the monad structure. Every monad arises from an adjunction, for instance via the free-forgetful adjunction between the Eilenberg-Moore category of T-algebras and the base category \mathcal{C}. Beck's monadicity theorem provides necessary and sufficient conditions for a functor U: \mathcal{E} \to \mathcal{D} to be monadic, meaning it has a left adjoint and \mathcal{E} is equivalent to the Eilenberg-Moore category of the induced monad on \mathcal{D}; the conditions require U to have a left adjoint, to reflect isomorphisms, and that \mathcal{E} has, and U preserves, coequalizers of U-split parallel pairs. The powerset monad on the category of sets \mathbf{Set} arises from the free-forgetful adjunction between complete join-semilattices and \mathbf{Set}, where the monad T(X) = \mathcal{P}(X) (the powerset) has unit mapping x \mapsto \{x\} and multiplication taking unions of sets of sets. In programming, the list monad models non-deterministic computations, with T(X) as the type of finite lists of elements from X, unit forming singleton lists, and multiplication by concatenation; analogous structures include the reader monad (environment-dependent computations, T(X) = E \to X), the writer monad (logging outputs, T(X) = X \times M for a monoid M), and the state monad (mutable state, T(X) = S \to (X \times S)), all arising from suitable adjunctions.
Monads find key applications in functional programming, where they encapsulate side effects in pure languages like Haskell; the do-notation desugars to Kleisli composition, enabling sequenced computations such as I/O or state management without explicit monad transformers in basic cases. In algebra, monads underpin descent theory in algebraic geometry, where effective descent data for schemes or stacks corresponds to monadic functors via Beck's monadicity theorem, allowing global objects to be reconstructed from local data under faithfully flat morphisms. Dually, comonads arise from the composite FG of an adjunction F \dashv G, providing a counit and comultiplication on \mathcal{D}, and serve as the categorical dual of monads in contexts like coalgebras or the costate (store) comonad in programming.
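The list monad described above can be sketched directly (a minimal Python illustration; `unit` and `bind` are the conventional names, with `bind` combining the functor action with the multiplication):

```python
# The list monad: unit wraps a value in a singleton list; bind maps a
# list-valued function over a list and flattens (joins) the result.
def unit(x):
    return [x]

def bind(xs, f):
    """Map f over xs, then concatenate: the Kleisli-composition building block."""
    return [y for x in xs for y in f(x)]

# Non-deterministic computation: each step returns all possible results.
neighbors = lambda n: [n - 1, n + 1]
assert bind(unit(0), neighbors) == [-1, 1]
assert bind(bind(unit(0), neighbors), neighbors) == [-2, 0, 0, 2]

# Monad laws, checked on samples: left identity and right identity.
assert bind(unit(3), neighbors) == neighbors(3)
assert bind([1, 2], unit) == [1, 2]
```

Haskell's do-notation for lists desugars to exactly this `bind`, which is why list comprehensions and non-deterministic search share the same structure.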

Other Mathematical Contexts

Adjoint Representation

In the context of Lie theory, a Lie algebra \mathfrak{g} over \mathbb{R} or \mathbb{C} is a vector space endowed with a skew-symmetric bilinear Lie bracket [X, Y] satisfying the Jacobi identity [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0. This bracket measures the non-commutativity of elements and encodes the infinitesimal symmetries of a Lie group. An associated Lie group G has Lie algebra \mathfrak{g} \cong \mathfrak{X}(G)^L, the space of left-invariant vector fields on G. The adjoint representation of a Lie algebra \mathfrak{g} is the homomorphism \operatorname{ad}: \mathfrak{g} \to \operatorname{End}(\mathfrak{g}) defined by \operatorname{ad}_X(Y) = [X, Y] for X, Y \in \mathfrak{g}, where \operatorname{End}(\mathfrak{g}) denotes the space of linear endomorphisms of \mathfrak{g}. This equips \mathfrak{g} with a natural module structure over itself via commutation. For the corresponding Lie group G, the adjoint representation \operatorname{Ad}: G \to \operatorname{Aut}(\mathfrak{g}) is given (for matrix groups) by \operatorname{Ad}_g(X) = g X g^{-1} for g \in G and X \in \mathfrak{g}, and in general by differentiating conjugation at the identity; its differential at the identity yields the Lie algebra adjoint representation \operatorname{ad}. Key properties of the adjoint representation include its status as a Lie algebra homomorphism: [\operatorname{ad}_X, \operatorname{ad}_Y] = \operatorname{ad}_{[X,Y]}, which follows directly from the Jacobi identity. The kernel of \operatorname{ad} coincides with the center Z(\mathfrak{g}) = \{ Z \in \mathfrak{g} \mid [Z, Y] = 0 \ \forall Y \in \mathfrak{g} \}, measuring the abelian part of \mathfrak{g}. The image \operatorname{ad}(\mathfrak{g}) consists of the inner derivations of \mathfrak{g}; for semisimple Lie algebras the center vanishes, so \operatorname{ad} is faithful, and every derivation is inner, so \operatorname{ad}(\mathfrak{g}) = \operatorname{Der}(\mathfrak{g}). A fundamental invariant is the Killing form B(X, Y) = \operatorname{tr}(\operatorname{ad}_X \operatorname{ad}_Y), a symmetric bilinear form that is non-degenerate precisely when \mathfrak{g} is semisimple and is invariant under automorphisms of \mathfrak{g}.
A concrete example is the adjoint representation of \mathfrak{sl}(2, \mathbb{R}), the 3-dimensional Lie algebra of 2 \times 2 real trace-zero matrices with basis H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, Y = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} satisfying [H, X] = 2X, [H, Y] = -2Y, [X, Y] = H. In this basis, the matrices of \operatorname{ad}_H, \operatorname{ad}_X, \operatorname{ad}_Y are \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & 0 & 0 \end{pmatrix} respectively, yielding an irreducible 3-dimensional representation, isomorphic to the symmetric square of the defining 2-dimensional representation of \mathfrak{sl}(2, \mathbb{R}). The adjoint representation is central to the classification of semisimple Lie algebras, where for a Cartan subalgebra \mathfrak{h} \subset \mathfrak{g}, the simultaneous eigenvalues of the operators \operatorname{ad}_h (for h \in \mathfrak{h}) are the roots, linear functionals on \mathfrak{h} that define the root space decomposition \mathfrak{g} = \mathfrak{h} \oplus \bigoplus_{\alpha \in \Delta} \mathfrak{g}_\alpha, with [h, x] = \alpha(h) x for h \in \mathfrak{h} and x \in \mathfrak{g}_\alpha. This structure enables the identification of the root systems and Dynkin diagrams of the infinite families A_n, B_n, C_n, D_n and the exceptional algebras G_2, F_4, E_6, E_7, E_8. Historically, the adjoint representation was pivotal in Élie Cartan's 1894 thesis classifying the simple complex Lie algebras via the Killing form, and in Hermann Weyl's 1920s development of highest weight theory and character formulas, which resolved the complete reducibility of finite-dimensional representations and integrated root systems into the framework.
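The \mathfrak{sl}(2, \mathbb{R}) computation above can be verified mechanically (a sketch assuming NumPy, with the basis ordered (H, X, Y) as in the text):

```python
import numpy as np

# Basis of sl(2, R); compute ad matrices and the Killing form numerically.
H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
basis = [H, X, Y]

def bracket(A, B):
    return A @ B - B @ A

def coords(M):
    """Coordinates of a trace-zero 2x2 matrix in the (H, X, Y) basis."""
    return np.array([M[0, 0], M[0, 1], M[1, 0]])

def ad(A):
    """Matrix of ad_A = [A, -] in the (H, X, Y) basis (columns are images)."""
    return np.column_stack([coords(bracket(A, B)) for B in basis])

# ad is a Lie algebra homomorphism: [ad_X, ad_Y] = ad_[X,Y] = ad_H
assert np.allclose(bracket(ad(X), ad(Y)), ad(H))

# Killing form B(A, B) = trace(ad_A ad_B); for sl(2): B(H, H) = 8, B(X, Y) = 4
killing = lambda A, B: np.trace(ad(A) @ ad(B))
assert np.isclose(killing(H, H), 8) and np.isclose(killing(X, Y), 4)
```

Here `ad(H)` comes out as the diagonal matrix diag(0, 2, -2), exhibiting the roots ±2 of the root space decomposition directly.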

Adjunction in Field Theory

In field theory, an adjunction, also known as a simple extension, refers to a field extension K/F where K = F(\alpha) for some element \alpha \in K. This construction adjoins \alpha to the base field F, forming the smallest field containing both F and \alpha; when \alpha is algebraic over F, every element of K can be expressed as a polynomial in \alpha with coefficients in F. The minimal polynomial of \alpha over F is the monic irreducible polynomial m(x) \in F[x] of least degree having \alpha as a root, and its degree equals the degree of the extension [K : F]. Every element of K then satisfies a polynomial equation over F of degree at most [K : F]. Finite field extensions exhibit key properties related to adjunctions. Every finite extension is finitely generated, meaning it can be obtained by successively adjoining finitely many elements. More specifically, the primitive element theorem states that every finite separable extension K/F is simple, i.e., K = F(\theta) for some primitive element \theta \in K; in particular, this applies to all finite extensions of fields of characteristic zero and of finite fields. Examples illustrate these concepts clearly. The extension \mathbb{Q}(\sqrt{2})/\mathbb{Q} is a simple extension with minimal polynomial x^2 - 2 over \mathbb{Q}, having degree 2. Cyclotomic extensions, such as \mathbb{Q}(\zeta_n)/\mathbb{Q} where \zeta_n is a primitive nth root of unity, are simple extensions generated by \zeta_n, with the nth cyclotomic polynomial as the minimal polynomial. Separability distinguishes types of adjunctions, particularly in positive characteristic. A simple extension F(\alpha)/F is separable if the minimal polynomial of \alpha has distinct roots in an algebraic closure; otherwise, it is inseparable.
Purely inseparable adjunctions arise in characteristic p > 0, where the minimal polynomial has the form x^{p^k} - c for some c \in F and k \geq 0, so that it has a single root repeated p^k times. For instance, over F = \mathbb{F}_p(t), adjoining a pth root as in F(t^{1/p})/F yields a purely inseparable extension of degree p.
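A simple adjunction can be made computational. The sketch below (illustrative pure Python, with the class name `QSqrt2` a made-up convenience) models \mathbb{Q}(\sqrt{2}) as pairs (a, b) representing a + b\sqrt{2} and shows that every nonzero element is invertible, so the adjunction really produces a field:

```python
from fractions import Fraction

# Arithmetic in Q(sqrt(2)) = Q(alpha), where alpha^2 = 2: elements are
# a + b*alpha with a, b rational, reduced modulo the minimal polynomial x^2 - 2.
class QSqrt2:
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, o):
        # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return QSqrt2(self.a * o.a + 2 * self.b * o.b,
                      self.a * o.b + self.b * o.a)

    def inverse(self):
        # 1/(a + b√2) = (a - b√2) / (a² - 2b²); the denominator is nonzero
        # for (a, b) ≠ (0, 0) because √2 is irrational.
        n = self.a ** 2 - 2 * self.b ** 2
        return QSqrt2(self.a / n, -self.b / n)

    def __eq__(self, o):
        return (self.a, self.b) == (o.a, o.b)

alpha = QSqrt2(0, 1)
assert alpha * alpha == QSqrt2(2, 0)       # alpha satisfies x^2 - 2 = 0
x = QSqrt2(3, 5)
assert x * x.inverse() == QSqrt2(1, 0)     # every nonzero element has an inverse
```

The rationalizing trick in `inverse` is exactly division by the conjugate, mirroring how the minimal polynomial governs arithmetic in the extension.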

References

  1. [1]
    [PDF] Adjoint and Its roles in Sciences, Engineering, and Mathematics
    Jul 4, 2023 · Abstract. This paper synergizes the roles of adjoint in various disciplines of mathematics, sciences, and en- gineering.
  2. [2]
    Adjoint -- from Wolfram MathWorld
    The word adjoint has a number of related meanings. In linear algebra, it refers to the conjugate transpose and is most commonly denoted A^(H).
  3. [3]
    [PDF] Determinant and the Adjugate
    corresponding matrix elements of A. The adjugate (or classical adjoint) of A, denoted. by adj A, is defined as the transpose of the matrix of cofactors, adjA = ...
  4. [4]
    [PDF] category theory and adjunctions - UChicago Math
    Before we can define an adjoint functor we must define isomorphisms and natural transformations. Definition 2.1. For a category C, an arrow f : A → B, is an ...
  5. [5]
    Chiò's and Dodgson's determinantal identities - ScienceDirect.com
    Aug 1, 2014 · In this paper we first give the historical origins of each identity, explicitly linking Dodgson's identity to a theorem of Jacobi, and Chiò's ...
  6. [6]
  7. [7]
    Hermitian Adjoint - BOOKS
    The Hermitian adjoint of a matrix is the same as its transpose except that along with switching row and column elements you also complex conjugate all the ...Missing: adjugate | Show results with:adjugate
  8. [8]
    Hermitian Operators
    A Hermitian operator is a linear operator that is equal to its adjoint, A = A † . An equivalent way to say this is that a Hermitian operator obeys ...
  9. [9]
    [PDF] Complex inner products (6.7 supplement) - UMD MATH
    Definition A Hermitian inner product on a complex vector space V is a function that, to each pair of vectors u and v in V , associates a complex number hu, vi ...Missing: adjoint | Show results with:adjoint
  10. [10]
    [PDF] Bounded Linear Operators on a Hilbert Space - UC Davis Math
    We also prove the Riesz representation theorem, which characterizes the bounded linear functionals on a Hilbert space, and discuss weak convergence in Hilbert ...
  11. [11]
    [PDF] functional analysis lecture notes: adjoints in hilbert spaces
    CHRISTOPHER HEIL. 1. Adjoints in Hilbert Spaces. Recall that the dot product on Rn is given by x · y = xTy, while the dot product on Cn is x · y = xT ¯y. ...Missing: mathematics | Show results with:mathematics
  12. [12]
    [PDF] lecture 28: adjoints and normal operators
    By the Riesz representation theorem, we have a conjugate linear map V ∗ → V that associates to each linear functional its Riesz vector. Thus we let τ∗ = R◦t.
  13. [13]
    [PDF] ADJOINT OPERATORS Consider a Hilbert space X over a field F ...
    By Riesz' representation theorem it follows that there exists a unique element y∗ ∈ X such that ϕ(x) = hT x, yi = hx, y∗i for all x ∈ X. We thus define T∗y := ...<|separator|>
  14. [14]
    [PDF] Operator theory on Hilbert spaces
    In this chapter we define the notions of unbounded operators, their adjoint, their resol- vent and their spectrum. Perturbation theory will also be considered.
  15. [15]
    [PDF] Properties of the Fourier transform
    The Fourier transform is an isometry of L2(Rn). That is,. kfkL2(Rn x ) ... Applying this definition of the adjoint to F, take any two f,g ∈ L2. Then.
  16. [16]
    [PDF] Chapter 9: The Spectrum of Bounded Linear Operators
    In this section, we analyze the spectrum of a compact, self-adjoint operator. The spectrum consists entirely of eigenvalues, with the possible exception of zero ...
  17. [17]
    Self-adjoint operator - Encyclopedia of Mathematics
    Jun 6, 2020 · The spectrum of a self-adjoint operator is non-empty and lies on the real line. The quadratic form K(A)=⟨Ax,x⟩ generated by a self-adjoint ...
  18. [18]
    [PDF] Mathematical Quantum Mechanics with Applications
    2.3 Examples of Self-Adjoint Operators and Self-Adjointness Criteria. In this section, we give several basic examples of self-adjoint operators that play an.
  19. [19]
    Die Eindeutigkeit der Schrödingerschen Operatoren
    About this article. Cite this article. v. Neumann, J. Die Eindeutigkeit der Schrödingerschen Operatoren. Math. Ann. 104, 570–578 (1931). https://doi.org ...
  20. [20]
    [PDF] maclane-categories.pdf - MIT Mathematics
    ... Mac Lane. Categories for the. Working Mathematician. Second Edition. Springer. Page 4. Saunders Mac Lane. Professor Emeritus. Department of Mathematics.
  21. [21]
    [math/0009004] Higher fundamental functors for simplicial sets - arXiv
    Sep 18, 2001 · As a crucial advantage, the fundamental groupoid functor !Smp --> Gpd is left adjoint to a natural functor Gpd --> !Smp, the symmetric nerve ...
  22. [22]
    [PDF] Monads for functional programming - The University of Edinburgh
    Abstract. The use of monads to structure functional programs is de- scribed. Monads provide a convenient framework for simulating effects.
  23. [23]
    [PDF] categorical monadicity and descent
    Dec 1, 2016 · Descent theory plays an important role in algebraic geometry, as well as in the plethora of fields which draw upon its technology. Motivated.
  24. [24]
    None
    ### Summary of Adjoint Representation, Cartan, and Weyl's Contributions to Lie Algebras and Root Systems
  25.
    [PDF] Topics in Representation Theory: The Adjoint Representation 1 The ...
    So associated to Ad(G), the adjoint representation of the Lie group G on g, taking the derivative we have ad(g), a Lie algebra representation of g on itself.
  26.
    5.2 Representations of Lie algebras and the adjoint representation
    A representation of a Lie algebra is simply a mapping that represents the elements of g as matrices, with the bracket [⋅,⋅] realized as commutator ...
  27.
    The Adjoint Representation - BOOKS
    5.1 The Adjoint Representation. Any Lie algebra acts on itself via commutators. This action is linear, so we can represent it using matrices. Consider first ...
  28.
    5.3 The adjoint representation
    The first formula, ad_X(Y) = [X, Y], could have been used to define the adjoint representation for any Lie algebra, without reference to Lie groups.
  29.
    [PDF] Representation Theory
    The adjoint representation of a Lie group is a measure of the non-commutativity of the group. Definition. An automorphism of a Lie group G is a map φ : G → G ...
  30.
    [PDF] Lie Groups and Algebras 1 Intro 2 The Adjoint Representation and ...
    Jan 6, 2016 · 2 The Adjoint Representation and the Killing Form. Definition 2.1. A Lie algebra of dimension d is specified by a set of d generators Ti closed ...
  31.
    [PDF] Representations of sl(2, C)
    Jan 13, 2021 · Another example: The adjoint representation. The adjoint map ad : g → End(g) defined by ad(x)(z) := [x, z] is a ...
  32.
    [PDF] representations of semisimple lie algebras - UChicago Math
    Aug 26, 2011 · We develop and utilize various tools, including the adjoint representation, the Killing form, root space decomposition, and the Weyl group to ...
  33.
    field adjunction - PlanetMath
    Feb 21, 2015 · Field adjunction is obtaining a field K(α) from K by adjoining α, or K(S) by adjoining a set S to K.
  34.
    [PDF] 29 Extension Fields - UCI Mathematics
    This is an example of a simple extension, where we adjoin a single element to a given field and use the field operations to produce as many new elements as ...
  35.
    [PDF] Lecture 6 - Math 5111 (Algebra 1)
    A simple extension F(α)/F is algebraic if and only if it has finite degree. Furthermore, if [F(α) : F] = n, then every element in F(α) satisfies a nonzero ...
  36.
    [PDF] The Primitive Element Theorem.
    The Primitive Element Theorem. Assume that F and K are subfields of C and that K/F is a finite extension. Then K = F(θ) for some element θ in K.
  37.
    [PDF] Mathematics 6310 The Primitive Element Theorem Ken Brown ...
    Given a field extension K/F, an element α ∈ K is said to be separable over F if it is algebraic over F and its minimal polynomial over F is separable. Recall ...
  38.
    [PDF] primitive element theorem and normal basis theorem - OSU Math
    For a field extension L/K, a primitive element is any algebraic α ∈ L such that L = K(α). A necessary condition for the existence of such an element is that ...
  39.
    [PDF] cyclotomic extensions - keith conrad
    For a field K, an extension of the form K(ζ), where ζ is a root of unity, is called a cyclotomic extension of K. The term cyclotomic means “circle-dividing,” ...
  40.
    [PDF] Lecture 9 - Math 5111 (Algebra 1)
    The extension K/F is purely inseparable if and only if the minimal polynomial of each α ∈ K over F is of the form m_α(x) = x^(p^k) − d for some k ≥ 0 and some d ∈ F ...
  41.
    [PDF] Purely inseparable field extensions - Cornell Mathematics
    May 21, 2013 · Basic Definitions. Throughout, k be a field of characteristic p > 0. A finite extension K/k of fields is purely inseparable if for every α.
  42.
    [PDF] Math 210B. Inseparable extensions
    We claim that E is the unique field strictly between L and k, so L/k cannot be expressed as a tower of a separable extension on top of a purely inseparable one!