In functional analysis, a self-adjoint operator on a Hilbert space is a densely defined linear operator that equals its own adjoint, meaning it satisfies H = H^* with identical domains D(H) = D(H^*).[1] This condition ensures that for all vectors f, g in the domain, the inner product obeys \langle Hf, g \rangle = \langle f, Hg \rangle.[2] Self-adjoint operators are closed and form a fundamental class of operators in operator theory, generalizing Hermitian matrices in finite dimensions, where A = A^* means the matrix equals its conjugate transpose.[2]

Key properties of self-adjoint operators include the reality of their spectrum: all eigenvalues are real numbers.[3] Eigenvectors corresponding to distinct eigenvalues are orthogonal, and the operator admits a spectral decomposition via the spectral theorem.[3] For a complex number c with nonzero imaginary part \mu, the resolvent (cI - H)^{-1} exists, is bounded, and satisfies \|(cI - H)^{-1}\| \leq 1/|\mu|, highlighting the operator's stability away from the real axis.[1] Symmetric operators, which satisfy the inner product condition but may have larger adjoint domains, can be extended to self-adjoint operators under certain conditions, such as essential self-adjointness.[1]

Self-adjoint operators are central to quantum mechanics, where physical observables such as position, momentum, and energy are represented by such operators on the Hilbert space of quantum states.[4] This representation guarantees real eigenvalues, interpretable as possible measurement outcomes, while non-commuting operators represent incompatible observables that cannot be simultaneously diagonalized in a common orthonormal basis.[4] Examples include the momentum operator -i\hbar \frac{d}{dx} on L^2(\mathbb{R}), which is essentially self-adjoint, and the Hamiltonian of the Schrödinger equation, which, depending on the potential and boundary conditions, may require a self-adjoint extension to ensure well-defined dynamics.[5] Their role extends to spectral theory and the study of differential equations, such as the Laplacian on bounded domains with appropriate boundary conditions.[3]
Basics
Definition
In the context of functional analysis, an inner product space over the complex numbers is a vector space equipped with a sesquilinear form ⟨⋅,⋅⟩ that is linear in the first argument, conjugate-linear in the second, positive definite (⟨x,x⟩ > 0 for x ≠ 0), and conjugate symmetric (⟨x,y⟩ = \overline{⟨y,x⟩}).[6] Such spaces provide the foundational structure for defining adjoints and self-adjointness, and are typically completed to Hilbert spaces when dealing with unbounded operators.[1]

For a linear operator A defined on a dense subspace D(A) of a Hilbert space H, the adjoint A^* is the operator satisfying ⟨A x, y⟩ = ⟨x, A^* y⟩ for all x ∈ D(A) and y ∈ D(A^*), where D(A^*) consists of those y ∈ H for which the map x ↦ ⟨A x, y⟩ is bounded.[1] An operator A is self-adjoint if A = A^*, meaning D(A) = D(A^*) and A x = A^* x for all x ∈ D(A).[1] This equality ensures that A interacts with the inner product in a symmetric manner.

Self-adjoint operators are always symmetric, satisfying ⟨A x, y⟩ = ⟨x, A y⟩ for all x, y ∈ D(A), but the converse does not hold in infinite-dimensional spaces, where symmetric operators may have proper extensions to self-adjoint ones.[1] In finite dimensions, however, symmetry and self-adjointness coincide for bounded operators represented by Hermitian matrices.[2]
Adjoint operator
In an inner product space equipped with an inner product \langle \cdot, \cdot \rangle, the adjoint A^* of a linear operator A is defined as the unique operator satisfying \langle Ax, y \rangle = \langle x, A^* y \rangle for all x, y in the appropriate domains.[2] This definition captures the formal duality between A and A^* with respect to the inner product, preserving the sesquilinear structure.[7]

On Hilbert spaces, the existence of the adjoint depends on the operator's boundedness and domain properties. For bounded (continuous) linear operators defined on the entire space, the adjoint exists and is also bounded, uniquely determined by the Riesz representation theorem applied to the functional x \mapsto \langle Ax, y \rangle.[2] For unbounded operators, the adjoint exists provided the domain of A is dense in the Hilbert space; in this case, the domain of A^* consists of all y such that the map x \mapsto \langle Ax, y \rangle is continuous with respect to the inner product, and A^* may be unbounded with a potentially larger domain.[8]

The adjoint operator satisfies several fundamental properties that mirror algebraic structures. The double adjoint recovers the original operator: (A^*)^* = A, under the assumption that A is densely defined and closed.[2] For composition, if A and B are bounded operators then (AB)^* = B^* A^*; for unbounded operators with AB densely defined, one has in general only the inclusion B^* A^* \subseteq (AB)^*, with equality under additional assumptions such as A being bounded.[2] Additionally, for a scalar c \in \mathbb{C}, (cA)^* = \bar{c} A^*, reflecting the antilinear nature of the adjoint with respect to complex conjugation.[2]

In the finite-dimensional setting, where the Hilbert space is \mathbb{C}^n with the standard inner product, the adjoint of a matrix A is explicitly given by the conjugate transpose A^* = \overline{A}^T.[2] Self-adjoint operators arise as the special case where A = A^*.[9]
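In the finite-dimensional case these algebraic identities can be checked directly, since the adjoint is the conjugate transpose. A minimal NumPy sketch (the random matrices and the helper names `adj` and `inner` are illustrative assumptions, not notation from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
c = 2 - 3j

adj = lambda M: M.conj().T           # adjoint = conjugate transpose on C^n
inner = lambda u, v: np.vdot(v, u)   # <u, v>, linear in the first argument

assert np.allclose(adj(adj(A)), A)                        # (A*)* = A
assert np.allclose(adj(A @ B), adj(B) @ adj(A))           # (AB)* = B* A*
assert np.allclose(adj(c * A), np.conj(c) * adj(A))       # (cA)* = conj(c) A*
assert np.isclose(inner(A @ x, y), inner(x, adj(A) @ y))  # defining identity
```

Note that `np.vdot` conjugates its first argument, so `np.vdot(v, u)` matches the convention used in this article (linear in the first slot, conjugate-linear in the second).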
Examples
Finite-dimensional examples
In finite-dimensional Hilbert spaces, self-adjoint operators on \mathbb{C}^n with the standard inner product are precisely those represented by Hermitian matrices, where a matrix A satisfies A = A^\dagger and A^\dagger denotes the conjugate transpose.[10] This equivalence holds because the matrix entries ensure \langle Ax, y \rangle = \langle x, Ay \rangle for all x, y \in \mathbb{C}^n.[11]

Over the real numbers, self-adjoint operators on \mathbb{R}^n correspond to real symmetric matrices, satisfying A = A^T where A^T is the transpose, as the absence of complex conjugation reduces the adjoint to the transpose alone.[10]

Diagonal matrices with real entries provide a basic illustration of self-adjointness in both settings. For instance, consider D = \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}; its conjugate transpose is D^\dagger = \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}, matching D since the diagonal elements are real and the off-diagonal entries are zero.[12]

The Pauli matrices offer another concrete example in the context of quantum mechanics, where they represent self-adjoint operators on \mathbb{C}^2 for spin-1/2 systems. They are defined as

\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

Direct verification confirms self-adjointness: \sigma_x^\dagger = \sigma_x, as it is real and symmetric; \sigma_y^\dagger = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}^T = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \sigma_y; and \sigma_z^\dagger = \sigma_z due to real diagonal entries.[13]
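The self-adjointness of the Pauli matrices, and the reality of their spectra, can also be verified numerically; a short NumPy check:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

for s in (sigma_x, sigma_y, sigma_z):
    assert np.allclose(s, s.conj().T)                   # s equals its adjoint
    assert np.allclose(np.linalg.eigvalsh(s), [-1, 1])  # real spectrum {-1, +1}
```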
Infinite-dimensional examples
In infinite-dimensional Hilbert spaces, self-adjoint operators frequently appear as unbounded operators, so their domains must be specified precisely. Unlike the finite-dimensional case, where self-adjointness reduces to a symmetry condition on matrices, infinite-dimensional examples typically involve differential or multiplication operators on L^2 spaces over intervals or the real line, with domains chosen so that \langle Af, g \rangle = \langle f, Ag \rangle holds for all f, g in the domain. These operators play crucial roles in quantum mechanics and spectral theory, where self-adjointness guarantees real spectra and unitary evolution.[5]

A fundamental example of a bounded self-adjoint operator is the multiplication operator on L^2(X, \mu), where (X, \mathcal{F}, \mu) is a \sigma-finite measure space. For a real-valued essentially bounded measurable function m \in L^\infty(X, \mu), the operator M_m is defined by (M_m f)(x) = m(x) f(x) for f \in L^2(X, \mu), with domain all of L^2(X, \mu). This operator is self-adjoint because its adjoint is multiplication by the complex conjugate \overline{m}, which coincides with M_m since m is real-valued. Multiplication operators are the measure-theoretic analogue of diagonal matrices, and by the spectral theorem every self-adjoint operator is unitarily equivalent to one; they are essential for representing observables in quantum systems.[1][14]

Unbounded self-adjoint multiplication operators also arise, exemplified by the position operator in quantum mechanics on L^2(\mathbb{R}, dx). Defined by (Q f)(x) = x f(x) with domain D(Q) = \{ f \in L^2(\mathbb{R}) \mid x f(x) \in L^2(\mathbb{R}) \}, this operator is self-adjoint because multiplication by the real function x satisfies the required symmetry on this maximal domain. The position operator corresponds to the classical position observable and, together with the momentum operator, satisfies the canonical commutation relations fundamental to quantum theory.[5][15]

Differential operators provide another key class of self-adjoint examples, such as the negative second derivative A = -\frac{d^2}{dx^2} on L^2[0,1] with appropriate boundary conditions. For Dirichlet boundary conditions, the domain is D(A) = \{ f \in H^2[0,1] \mid f(0) = f(1) = 0 \}, where H^2[0,1] is the Sobolev space of L^2 functions whose first and second weak derivatives also lie in L^2[0,1]. This choice ensures self-adjointness by making the boundary terms in the integration-by-parts formula vanish, yielding \langle A f, g \rangle = \langle f, A g \rangle. This operator is the one-dimensional Dirichlet Laplacian (up to sign); it models the vibrations of a string fixed at both endpoints and has a discrete spectrum of positive eigenvalues (n\pi)^2, n = 1, 2, \ldots.[16][17]

In quantum mechanics, the momentum operator P = -i \hbar \frac{d}{dx} on L^2(\mathbb{R}, dx) is self-adjoint when defined on the Sobolev space D(P) = H^1(\mathbb{R}) = \{ f \in L^2(\mathbb{R}) \mid f' \in L^2(\mathbb{R}) \}, with \hbar the reduced Planck constant. On this domain the adjoint coincides with P itself, and P is essentially self-adjoint on the dense subspace of smooth compactly supported functions. The operator represents the classical momentum and generates translations in position space, with continuous spectrum equal to all of \mathbb{R}.[18][19]
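Although the operator -\frac{d^2}{dx^2} itself is unbounded, its Dirichlet realization can be illustrated by a standard finite-difference discretization, whose matrix is symmetric and whose lowest eigenvalues approximate (k\pi)^2. This is a numerical sketch of the discretized operator, not the operator itself; the grid size is an arbitrary choice:

```python
import numpy as np

# Finite-difference sketch of A = -d^2/dx^2 on [0,1] with Dirichlet
# conditions f(0) = f(1) = 0.
n = 200                               # number of interior grid points
h = 1.0 / (n + 1)                     # mesh width
A = (2.0 * np.eye(n)
     - np.eye(n, k=1)
     - np.eye(n, k=-1)) / h**2        # tridiagonal second-difference matrix

assert np.allclose(A, A.T)            # the discretization is symmetric
evals = np.linalg.eigvalsh(A)         # real eigenvalues, ascending
# The lowest eigenvalues approximate (k*pi)^2 for k = 1, 2, 3, ...
assert abs(evals[0] - np.pi**2) < 1e-2
```

Refining the grid (larger `n`) drives the low eigenvalues toward the exact values (k\pi)^2 of the continuous problem.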
Criteria
Finite-dimensional criteria
In finite-dimensional complex Hilbert spaces, a linear operator is self-adjoint if and only if its matrix representation with respect to an orthonormal basis equals its conjugate transpose, denoted A = A^\dagger, where A^\dagger = \overline{A}^T is the conjugate transpose of A.[11][20] This condition ensures that the operator satisfies \langle Av, w \rangle = \langle v, Aw \rangle for all vectors v, w in the space.[21]

For real finite-dimensional Hilbert spaces, the self-adjoint criterion simplifies to the matrix being symmetric, meaning A = A^T, where A^T is the transpose of A.[11][20] This follows because the conjugate transpose reduces to the ordinary transpose over the reals, without complex conjugation.[21]

To verify self-adjointness computationally, examine the matrix entries: a matrix A = (a_{ij}) is self-adjoint if and only if a_{ij} = \overline{a_{ji}} for all indices i, j.[11][20] The diagonal elements must be real, since a_{ii} = \overline{a_{ii}}, while off-diagonal elements occur in conjugate pairs.[21]

A key implication of self-adjointness in finite dimensions is that all eigenvalues of the matrix are real numbers, though the proof of this property is addressed elsewhere.[11][20] This real spectrum distinguishes self-adjoint matrices, often called Hermitian matrices, from more general ones.[21]
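The entrywise criterion translates directly into code; a small NumPy sketch (the helper name `is_self_adjoint` and the sample matrices are illustrative choices):

```python
import numpy as np

def is_self_adjoint(A, tol=1e-12):
    """Entrywise criterion: a_ij == conj(a_ji), i.e. A equals its
    conjugate transpose."""
    A = np.asarray(A)
    return bool(np.allclose(A, A.conj().T, atol=tol))

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, -3.0]])   # real diagonal, conjugate off-diagonal pair
N = np.array([[2.0, 1j],
              [1j, -3.0]])       # a_12 != conj(a_21), so not self-adjoint

assert is_self_adjoint(H) and not is_self_adjoint(N)
assert np.allclose(np.linalg.eigvals(H).imag, 0)   # Hermitian => real spectrum
```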
Infinite-dimensional criteria
In infinite-dimensional Hilbert spaces, determining self-adjointness of densely defined operators requires attention to domain issues, as operators may be unbounded. A linear operator A: D(A) \to \mathcal{H} on a Hilbert space \mathcal{H} is symmetric if it satisfies \langle Ax, y \rangle = \langle x, Ay \rangle for all x, y \in D(A), where D(A) is a dense subspace of \mathcal{H}. Symmetric operators are always closable, and their closures remain symmetric.

An operator A is self-adjoint if it is symmetric and equals its adjoint A^*, meaning D(A) = D(A^*) and Ax = A^* x for all x \in D(A). The adjoint A^* is always closed, and for symmetric A the inclusion A \subset A^* holds; equality of domains is the condition distinguishing self-adjointness from mere symmetry.

For a closed symmetric operator A, self-adjointness can be characterized using the deficiency indices n_+ = \dim \ker(A^* - iI) and n_- = \dim \ker(A^* + iI), which equal the codimensions of the (closed) ranges of A + iI and A - iI, respectively. The operator A is self-adjoint if and only if both indices vanish, i.e., (n_+, n_-) = (0,0). If n_+ = n_- > 0, self-adjoint extensions exist and are parametrized by the unitary operators between the two deficiency subspaces.

For semi-bounded symmetric operators (those with \langle Ax, x \rangle \geq c \|x\|^2 for some c \in \mathbb{R} and all x \in D(A)), canonical self-adjoint extensions preserve the semiboundedness. The Friedrichs extension is constructed by completing D(A) in the form norm \|x\|_A = \sqrt{\langle Ax, x \rangle + (1 - c)\|x\|^2}, which dominates \|x\|; the resulting self-adjoint operator has as its quadratic form the closure of the form of A. Dually, the Krein-von Neumann extension (for positive operators) is the minimal self-adjoint extension in the ordering of quadratic forms, obtained from the Friedrichs extension of the inverse operator on the orthogonal complement of the kernel. These constructions guarantee at least one self-adjoint realization, since the deficiency indices of a semi-bounded symmetric operator are automatically equal.
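The deficiency-index criterion is often illustrated by the momentum operator on a half-line versus an interval; a standard worked example (sketched under the usual choice of initial domain C_c^\infty):

```latex
\paragraph{Worked example.} For $A = -i\,\tfrac{d}{dx}$ on $L^2(0,\infty)$ with
$D(A) = C_c^\infty(0,\infty)$, the adjoint acts as $A^* f = -i f'$, so
\[
  A^* f = i f \;\Longrightarrow\; f(x) = C e^{-x} \in L^2(0,\infty), \qquad
  A^* f = -i f \;\Longrightarrow\; f(x) = C e^{x} \notin L^2(0,\infty),
\]
giving $(n_+, n_-) = (1, 0)$: this symmetric operator admits no self-adjoint
extension. On $L^2(0,1)$ both exponentials are square-integrable, so
$(n_+, n_-) = (1, 1)$ and the self-adjoint extensions form a $U(1)$ family,
realized by the boundary conditions $f(1) = e^{i\theta} f(0)$.
```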
Properties
General properties
A self-adjoint operator on a Hilbert space is normal, since A A^* = A^2 = A^* A.[2]

For any eigenvalue \lambda of a self-adjoint operator A, \lambda is real. To see this, suppose A v = \lambda v for some nonzero v \in H. Then \lambda \|v\|^2 = \langle A v, v \rangle = \langle v, A^* v \rangle = \langle v, A v \rangle = \langle v, \lambda v \rangle = \bar{\lambda} \|v\|^2, so \lambda = \bar{\lambda}.[22]

Moreover, if v_1 and v_2 are eigenvectors of A corresponding to distinct eigenvalues \lambda_1 \neq \lambda_2, then v_1 and v_2 are orthogonal. Indeed, \langle A v_1, v_2 \rangle = \lambda_1 \langle v_1, v_2 \rangle and \langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle = \bar{\lambda}_2 \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle since \lambda_2 is real, so (\lambda_1 - \lambda_2) \langle v_1, v_2 \rangle = 0, which forces \langle v_1, v_2 \rangle = 0.[9]

The operator norm of a bounded self-adjoint operator A equals the supremum of the absolute values of its quadratic form on the unit sphere: \|A\| = \sup_{\|x\|=1} |\langle A x, x \rangle|. This follows from the fact that \langle A x, x \rangle is real for all x, combined with the Cauchy-Schwarz inequality applied to the sesquilinear form.[2]

Self-adjoint operators on Hilbert spaces are closed, meaning their graphs are closed subsets of H \times H. This holds because the adjoint A^* of any densely defined operator is closed, and self-adjointness requires A = A^* with equal domains.[23]
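These properties can be observed on a generic Hermitian matrix; a NumPy sketch (the random symmetrization is an illustrative construction):

```python
import numpy as np

# Build a generic self-adjoint matrix by symmetrizing a random complex one.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2                        # A = A* by construction

evals, V = np.linalg.eigh(A)                    # real eigenvalues, ascending
assert np.allclose(evals.imag, 0)               # spectrum is real
assert np.allclose(V.conj().T @ V, np.eye(4))   # eigenvectors orthonormal

# ||A|| = sup_{||x||=1} |<Ax, x>| = max |lambda| for self-adjoint A
assert np.isclose(np.linalg.norm(A, 2), np.abs(evals).max())
```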
Spectral properties
The spectrum of a self-adjoint operator A on a Hilbert space is a nonempty closed subset of the real numbers \mathbb{R}.[24] This follows from the fact that every nonreal complex number lies in the resolvent set, so no nonreal point belongs to the spectrum, while closedness follows from the continuity of the resolvent function.[25]

For a bounded self-adjoint operator, the spectral radius r(A), defined as the supremum of |\lambda| over \lambda \in \sigma(A), equals the operator norm \|A\|.[24] This equality holds because the norm equals the supremum of |\langle Ax, x \rangle| over unit vectors x, which coincides with the largest absolute value in the spectrum.[26]

The spectral theorem for bounded self-adjoint operators states that A can be represented as A = \int_{\sigma(A)} \lambda \, dE(\lambda), where E is a unique spectral measure (resolution of the identity) on the Borel subsets of \mathbb{R} such that E(\mathbb{R}) = I and the integral is understood in the strong operator topology.[27] This decomposition allows a functional calculus, in which functions of A are defined via f(A) = \int f(\lambda) \, dE(\lambda).[27]

For unbounded self-adjoint operators, the spectral theorem extends in the same integral form: A = \int_{\mathbb{R}} \lambda \, dE(\lambda), where the domain of A consists of the vectors x for which \int_{\mathbb{R}} |\lambda|^2 \, d\langle E(\lambda)x, x \rangle < \infty, and the spectral measure E satisfies the same projection-valued properties.[28] This representation can be constructed from the resolvent: Stone's formula recovers the spectral projections from boundary values of the resolvent near the real axis, and alternatively the Cayley transform reduces the unbounded case to that of unitary operators.[29]

The spectrum \sigma(A) of a self-adjoint operator decomposes into the point spectrum \sigma_p(A), consisting of eigenvalues (where A - \lambda I is not injective), and the continuous spectrum \sigma_c(A), comprising approximate eigenvalues that are not true eigenvalues; the residual spectrum is empty, so \sigma(A) = \sigma_p(A) \cup \sigma_c(A).[24] On a separable Hilbert space the point spectrum is at most countable, and it can embed within the continuous spectrum, as seen in Schrödinger operators where bound states (point spectrum) may lie below a continuous band.[25] The continuous spectrum often corresponds to scattering states or essential spectrum components.[30]
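In finite dimensions the spectral measure reduces to orthogonal projections onto eigenspaces, and the functional calculus f(A) = \int f(\lambda)\,dE(\lambda) becomes a finite sum over eigenvalues. A NumPy sketch (the helper name `func_calc` is an illustrative choice):

```python
import numpy as np

def func_calc(A, f):
    """f(A) = U diag(f(lambda)) U* for self-adjoint A = U diag(lambda) U*,
    the finite-dimensional form of f(A) = integral of f dE."""
    evals, U = np.linalg.eigh(A)
    return (U * f(evals)) @ U.conj().T    # same as U @ diag(f(evals)) @ U*

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                # self-adjoint, spectrum {1, 3}
sq = func_calc(A, np.sqrt)                # the positive square root of A
assert np.allclose(sq @ sq, A)
assert np.allclose(func_calc(A, lambda t: t**2), A @ A)   # consistency check
```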
Advanced contexts
In *-algebras
A *-algebra is an associative algebra over the complex numbers equipped with an involution operation *, which is an antilinear anti-automorphism satisfying (ab)^* = b^* a^*, (a^*)^* = a, and (\lambda a)^* = \bar{\lambda} a^* for all elements a, b in the algebra and scalars \lambda \in \mathbb{C}.[31]

An element a in a *-algebra is self-adjoint if it equals its own involution, i.e., a = a^*.[31] The self-adjoint elements form a real vector space, since sums and real scalar multiples of self-adjoint elements are again self-adjoint.[31]

In any *-representation \pi of the *-algebra on a Hilbert space, the image \pi(a) of a self-adjoint element a is a self-adjoint operator, and its spectrum lies in the real numbers.[31] This property ensures that the spectrum remains real across faithful representations, reflecting the algebraic structure without reference to norms.[31]

The unital *-subalgebra generated by a single self-adjoint element a consists of all polynomials \sum c_k a^k with complex coefficients c_k; it is closed under the involution, since (p(a))^* = \bar{p}(a) for the polynomial \bar{p} with conjugated coefficients, and it is commutative, because powers of a commute.[31]

Self-adjoint elements play a foundational role in the GNS construction for *-algebras: a positive linear functional \phi on the algebra satisfies \phi(a^* a) \geq 0 for all a and takes real values on self-adjoint elements, and it yields a *-representation on a pre-Hilbert space whose completion underlies the representation theory of C*-algebras.
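The identity (p(a))^* = \bar{p}(a) can be illustrated in the *-algebra of n×n complex matrices, a finite-dimensional stand-in; the matrix `a`, the coefficient list, and the helper `poly` are illustrative choices:

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [2.0, -1.0]])                     # a = a^*
coeffs = [1 + 2j, -3j, 0.5]                     # p(t) = (1+2j) - 3j t + 0.5 t^2

def poly(M, cs):
    """Evaluate sum_k cs[k] * M^k."""
    out = np.zeros_like(M, dtype=complex)
    P = np.eye(len(M), dtype=complex)
    for c in cs:
        out += c * P
        P = P @ M
    return out

lhs = poly(a, coeffs).conj().T                  # (p(a))^*
rhs = poly(a, [np.conj(c) for c in coeffs])     # pbar(a), conjugated coeffs
assert np.allclose(lhs, rhs)
```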
In C*-algebras
A C*-algebra is a complex Banach *-algebra A whose norm \lVert \cdot \rVert satisfies the C*-identity \lVert a^* a \rVert = \lVert a \rVert^2 for all a \in A.[32] This norm condition ties the involution * to the topology and distinguishes C*-algebras from general Banach *-algebras. Self-adjoint elements of a C*-algebra are those a \in A satisfying a = a^*, and they form a norm-closed real subspace.[32]

The C*-subalgebra generated by a self-adjoint element a (and the unit, if A is unital) is commutative and *-isomorphic to C(\sigma(a)), the C*-algebra of continuous complex-valued functions on the spectrum \sigma(a) \subseteq \mathbb{R}.[33] This isomorphism arises from the continuous functional calculus for normal elements, where polynomials in a and a^* (which coincide for self-adjoint a) densely approximate the continuous functions on the spectrum. By the commutative version of the Gelfand-Naimark theorem, every commutative C*-algebra is *-isomorphic to C_0(X) for a locally compact Hausdorff space X, and self-adjoint elements correspond to continuous real-valued functions on X.[34]

An element a \in A is positive if a = b^* b for some b \in A; for self-adjoint a, this is equivalent to \sigma(a) \subseteq [0, \infty).[35] Positive elements induce a partial order on the self-adjoint elements via a \leq b if b - a \geq 0.

C*-algebras admit approximate units: nets (u_\lambda) such that \lVert a u_\lambda - a \rVert \to 0 and \lVert u_\lambda a - a \rVert \to 0 for all a \in A, and such units can be chosen among the positive self-adjoint elements of norm at most 1.[36]

Every closed two-sided ideal I in a C*-algebra is self-adjoint, meaning a \in I implies a^* \in I, and it is itself a C*-algebra with the restricted norm.[36] Moreover, I is generated as a closed ideal by its positive elements, since every element of I can be approximated by finite sums of terms of the form a^* h b with h \in I positive and a, b \in A, reflecting the role of self-adjoint elements in spanning the algebra.[36]
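In the C*-algebra of n×n complex matrices, positivity of b^* b and the induced order on self-adjoint elements can be checked directly; a small NumPy sketch (the random matrix B is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B.conj().T @ B                    # a = b* b is positive by construction

assert np.allclose(A, A.conj().T)                 # self-adjoint
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)    # spectrum in [0, inf)

# The induced order on self-adjoint elements: a <= c iff c - a >= 0.
C = A + np.eye(3)                     # C - A = I is positive, so A <= C
assert np.all(np.linalg.eigvalsh(C - A) >= 0)
```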