
Invariant subspace

In linear algebra, an invariant subspace of a linear transformation T: V \to V on a vector space V is a subspace W \subseteq V such that T(W) \subseteq W, meaning the transformation maps W into itself. This concept captures how T preserves the structure of certain subspaces, allowing for the decomposition of V into simpler components under T. Trivial examples include the zero subspace \{0\} and the entire space V, while non-trivial invariant subspaces, such as eigenspaces, reveal the structure of T. In finite-dimensional spaces over algebraically closed fields like the complex numbers, every linear operator on a space of dimension at least 2 admits non-trivial invariant subspaces, and the full collection of these subspaces determines the Jordan canonical form of the operator's matrix representation. Eigenspaces corresponding to eigenvalues \lambda are invariant, as are generalized eigenspaces and kernels of powers of (T - \lambda I), enabling the block-diagonal decomposition that simplifies computations like powers of matrices or solvability of linear systems. These subspaces facilitate the study of nilpotency, diagonalizability, and minimal polynomials, forming the backbone of matrix theory and applications in differential equations and Markov chains. In the broader context of functional analysis, invariant subspaces extend to bounded linear operators on infinite-dimensional Banach or Hilbert spaces, where they play a crucial role in operator theory and spectral analysis. For compact operators, non-trivial closed invariant subspaces always exist, but the general invariant subspace problem—asking whether every bounded operator on a separable complex Hilbert space has a non-trivial closed invariant subspace—remains unsolved, despite counterexamples in Banach space settings by Enflo (1987) and Read (1985).
This problem, posed by von Neumann in the 1930s, underscores deep connections to complex analysis, as seen in Beurling's 1949 characterization of invariant subspaces for multiplication operators on Hardy spaces. Advances, such as Lomonosov's 1973 theorem guaranteeing invariant subspaces for operators commuting with non-zero compacts, highlight ongoing research into operator classifications and universal models.

Fundamentals

Definition and Basic Properties

In linear algebra, the study of invariant subspaces begins with the foundational concepts of vector spaces and linear operators. Let V be a vector space over a field \mathbb{F}, and let T: V \to V be a linear operator. A subspace W \subseteq V is a subset that is closed under addition and scalar multiplication by elements of \mathbb{F}, containing the zero vector. A subspace W of V is said to be invariant under T if T(W) \subseteq W, meaning that for every w \in W, T(w) \in W. This condition ensures that the action of T does not map elements of W outside of W. Equivalently, W is invariant under T if the restriction T|_W: W \to W, defined by T|_W(w) = T(w) for w \in W, is a well-defined linear operator on W. In terms of matrix representations, if \{v_1, \dots, v_k\} is a basis for W extended to a basis \{v_1, \dots, v_k, v_{k+1}, \dots, v_n\} for V, then the matrix of T with respect to this basis has a block upper triangular form: \begin{pmatrix} A & B \\ 0 & C \end{pmatrix}, where A is the k \times k matrix representing T|_W, the zero block reflects the invariance, and B, C account for the action on the complement. The invariance property preserves the subspace structure of W, as T(W) is itself a subspace contained within W, maintaining closure under addition and scalar multiplication. The collection of all invariant subspaces under T forms a lattice under inclusion, where the meet and join operations correspond to intersection and the span of the union, respectively; however, this lattice is not necessarily a Boolean algebra unless T has additional structure. Trivial invariant subspaces always exist: the zero subspace \{0\}, since T(0) = 0, and the entire space V, since T(V) \subseteq V. Nontrivial examples include the kernel \ker(T) = \{ v \in V \mid T(v) = 0 \}, as T(\ker(T)) = \{0\} \subseteq \ker(T), and the image \operatorname{im}(T) = \{ T(v) \mid v \in V \}, as T(\operatorname{im}(T)) \subseteq \operatorname{im}(T).
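The invariance condition T(W) \subseteq W and the resulting block upper triangular form can be checked numerically. The sketch below (using NumPy, with a hypothetical 3×3 matrix and a helper `is_invariant` invented for this illustration) verifies that W = \operatorname{span}\{e_1, e_2\} is invariant because the lower-left block of T is zero:

```python
import numpy as np

# Hypothetical 3x3 operator chosen so that W = span{e1, e2} is invariant:
# the (3,1) and (3,2) entries are zero, so T maps W into itself.
T = np.array([[2.0, 1.0, 5.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 7.0]])

W_basis = np.eye(3)[:, :2]            # columns e1, e2 spanning W

def is_invariant(T, basis, tol=1e-12):
    """Check T(W) <= W: each column of T @ basis must lie in the span of basis."""
    P = basis @ np.linalg.pinv(basis)  # projector onto the column span of basis
    image = T @ basis
    return np.allclose(P @ image, image, atol=tol)

print(is_invariant(T, W_basis))        # True: the zero block reflects the invariance
```

Swapping in a matrix with a nonzero lower-left block (so some T(w) leaves W) makes the same check return False.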

Finite vs. Infinite Dimensions

In finite-dimensional vector spaces over the complex numbers, every linear operator admits at least one nontrivial invariant subspace (provided the space has dimension at least 2). This result stems from the fact that the characteristic polynomial of the operator always has a root in the complex numbers, yielding an eigenvector and thus a one-dimensional invariant subspace. More comprehensively, the Jordan canonical form decomposes the space into a direct sum of generalized eigenspaces, each of which is invariant under the operator. This finite-dimensional theory traces back to the work of Camille Jordan in the 1870s, particularly his development of canonical forms for linear transformations. In infinite-dimensional settings, such as Banach or Hilbert spaces, no analogous general existence theorem holds for arbitrary bounded linear operators. Counterexamples exist in separable Banach spaces, where Per Enflo constructed a bounded operator without any nontrivial closed invariant subspaces, highlighting pathologies unique to infinite dimensions. For Hilbert spaces, the invariant subspace problem—whether every bounded operator has a nontrivial closed invariant subspace—remains unresolved, though specific classes like normal operators do possess them via the spectral theorem. A key distinction lies in the structural decompositions available: finite dimensions permit a complete decomposition into invariant generalized eigenspaces, enabling full classification of the operator up to similarity. In infinite dimensions, such decompositions require extra conditions; for instance, the spectral theorem for normal operators on Hilbert spaces yields a decomposition into invariant subspaces corresponding to Borel subsets of the spectrum, but general operators may evade this without additional structure such as compactness, which guarantees nontrivial closed invariant subspaces. These differences underscore how finite-dimensional theory provides robust guarantees, while infinite-dimensional analysis often demands functional analytic tools to mitigate counterexamples and achieve partial resolutions.
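The contrast between \mathbb{R} and \mathbb{C} can be seen concretely: a rotation matrix has no real eigenvalues, yet over \mathbb{C} it acquires an eigenvector and hence a one-dimensional invariant subspace. A small NumPy sketch (the matrix is a hypothetical example):

```python
import numpy as np

# A real rotation matrix with no real eigenvalues, illustrating that over C
# an eigenvector -- hence a one-dimensional invariant subspace -- always
# exists, while over R it may not.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals, eigvecs = np.linalg.eig(A)   # complex eigenvalues +i and -i
v = eigvecs[:, 0]                     # complex eigenvector
lam = eigvals[0]

# Invariance of span{v} over C: A v is again a scalar multiple of v.
assert np.allclose(A @ v, lam * v)
print(eigvals)                        # purely imaginary pair (order may vary)
```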

Single Linear Operator

Examples of Invariant Subspaces

A prominent example of invariant subspaces arises with the rotation operator T on \mathbb{R}^2, defined by T(x, y) = (x \cos \theta - y \sin \theta, x \sin \theta + y \cos \theta) for some angle \theta. The one-dimensional subspaces, which are lines through the origin, are invariant under T only if \theta = 0^\circ or 180^\circ (modulo 360^\circ); in these cases, every such line is mapped to itself. For all other \theta, including irrational multiples of \pi, the only invariant subspaces are the trivial ones: \{0\} and \mathbb{R}^2 itself, as the operator has no real eigenvalues and rotates every nonzero vector out of its span. For a diagonalizable operator T on a finite-dimensional space V over \mathbb{C}, the eigenspaces corresponding to distinct eigenvalues provide minimal nontrivial invariant subspaces. Specifically, if \lambda is an eigenvalue with eigenvector v \neq 0, then the one-dimensional subspace \{ \alpha v : \alpha \in \mathbb{C} \} is invariant under T, since T(\alpha v) = \alpha \lambda v remains in the span. The direct sum of these eigenspaces decomposes V into invariant components, highlighting how diagonalizability ensures a rich structure of invariant subspaces. Nilpotent operators offer another key construction, such as the differentiation operator D on the space \mathcal{P}_{n-1} of polynomials over \mathbb{R} (or \mathbb{C}) of degree less than n, where D(p) = p' and D^n = 0. This operator is cyclic, with cyclic vector x^{n-1} relative to the basis \{1, x, x^2, \dots, x^{n-1}\}, and its invariant subspaces are precisely the subspaces \mathcal{P}_{k-1} = \operatorname{span}\{1, x, \dots, x^{k-1}\} for k = 1, \dots, n, each consisting of polynomials of degree less than k. These subspaces form a chain in which D maps \mathcal{P}_{k-1} into \mathcal{P}_{k-2}.
In the case of a Jordan block J of size n for eigenvalue \lambda, acting on a basis \{e_1, e_2, \dots, e_n\} where J e_1 = \lambda e_1 and J e_k = \lambda e_k + e_{k-1} for k > 1, the invariant subspaces form a flag \{0\} \subset \operatorname{span}\{e_1\} \subset \operatorname{span}\{e_1, e_2\} \subset \cdots \subset \operatorname{span}\{e_1, \dots, e_n\} = V. Each successive subspace in this chain is invariant, with J - \lambda I mapping it into the previous one, illustrating the canonical chain structure for non-diagonalizable operators. Geometrically, invariant subspaces represent directions or higher-dimensional "slices" preserved by the linear operator, in the sense that applying the operator to any vector in the subspace yields another vector within the same subspace, allowing the operator to act internally without escaping. As a non-example illustrating scarcity of invariants, consider the rotation of the circle S^1 by an angle 2\pi \alpha where \alpha is irrational; this rotation is minimal as a dynamical system, with every orbit dense, so it admits no nontrivial closed invariant subsets, even though the induced Koopman operator on L^2(S^1) does retain one-dimensional invariant subspaces spanned by the characters e^{in\theta}.
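The chain structure of the differentiation operator's invariant subspaces can be verified numerically. The sketch below (NumPy, with n = 4 chosen arbitrarily) represents D on the monomial basis and checks nilpotency and that each \mathcal{P}_{k-1} is mapped into itself:

```python
import numpy as np

n = 4
# Matrix of D(p) = p' on P_3 in the basis {1, x, x^2, x^3}: D x^k = k x^{k-1},
# so the matrix has 1, 2, 3 on the superdiagonal.
D = np.diag(np.arange(1, n, dtype=float), k=1)

# D is nilpotent of index n: D^n = 0 but D^(n-1) != 0.
assert np.allclose(np.linalg.matrix_power(D, n), 0)
assert not np.allclose(np.linalg.matrix_power(D, n - 1), 0)

# Each P_{k-1} = span{e_0,...,e_{k-1}} (polynomials of degree < k) is invariant:
for k in range(1, n + 1):
    basis = np.eye(n)[:, :k]
    P = basis @ basis.T                  # orthogonal projector onto P_{k-1}
    image = D @ basis
    assert np.allclose(P @ image, image)  # D maps P_{k-1} into itself
print("chain of invariant subspaces verified")
```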

One-Dimensional Cases

In the context of a linear operator T on a finite-dimensional vector space V, a one-dimensional subspace W \subseteq V is invariant under T if and only if W is spanned by an eigenvector: there exist a basis vector v \neq 0 for W and a scalar \lambda \in \mathbb{F}, the underlying field, such that T(v) = \lambda v, ensuring T(W) \subseteq W. Such subspaces represent the minimal nontrivial invariants for T, as the action of T on W is simply scalar multiplication by \lambda. The existence of one-dimensional invariant subspaces is tied to the spectrum of T. In finite dimensions over an algebraically closed field like \mathbb{C}, every operator T on a nonzero space admits at least one eigenvalue, hence a one-dimensional invariant subspace, due to the fundamental theorem of algebra guaranteeing roots of the characteristic polynomial \det(T - \lambda I) = 0. Over \mathbb{R}, existence is not guaranteed if the characteristic polynomial has no real roots, though the nonreal complex eigenvalues always occur in conjugate pairs. The dimension of the eigenspace \ker(T - \lambda I), termed the geometric multiplicity of \lambda, satisfies \dim(\ker(T - \lambda I)) \leq m_\lambda, where m_\lambda is the algebraic multiplicity of \lambda as a root of the characteristic polynomial. Equality for every eigenvalue is necessary and sufficient for diagonalizability over the base field. To identify such a subspace explicitly for a matrix representation A of T, compute an eigenvalue \lambda and solve the equation (A - \lambda I)v = 0 for a nonzero vector v \in \mathbb{F}^n; this v spans a one-dimensional eigenspace, which is the whole eigenspace when the geometric multiplicity is 1. A notable special case arises with real matrices, where nonreal eigenvalues preclude one-dimensional real invariant subspaces, as the corresponding eigenvectors necessarily have nonreal entries. For example, the matrix \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} has eigenvalues i and -i, yielding no real eigenvectors and thus no one-dimensional real invariant subspaces.
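The recipe (pick an eigenvalue \lambda, then solve (A - \lambda I)v = 0) can be carried out numerically via an SVD-based null-space computation. A sketch with a hypothetical 2×2 matrix whose eigenvalues are 1 and 3:

```python
import numpy as np

# Hypothetical symmetric matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam = 3.0                                # a known eigenvalue of A

# Solve (A - lam*I) v = 0: the null space is the eigenspace E_lam.
# The right-singular vector for the smallest singular value spans the kernel.
_, s, Vt = np.linalg.svd(A - lam * np.eye(2))
v = Vt[-1]                               # nonzero vector spanning E_lam

assert s[-1] < 1e-12                     # near-zero singular value => nontrivial kernel
assert np.allclose(A @ v, lam * v)       # span{v} is a 1-d invariant subspace
print(v)
```

Here v is proportional to (1, 1)/\sqrt{2} up to sign, the expected eigenvector for eigenvalue 3.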

Projections and Diagonalization

In linear algebra, invariant subspaces play a central role in decomposing a vector space under the action of a linear operator T: V \to V. Suppose V = W \oplus U, where both W and U are invariant subspaces under T. Then the projection P: V \to V onto W along U satisfies TP = PT, meaning the projection commutes with the operator. This property ensures that T preserves the decomposition: T(W) \subseteq W and T(U) \subseteq U, allowing V to be expressed as a direct sum of invariant components. Such decompositions facilitate the analysis of T by restricting it to each summand separately. For diagonalizable operators, the space V decomposes into a direct sum of eigenspaces, each of which is an invariant subspace. Specifically, if T is diagonalizable, then V = \bigoplus_{\lambda \in \sigma(T)} E_\lambda, where E_\lambda = \ker(T - \lambda I) is the eigenspace corresponding to eigenvalue \lambda, and each E_\lambda is T-invariant. In the case where T is normal (i.e., T^* T = T T^* on an inner product space), this decomposition admits orthogonal projections P_\lambda onto E_\lambda, yielding the spectral decomposition T = \sum_{\lambda \in \sigma(T)} \lambda P_\lambda. These projections satisfy P_\lambda^2 = P_\lambda, \sum_{\lambda} P_\lambda = I, and P_\lambda P_\mu = 0 for \lambda \neq \mu, enabling a full diagonal representation of T in an orthonormal basis of eigenvectors. To compute such decompositions numerically, especially for large matrices, iterative algorithms exploit invariant subspaces. Krylov subspace methods generate a sequence of subspaces K_k = \operatorname{span}\{v, Tv, T^2 v, \dots, T^{k-1} v\} for an initial vector v, which approximate dominant invariant subspaces and facilitate partial diagonalization by projecting T onto K_k and solving the resulting smaller eigenproblem. Similarly, the Schur triangularization algorithm computes a unitary similarity Q^* A Q = R, where R is upper triangular, revealing a flag of nested invariant subspaces corresponding to the eigenvalues on the diagonal.
These approaches approximate the diagonal form by iteratively refining subspaces, though convergence depends on eigenvalue separation. However, not all operators admit a direct sum decomposition into eigenspaces. If T is not diagonalizable, its minimal polynomial has repeated irreducible factors, and the primary decomposition theorem yields a decomposition V = \bigoplus_i \ker(p_i(T)^{k_i}), where the p_i are the distinct irreducible factors and k_i their multiplicities; these generalized eigenspaces require the Jordan canonical form for their full internal structure, using chains of generalized eigenvectors within each component.
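The spectral decomposition of a normal operator can be assembled directly from an eigendecomposition. The sketch below (NumPy, with a hypothetical real symmetric matrix) builds the orthogonal projectors P_\lambda and verifies the projector identities and the reconstruction T = \sum_\lambda \lambda P_\lambda:

```python
import numpy as np

# Hypothetical real symmetric (hence normal) matrix with eigenvalues 1, 3, 5.
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigvals, Q = np.linalg.eigh(T)           # orthonormal eigenvectors in columns of Q

# Build the orthogonal projector P_lambda onto each eigenspace E_lambda.
projectors = {}
for lam in np.unique(np.round(eigvals, 10)):
    cols = Q[:, np.isclose(eigvals, lam)]
    projectors[lam] = cols @ cols.T

# Verify P^2 = P, sum_lambda P_lambda = I, and T = sum_lambda lambda * P_lambda.
assert all(np.allclose(P @ P, P) for P in projectors.values())
assert np.allclose(sum(projectors.values()), np.eye(3))
assert np.allclose(sum(lam * P for lam, P in projectors.items()), T)
print(sorted(projectors))                # the distinct eigenvalues
```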

Lattice of Invariant Subspaces

The collection of all invariant subspaces of a linear operator T on a finite-dimensional vector space V forms a lattice under the partial order of set-theoretic inclusion \subseteq. The meet operation is the intersection of subspaces, which preserves invariance since if U and W are T-invariant, then T(U \cap W) \subseteq U \cap W. The join operation is the linear sum (the span of the union) U + W, which is also T-invariant because T(U + W) = T(U) + T(W) \subseteq U + W. In the finite-dimensional case, this lattice, denoted \mathrm{Lat}(T), is modular, inheriting modularity from the full lattice of subspaces of V, which satisfies the dimension identity \dim(U + W) + \dim(U \cap W) = \dim U + \dim W for any subspaces U, W \subseteq V. The function \dim(\cdot) serves as a height function on \mathrm{Lat}(T), providing an invariant of the lattice that measures the "size" of elements in a way compatible with the operator T, as T maps each invariant subspace into itself. \mathrm{Lat}(T) is distributive if and only if T is cyclic, a condition equivalent to T having a cyclic vector (e.g., when its rational canonical form consists of a single companion block). In the general diagonalizable case, \mathrm{Lat}(T) decomposes as a product of the subspace lattices of the eigenspaces, which may fail to be distributive if any eigenspace has dimension greater than one. The atoms of \mathrm{Lat}(T) are the minimal nonzero invariant subspaces, which cover the zero subspace in the lattice order. For a primary operator (one whose minimal polynomial is a power of an irreducible polynomial), the covering relations in \mathrm{Lat}(T) increase the dimension by a fixed amount equal to the degree of that irreducible polynomial. The height of the lattice, given by the length of a maximal chain from \{0\} to V, relates to the degree of the minimal polynomial of T: for a single Jordan block over \mathbb{C}, the height equals the degree of the minimal polynomial. In general, \mathrm{Lat}(T) decomposes into a direct product of the lattices corresponding to the primary components of T, with the overall structure determined by the invariant factors or Jordan form.
A concrete example arises for a nilpotent operator T represented by a single Jordan block of size n, where the minimal polynomial x^n has degree n. Here, \mathrm{Lat}(T) is a chain (a totally ordered lattice) isomorphic to \{0 < 1 < \cdots < n\}, consisting of the subspaces \operatorname{span}\{e_1, \dots, e_k\} for k = 0, \dots, n in the standard Jordan basis \{e_1, \dots, e_n\}, with height n. This structure reflects the cyclic nature of the operator and its index of nilpotency.
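For a single nilpotent Jordan block, the chain \mathrm{Lat}(N) = \{\ker(N^k)\} can be checked by rank computations; a NumPy sketch with block size n = 4 chosen arbitrarily:

```python
import numpy as np

n = 4
# Nilpotent single Jordan block N of size 4: N e_1 = 0, N e_k = e_{k-1}.
N = np.eye(n, k=1)

# Lat(N) is the chain {0} c span{e1} c ... c R^4; each member equals ker(N^k).
for k in range(n + 1):
    Nk = np.linalg.matrix_power(N, k)
    # rank-nullity: dim ker(N^k) = k for a single block of size n (k <= n)
    assert np.linalg.matrix_rank(Nk) == n - k
    # span{e1,...,ek} lies inside ker(N^k):
    assert np.allclose(Nk @ np.eye(n)[:, :k], 0)
print("chain structure of Lat(N) verified")
```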

Multiple Linear Operators

Joint Invariant Subspaces

In linear algebra, a subspace V of a vector space W is called a joint invariant subspace for a family of linear operators \{T_i : i \in I\} on W if T_i(V) \subseteq V for every i \in I. This generalizes the notion of an invariant subspace for a single operator, which corresponds to the special case where the family consists of a single operator T. The collection of joint invariant subspaces for \{T_i\} forms a lattice under inclusion, with key properties inherited from those of single-operator invariants. In particular, the intersection of any family of joint invariant subspaces is itself joint invariant, as the intersection of subspaces invariant under each T_i remains closed under the action of every T_i. Sums of joint invariant subspaces are also joint invariant. The trivial subspaces \{0\} and W are always joint invariant. When the operators commute, i.e., [T_i, T_j] = T_i T_j - T_j T_i = 0 for all i, j \in I, joint invariant subspaces exhibit richer structure, particularly in finite-dimensional spaces over an algebraically closed field like \mathbb{C}. If each T_i is diagonalizable, the family admits simultaneous diagonalization: there exists a basis of W consisting of common eigenvectors, and the corresponding one-dimensional joint eigenspaces (spanned by simultaneous eigenvectors v with T_i v = \lambda_i v for eigenvalues \lambda_i) are joint invariant subspaces. More generally, commuting operators on finite-dimensional complex spaces can be simultaneously upper triangularized, yielding chains of joint invariant subspaces that reflect the joint generalized eigenspace decomposition. For noncommuting families, identifying nontrivial joint invariant subspaces is generally more difficult, as the lack of commutativity prevents simultaneous triangularization or diagonalization in general. In such cases, the joint invariant subspaces may coincide with those invariant under individual operators, though constructing them often requires case-specific analysis or reduction to single-operator problems.
A further perspective considers invariance under the algebra \mathcal{A} generated by \{T_i : i \in I\} and the identity operator, consisting of all finite linear combinations \sum a_k P_k where each P_k is a product of the T_i's. A subspace V is \mathcal{A}-invariant if A(V) \subseteq V for every A \in \mathcal{A}, which is equivalent to invariance under the T_i's themselves, since sums and products of the generators preserve the property. This perspective connects invariance to representations of noncommutative algebras, where the lattice of invariant subspaces corresponds to the lattice of submodules.
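Invariance under each generator extends to the whole generated algebra. The sketch below (hypothetical commuting 3×3 matrices A and B, chosen for illustration) checks that a common eigenvector spans a subspace invariant under an arbitrary word in the algebra:

```python
import numpy as np

# Two hypothetical commuting diagonalizable operators sharing eigenvectors.
A = np.diag([1.0, 2.0, 2.0])
B = np.array([[3.0, 0.0, 0.0],
              [0.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

assert np.allclose(A @ B, B @ A)          # [A, B] = 0

# span{e1} is a joint invariant subspace: both operators scale e1.
e1 = np.eye(3)[:, 0]
assert np.allclose(A @ e1, 1.0 * e1)
assert np.allclose(B @ e1, 3.0 * e1)

# Invariance under A and B implies invariance under the generated algebra,
# e.g. under the word A B A + 2 B, which scales e1 by 1*3*1 + 2*3 = 9:
word = A @ B @ A + 2 * B
assert np.allclose(word @ e1, 9.0 * e1)
print("joint invariance verified")
```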

Examples for Commuting Operators

A fundamental example of commuting operators and their joint invariant subspaces arises when one operator is a polynomial in the other. Consider linear operators A and B = p(A) on a vector space V, where p is a polynomial with coefficients in the base field. Since powers of A commute with A, it follows that B commutes with A. Any A-invariant subspace W \subseteq V is also B-invariant, as B w = p(A) w lies in W for w \in W, given that W is closed under applications of A. In particular, the eigenspace E_\lambda = \{ v \in V \mid A v = \lambda v \} of A for eigenvalue \lambda satisfies B E_\lambda = p(\lambda) E_\lambda \subseteq E_\lambda, making E_\lambda a joint invariant subspace for the pair \{A, B\}. In quantum mechanics, sets of commuting observables provide natural examples of joint invariant subspaces through their shared eigenspaces. Commuting self-adjoint operators on a finite-dimensional Hilbert space can be simultaneously diagonalized, meaning there exists an orthonormal basis of common eigenvectors, with the corresponding eigenspaces being joint invariant under the operators. For spin systems described using Pauli matrices, consider a two-qubit system where the operators \sigma_z^{(1)} = \sigma_z \otimes I and \sigma_z^{(2)} = I \otimes \sigma_z act on the first and second qubits, respectively. These operators commute, as [\sigma_z^{(1)}, \sigma_z^{(2)}] = 0, since they operate on disjoint subsystems. The joint eigenspaces are the one-dimensional subspaces spanned by the computational basis states |00\rangle, |01\rangle, |10\rangle, |11\rangle, each with joint eigenvalues (\pm 1, \pm 1), and each such subspace is invariant under both operators. In representation theory, irreducible representations illustrate minimal joint invariant subspaces for commuting families derived from group algebras. For a group G acting on a vector space V via a representation \rho: G \to \mathrm{GL}(V), the operators \rho(g) for g \in G generate an algebra of linear operators on V. If G is abelian, these operators commute. A subspace W \subseteq V is jointly invariant if \rho(g) W \subseteq W for all g \in G.
The representation is irreducible if the only joint invariant subspaces are \{0\} and V, making V a minimal nontrivial joint invariant subspace for the family \{\rho(g) \mid g \in G\}. This structure underlies the decomposition of representations into irreducibles, each serving as a building block of joint invariants. For finite-dimensional spaces over an algebraically closed field, a commuting family of operators admits a primary decomposition into generalized joint eigenspaces, which are joint invariant subspaces. Consider a commuting family of operators T_1, \dots, T_k on a finite-dimensional space V. By the simultaneous triangularization theorem, there exists a basis in which all T_i are upper triangular, with diagonal entries forming joint eigenvalues (\lambda_1, \dots, \lambda_k). The space decomposes as V = \bigoplus_{\alpha} V_\alpha, where each V_\alpha is the generalized joint eigenspace for a multi-eigenvalue \alpha = (\lambda_1, \dots, \lambda_k), defined as the kernel of \prod_i (T_i - \lambda_i I)^{n_i} for suitable exponents n_i \leq \dim V. Each V_\alpha is invariant under all T_i, providing a decomposition into joint components analogous to the generalized eigenspace decomposition for a single operator.
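The two-qubit example above can be checked directly with Kronecker products; a NumPy sketch of the \sigma_z \otimes I and I \otimes \sigma_z observables:

```python
import numpy as np

# Pauli-z and the two commuting single-qubit observables on a two-qubit space.
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
Sz1 = np.kron(sz, I2)      # sigma_z acting on qubit 1
Sz2 = np.kron(I2, sz)      # sigma_z acting on qubit 2

assert np.allclose(Sz1 @ Sz2, Sz2 @ Sz1)   # disjoint subsystems commute

# Computational basis states |00>, |01>, |10>, |11> are joint eigenvectors,
# with joint eigenvalues (+-1, +-1); each spans a joint invariant subspace.
expected = {0: (1, 1), 1: (1, -1), 2: (-1, 1), 3: (-1, -1)}
for idx, (a, b) in expected.items():
    v = np.eye(4)[:, idx]
    assert np.allclose(Sz1 @ v, a * v)
    assert np.allclose(Sz2 @ v, b * v)
print("joint eigenspaces verified")
```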

Fundamental Theorem for Noncommutative Algebras

In the context of noncommutative algebras of linear operators on a vector space, the fundamental theorem on invariant subspaces arises from the structure theory of semisimple rings. For a semisimple ring R acting on a left R-module M (where M corresponds to the vector space and submodules to invariant subspaces), M decomposes as a direct sum of simple submodules. These simple submodules are precisely the minimal nonzero invariant subspaces under the action, ensuring that every invariant subspace has an invariant complement. This decomposition guarantees a rich lattice of invariant subspaces, structured by the ring's block form. Wedderburn's theorem extends this to finite-dimensional algebras over a field, implying that every finite-dimensional module over a semisimple algebra decomposes into a direct sum of irreducible representations, each corresponding to a minimal invariant subspace. In the noncommutative setting, if the algebra generated by the operators is semisimple, the space admits a canonical decomposition mirroring the algebra's Wedderburn components, even when the operators do not commute. For instance, operators forming a full matrix algebra over a division ring yield invariant subspaces isomorphic to direct sums of the underlying simple modules. This framework applies broadly: operators generating a semisimple noncommutative algebra possess a highly structured invariant subspace lattice, with the number and dimensions of minimal invariants determined by the algebra's matrix block sizes. The Artin–Wedderburn theorem, originally proved by Wedderburn in 1908 for finite-dimensional algebras and generalized by Artin in 1927 to Artinian rings, bridges classical linear algebra—where decompositions rely on eigenspaces—with the abstract ring theory developed for noncommutative structures in the early 20th century. In contrast, non-semisimple algebras lack such guarantees. The Weyl algebra, a simple infinite-dimensional non-Artinian ring generated by differentiation and multiplication operators, admits no nontrivial finite-dimensional representations and thus features only trivial invariant subspaces in polynomial module contexts, illustrating the absence of rich decompositions.
Commuting diagonalizable operators, which generate a commutative semisimple algebra, represent a special abelian case within this theory.

Algebraic Connections

Relation to Left Ideals

In the theory of a single linear operator T on a finite-dimensional vector space W over a field F, the space W naturally becomes a module over the polynomial ring F[x] by defining the action of a polynomial p(x) \in F[x] on w \in W as p(T)w. Under this structure, a subspace V \subseteq W is T-invariant if and only if it is a submodule of the F[x]-module W. To each invariant subspace V, one associates the ideal I_V = \{ p \in F[x] \mid p(T)W \subseteq V \} in F[x] (noting that F[x] is commutative, so left ideals coincide with two-sided ideals). This ideal I_V is the annihilator of the quotient module W/V, and the map V \mapsto I_V provides an order-reversing correspondence between the lattice of T-invariant subspaces of W and the lattice of ideals of F[x] containing the annihilator ideal of W. In particular, V is a maximal proper invariant subspace if and only if I_V is a maximal ideal in F[x], which occurs precisely when I_V = (f) for some irreducible polynomial f \in F[x]. When W is a cyclic F[x]-module (i.e., generated by a single element under the action of F[x]), the correspondence is bijective: every ideal I \subseteq F[x] containing the annihilator yields a unique invariant subspace V = IW = \{ p(T)w \mid p \in I, w \in W \}, and in this case, the invariant subspaces take the form V = p(T)W for some p \in F[x]. For example, the annihilator of the entire space W, which under this association corresponds to the trivial invariant subspace V = \{0\}, is the principal ideal generated by the minimal polynomial m_T(x) of T, since \{ p \in F[x] \mid p(T)W = \{0\} \} = (m_T(x)). More generally, if V = \ker T, then I_V consists of all polynomials p such that T p(T) = 0 on W, which is the principal ideal generated by m_T(x)/\gcd(m_T(x), x). This ideal-theoretic perspective mirrors the lattice of invariant subspaces with that of ideals in F[x], a principal ideal domain, enabling the use of algebraic tools to classify them.
In particular, the primary decomposition theorem decomposes W as a direct sum of primary components \bigoplus_i \ker (q_i(T)^{k_i}), where q_i(x) are distinct irreducible polynomials and k_i \geq 1; each component has annihilator ideal (q_i(x)^{k_i}), and the full lattice of invariant subspaces is the product of the chains of submodules in these cyclic primary modules. This classification via ideals facilitates the rational canonical form and provides a complete description of the invariant subspace lattice.
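The annihilator correspondence is easy to see for a single Jordan block, where the chain of invariant subspaces \ker((T - \lambda I)^k) matches the chain of ideals ((x - \lambda)^k). A NumPy sketch with a hypothetical block of size 3 and eigenvalue 2:

```python
import numpy as np

# Hypothetical operator: a single Jordan block with eigenvalue 2 on F^3,
# whose minimal polynomial is m_T(x) = (x - 2)^3.
lam, n = 2.0, 3
A = lam * np.eye(n) + np.eye(n, k=1)

# m_T(T) = 0: the annihilator ideal of the whole module is (m_T(x)).
assert np.allclose(np.linalg.matrix_power(A - lam * np.eye(n), n), 0)

# No proper divisor annihilates: (x - 2)^2 applied to A is nonzero,
# so the minimal polynomial really has degree 3.
assert not np.allclose(np.linalg.matrix_power(A - lam * np.eye(n), n - 1), 0)

# The invariant subspaces are ker((A - 2I)^k), matching the ideal chain
# ((x-2)^3) c ((x-2)^2) c ((x-2)) c F[x]:
for k in range(n + 1):
    Pk = np.linalg.matrix_power(A - lam * np.eye(n), k)
    assert np.linalg.matrix_rank(Pk) == n - k   # dim ker((A - 2I)^k) = k
print("annihilator and ideal chain verified")
```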

Invariant Subspaces in Module Theory

In module theory, the concept of an invariant subspace generalizes to that of a fully invariant submodule. Given an R-module M over a ring R, a submodule M' of M is fully invariant if f(M') \subseteq M' for every endomorphism f \in \operatorname{End}_R(M), the endomorphism ring of M. This condition ensures that M' is preserved under all R-linear maps from M to itself, extending the invariance idea from linear operators on vector spaces to more general algebraic structures. A key property arises in the context of finite-length modules. The Krull–Schmidt theorem asserts that any finite-length module decomposes uniquely (up to isomorphism and permutation of summands) into a direct sum of indecomposable modules, and these indecomposables serve as the basic summands under the ring action. This uniqueness facilitates the study of module structure, as fully invariant submodules often align with such decompositions, providing a canonical way to analyze invariance in bounded-length settings. Applications of fully invariant submodules appear prominently in representation theory. For Lie algebra representations, an invariant subspace of a representation on a module corresponds to a subrepresentation, where the submodule remains closed under the Lie algebra action, preserving the bracket relations. Similarly, in group representations, consider a representation \rho: G \to \operatorname{GL}(V) on a module V; a submodule V' is invariant if \rho(g)(V') \subseteq V' for all g \in G, making V' a subrepresentation invariant under the group action. Unlike the case of vector spaces over fields, where every subspace admits a complement, invariant submodules of general modules can involve complications due to torsion elements or non-free structures. Torsion modules, for instance, may lack direct-summand decompositions for submodules, as torsion can introduce dependencies not present in field-based settings. This broader framework connects to the ideal-theoretic picture in the commutative case, where ideals serve as fully invariant submodules under endomorphisms.

Advanced Topics and Problems

The Invariant Subspace Problem

The invariant subspace problem (ISP) asks whether every bounded linear operator on a complex separable infinite-dimensional Hilbert space admits a nontrivial closed invariant subspace, meaning a closed subspace that is neither the zero subspace nor the entire space and is mapped into itself by the operator. This question, central to operator theory, contrasts with the finite-dimensional case where every operator has eigenvalues and thus invariant one-dimensional subspaces, but extends unresolved into infinite dimensions. The problem traces its origins to the 1930s, when John von Neumann proved, in unpublished work, that compact operators on Hilbert spaces have nontrivial invariant subspaces, a result later formalized and extended by Aronszajn and Smith to arbitrary Banach spaces. It gained prominence in the mid-20th century amid efforts to generalize finite-dimensional spectral theory. Affirmative results hold for specific classes: the spectral theorem ensures nontrivial invariant subspaces for normal and self-adjoint operators, as they diagonalize with respect to a spectral resolution. Compact operators also possess such subspaces, often via eigenspaces corresponding to nonzero eigenvalues. The classical Volterra integration operator on L^2[0,1], defined by (Vf)(s) = \int_0^s f(t)\, dt, admits a totally ordered lattice of closed invariant subspaces, providing a concrete affirmative example. Partial progress includes the work of Scott Brown, Chevreau, and Pearcy, who established that polynomially bounded operators—those for which \|p(T)\| \leq C \|p\|_\infty for some constant C and all polynomials p—possess nontrivial hyperinvariant subspaces, invariant under every operator commuting with T. No counterexamples exist for the Hilbert space case, unlike Banach spaces, where Enflo constructed a bounded operator without nontrivial closed invariant subspaces (announced in the mid-1970s, published in 1987).
A negative resolution would challenge generalizations of finite-dimensional spectral theory, while an affirmative one would unify decompositions across operator classes. The ISP carries implications for quantum mechanics, where invariant subspaces under observables or time-evolution operators correspond to conserved quantities such as energy levels or symmetries. In partial differential equations, it bears on the stability and spectral properties of evolution operators on function spaces, affecting the analysis of dynamical systems. As of 2025, the problem remains open for separable Hilbert spaces; a 2023 preprint by Per Enflo claims an affirmative solution but has not received full verification or wide acceptance. Recent advances focus on almost-invariant subspaces and finite-rank perturbations, but no definitive resolution has been reached.
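The totally ordered invariant subspace lattice of the Volterra operator can be glimpsed numerically: discretizing V with a left-endpoint quadrature rule gives a strictly lower-triangular matrix, and vectors vanishing on an initial segment stay that way under V. The grid size and test indices below are arbitrary choices for this sketch:

```python
import numpy as np

# Discretized Volterra operator (V f)(s) = integral of f from 0 to s, on a grid
# of m points with a left-endpoint rule: a strictly lower-triangular matrix.
m = 100
h = 1.0 / m
V = h * np.tril(np.ones((m, m)), k=-1)

# The subspace Y_k of functions vanishing on [0, k/m] corresponds to vectors
# whose first k entries are zero; V maps Y_k into itself, giving a totally
# ordered chain of closed invariant subspaces in the continuum limit.
rng = np.random.default_rng(0)
for k in (10, 50, 90):
    f = rng.standard_normal(m)
    f[:k] = 0.0                        # f lies in Y_k
    g = V @ f
    assert np.allclose(g[:k], 0.0)     # V f also vanishes on the initial segment
print("Volterra chain invariance verified")
```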

Almost-Invariant Halfspaces

In functional analysis, particularly within the study of the invariant subspace problem, almost-invariant halfspaces provide a relaxation of the notion of exact invariance. A closed subspace Y of a Banach space X is said to be almost-invariant under a bounded linear operator T: X \to X if there exists a finite-dimensional subspace F \subseteq X such that T(Y) \subseteq Y + F; the minimal dimension of such an F is called the defect of the almost-invariance. A halfspace is a closed subspace Y that is both infinite-dimensional and of infinite codimension in X. This concept was introduced to explore weaker forms of invariance in infinite-dimensional settings where exact invariant subspaces may not exist. The motivation for studying almost-invariant halfspaces stems from their role as approximations to true invariant subspaces, offering insights into the unresolved invariant subspace problem. Every bounded operator on an infinite-dimensional Banach space admits an almost-invariant halfspace, a result established in the 2010s, which contrasts with the existence of counterexamples to the exact problem. In Hilbert spaces, this strengthens to the existence of an almost-invariant halfspace with defect at most 1 for any bounded operator. These halfspaces relate to the invariant subspace problem by providing a quantitative measure of "near-preservation" under T, where small defects indicate behavior close to invariance. Key properties of almost-invariant halfspaces include the potential for chains or sequences of such subspaces with decreasing defects to approximate exact invariant subspaces in certain limits, facilitating numerical approximation in computational operator theory. For instance, in Hilbert spaces, operators without eigenvalues still possess almost-invariant halfspaces of defect at most 1. An illustrative example arises with the backward shift operator on the Hardy space H^2, where kernels of Toeplitz operators, which are nearly invariant under the backward shift, serve as almost-invariant halfspaces with small defect, capturing near-preservation despite the scarcity of exact invariants in some cases.
Recent developments connect almost-invariant halfspaces to counterexamples such as those constructed by Enflo in the 1980s, which lack exact invariant subspaces but necessarily admit almost-invariant ones with finite defect. Notably, any operator, including Enflo's, admits a rank-one perturbation that possesses an exact invariant halfspace, highlighting the fragility of non-invariance under finite-rank changes. Ongoing research focuses on quantitative bounds for defects and extensions to algebras of operators, with applications to numerical methods in operator theory.
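A finite-dimensional analogue of the rank-one perturbation phenomenon is easy to sketch: any matrix can be corrected by a rank-one term so that a chosen vector spans an invariant line. The matrix, vector, and target eigenvalue below are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 5))    # arbitrary operator
v = rng.standard_normal(5)         # direction we want to make invariant
lam = 2.0                          # target eigenvalue (arbitrary)

# Rank-one correction forcing T' v = lam * v, so span{v} is T'-invariant.
u = T @ v - lam * v
Tp = T - np.outer(u, v) / (v @ v)

print(np.allclose(Tp @ v, lam * v))          # → True
print(np.linalg.matrix_rank(T - Tp) <= 1)    # → True (rank-one change)
```

In infinite dimensions the situation is far subtler, since the perturbed invariant subspace must be a closed halfspace, but the mechanism of trading a small finite-rank change for exact invariance is the same in spirit.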
