Nilpotent matrix

In linear algebra, a nilpotent matrix is a square matrix N such that N^k = 0 (the zero matrix) for some positive integer k, with the smallest such k called the index of nilpotency. This property is equivalent to all eigenvalues of N being zero. Nilpotent matrices play a fundamental role in the study of linear transformations, particularly in understanding non-diagonalizable operators and the structure of the Jordan canonical form. Key properties of nilpotent matrices include their singularity, as they are never invertible, and the fact that their trace and the traces of all higher powers are zero. The minimal polynomial of a nilpotent matrix of index k is precisely x^k, and the kernels of its successive powers form a strictly ascending chain: \ker(N) \subsetneq \ker(N^2) \subsetneq \cdots \subsetneq \ker(N^k) = \mathbb{R}^n (or the underlying vector space), with the index satisfying k \leq n, where n is the matrix dimension. Except for the zero matrix, nilpotent matrices are not diagonalizable, as the geometric multiplicity of the eigenvalue zero is strictly less than its algebraic multiplicity. Nilpotent matrices arise naturally in applications such as differential equations, control theory, and the analysis of Lie algebras, where they model "decaying" or "vanishing" behaviors under iteration. For instance, the companion matrix of the monomial x^k is nilpotent of index k, and in the Jordan form, nilpotent blocks correspond to the off-diagonal structure for eigenvalue zero.

Fundamentals

Definition

In linear algebra, a nilpotent matrix is defined as a square matrix A over a field, such as the real numbers \mathbb{R} or complex numbers \mathbb{C}, for which there exists a positive integer k such that A^k equals the zero matrix of the same size. This condition implies that repeated matrix multiplication eventually yields the zero matrix, where all entries are zero. The prerequisite for this concept is familiarity with matrix multiplication and the formation of matrix powers, where A^k denotes the matrix obtained by multiplying A by itself k times. When k = 1, the matrix A is the zero matrix itself, which trivially satisfies the nilpotency condition. The smallest positive k for which A^k = 0 is known as the index of nilpotency of A, often denoted \operatorname{nil}(A) = k. This index provides a measure of how "deep" the nilpotency is, though its detailed properties are explored further in related topics.
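The definition translates directly into a short numerical check. A minimal NumPy sketch (the helper name `is_nilpotent` is our own, not a library routine) computes successive powers until the zero matrix appears:

```python
import numpy as np

def is_nilpotent(A, tol=1e-12):
    """Check nilpotency by forming A, A^2, ..., A^n.

    By the Cayley-Hamilton theorem, an n x n nilpotent matrix
    satisfies A^n = 0, so at most n powers need to be tested.
    """
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(n):
        P = P @ A
        if np.allclose(P, 0, atol=tol):
            return True
    return False

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # A^2 = 0, so A is nilpotent
print(is_nilpotent(A))          # True
print(is_nilpotent(np.eye(2)))  # False: the identity is never nilpotent
```

The tolerance parameter guards against floating-point round-off when the entries are not exact integers.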

Index of Nilpotency

The index of nilpotency of a nilpotent matrix A, often denoted \operatorname{nil}(A), is the smallest positive integer m such that A^m = 0 but A^{m-1} \neq 0. For an n \times n nilpotent matrix A, the index satisfies 1 \leq m \leq n. The lower bound holds since A^1 = A \neq 0 for non-zero nilpotent matrices (with the zero matrix conventionally having index 1). To prove the upper bound m \leq n, suppose m > n. Then there exists a vector x such that A^{m-1} x \neq 0, and the vectors x, Ax, A^2 x, \dots, A^{m-1} x are m > n vectors in an n-dimensional space. These vectors are linearly independent: if \sum_{i=0}^{m-1} c_i A^i x = 0, applying A^{m-1-j} for each j yields a triangular system implying all c_i = 0, using A^m = 0. But more than n linearly independent vectors cannot exist in \mathbb{R}^n or \mathbb{C}^n, a contradiction, so m \leq n. The index can be determined algorithmically by computing successive matrix powers A, A^2, A^3, \dots until the zero matrix is obtained, which is feasible for small n since at most n powers are needed. The index also coincides with the smallest m where the kernel of A^m equals the full space, as detailed in the geometric aspects.
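The algorithmic determination described above can be sketched as follows (the helper name `nilpotency_index` is ours; it assumes a real or complex square input):

```python
import numpy as np

def nilpotency_index(A, tol=1e-12):
    """Return the smallest m with A^m = 0, or None if A is not nilpotent.

    Since the index is bounded by n, at most n powers are computed.
    """
    n = A.shape[0]
    P = np.eye(n)
    for m in range(1, n + 1):
        P = P @ A
        if np.allclose(P, 0, atol=tol):
            return m
    return None

# The 3x3 shift matrix: strictly upper triangular, index exactly 3
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
print(nilpotency_index(A))          # 3
print(nilpotency_index(np.eye(3)))  # None: the identity is not nilpotent
```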

Examples

Zero Matrix and Trivial Cases

The zero matrix serves as the prototypical example of a nilpotent matrix, possessing an index of nilpotency equal to 1, since A = 0 implies A^1 = 0. In this case, the equation 0^k = 0 holds for any integer k \geq 1, reflecting the immediate collapse to the zero matrix upon any positive power. Trivial cases of nilpotency are confined to square matrices, as non-square matrices lack a well-defined notion of matrix powers owing to incompatible dimensions. Moreover, any scalar multiple of the zero matrix remains the zero matrix itself, preserving the index of 1 without introducing new structure. Unique properties of the zero matrix as a nilpotent operator include the equation AX = 0 holding for all vectors X in the domain, meaning its kernel is the entire space. Its image is the trivial subspace \{0\}, underscoring the complete absence of non-zero invariant directions. These features distinguish the zero matrix from non-trivial nilpotents, where higher powers are required to reach zero.

Strict Upper Triangular Matrices

A strict upper triangular matrix is an n \times n matrix over a field (such as the real or complex numbers) whose entries are all zero on the main diagonal and below it, meaning a_{ij} = 0 whenever i \geq j. Every such matrix is nilpotent, with its index of nilpotency being at most n. This follows from the fact that repeated multiplication shifts the non-zero entries further above the diagonal until the entire matrix becomes the zero matrix after at most n multiplications. To illustrate, consider the 2 \times 2 strict upper triangular matrix A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. Multiplying gives A^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, so A is nilpotent with index 2. In general, for a strict upper triangular matrix A, the k-th power A^k has zeros in all positions on or below the (k-1)-th superdiagonal (i.e., (A^k)_{ij} = 0 whenever i > j - k), and thus A^n = 0. This property holds regardless of the specific non-zero values above the diagonal, as the multiplication process only involves terms from sufficiently distant superdiagonals. For a concrete 3 \times 3 example, take A = \begin{pmatrix} 0 & a & b \\ 0 & 0 & c \\ 0 & 0 & 0 \end{pmatrix}, where a, b, c are arbitrary scalars. Direct computation yields A^2 = \begin{pmatrix} 0 & 0 & ac \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad A^3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, confirming nilpotency with index at most 3.
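The 3 \times 3 computation above is easy to verify numerically; here is a quick NumPy sketch with sample values for the arbitrary scalars a, b, c (chosen by us for illustration):

```python
import numpy as np

a, b, c = 2.0, -1.0, 5.0  # arbitrary scalars above the diagonal
A = np.array([[0., a, b],
              [0., 0., c],
              [0., 0., 0.]])

A2 = A @ A
A3 = A2 @ A
print(A2[0, 2] == a * c)   # True: only the (1,3) entry a*c survives in A^2
print(np.allclose(A3, 0))  # True: the index is at most 3
```

Changing a, b, c does not affect the conclusion, matching the claim that nilpotency is independent of the specific entries above the diagonal.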

Jordan Block Examples

A Jordan block for a nilpotent matrix, denoted J_m(0), is an m \times m matrix consisting of zeros on the main diagonal and ones on the superdiagonal, with all other entries zero. This structure represents the matrix of a nilpotent operator acting on a cyclic subspace of dimension m. For example, consider the 3×3 Jordan block: J_3(0) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}. The square of this matrix is J_3(0)^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \neq 0, while the cube is the zero matrix: J_3(0)^3 = 0. Thus, the index of nilpotency for this block is 3. In general, raising a Jordan block J_m(0) to a power k shifts the ones k positions up from the main diagonal, filling the k-th superdiagonal with ones while zeros occupy the rest; this process continues until k = m, at which point the matrix becomes the zero matrix. The index of nilpotency is therefore exactly m, the size of the block. For a nilpotent matrix expressed as the direct sum of multiple Jordan blocks, the overall index of nilpotency is the size of the largest block, as the powers of the direct sum act independently on each block.
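The shifting behavior of powers can be checked directly: J_m(0)^k equals the matrix with ones on the k-th superdiagonal, which `np.eye(m, k=k)` produces. A small sketch (the helper name is ours):

```python
import numpy as np

def jordan_block_nilpotent(m):
    """m x m Jordan block J_m(0): ones on the superdiagonal, zeros elsewhere."""
    return np.eye(m, k=1)

J = jordan_block_nilpotent(4)
for k in range(1, 5):
    P = np.linalg.matrix_power(J, k)
    # J^k has ones on the k-th superdiagonal; for k = m the matrix is zero
    print(k, np.allclose(P, np.eye(4, k=k)))  # True for every k
```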

Algebraic Characterization

Eigenvalues and Trace

A nilpotent matrix has all its eigenvalues equal to zero. Suppose A is an n \times n nilpotent matrix, so A^k = 0 for some positive integer k. If \lambda is an eigenvalue of A with corresponding nonzero eigenvector v, then Av = \lambda v. Iterating this gives A^k v = \lambda^k v. But A^k v = 0, so \lambda^k v = 0. Since v \neq 0, it follows that \lambda^k = 0, hence \lambda = 0. The characteristic polynomial of an n \times n nilpotent matrix A is therefore p_A(\lambda) = \det(\lambda I - A) = \lambda^n, or equivalently \det(A - \lambda I) = (-\lambda)^n. This holds because the characteristic polynomial is monic of degree n with roots given by the eigenvalues (with algebraic multiplicity), all of which are zero. The trace of a nilpotent matrix A, defined as \operatorname{tr}(A) = \sum_{i=1}^n a_{ii}, is zero, as it equals the sum of the eigenvalues (with multiplicity). This can also be seen from the characteristic polynomial, where the coefficient of \lambda^{n-1} is -\operatorname{tr}(A), and that coefficient vanishes in \lambda^n. Moreover, \operatorname{tr}(A^k) = 0 for every positive integer k, since the eigenvalues of A^k are \lambda^k = 0.
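These spectral facts are easy to confirm numerically. The NumPy sketch below (the example matrix is ours) checks that the eigenvalues, the traces of all powers, and the characteristic polynomial behave as stated:

```python
import numpy as np

A = np.array([[0., 2., 7.],
              [0., 0., 3.],
              [0., 0., 0.]])  # strictly upper triangular, hence nilpotent

print(np.linalg.eigvals(A))   # all eigenvalues are (numerically) zero
print(np.trace(A))            # 0.0
print(np.trace(A @ A))        # 0.0: tr(A^k) = 0 for every k >= 1

char_poly = np.poly(A)        # coefficients of det(lambda I - A)
print(char_poly)              # [1, 0, 0, 0], i.e. lambda^3
```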

Polynomials and Similarity Invariants

For a nilpotent matrix A \in M_n(\mathbb{C}), the minimal polynomial m_A(\lambda) is the monic polynomial of least degree such that m_A(A) = 0, and it takes the form m_A(\lambda) = \lambda^m, where m is the index of nilpotency of A, defined as the smallest positive integer such that A^m = 0. This polynomial divides any other monic polynomial p(\lambda) that annihilates A, meaning p(A) = 0 implies m_A(\lambda) divides p(\lambda) in \mathbb{C}[\lambda]. Since the only root of m_A(\lambda) is \lambda = 0 with multiplicity m, the minimal polynomial fully captures the nilpotency structure in terms of polynomial annihilation. The characteristic polynomial \chi_A(\lambda) = \det(\lambda I - A) of a nilpotent matrix A is always \chi_A(\lambda) = \lambda^n, reflecting that all eigenvalues are zero. As a consequence, the minimal polynomial m_A(\lambda) divides the characteristic polynomial, so m \leq n, and the two coincide if and only if the index of nilpotency equals the matrix dimension. Both the minimal and characteristic polynomials are invariants under similarity transformations: if B = P^{-1} A P, then m_B(\lambda) = m_A(\lambda) and \chi_B(\lambda) = \chi_A(\lambda), providing key algebraic signatures that distinguish nilpotent matrices from other matrices of the same size. By the Cayley-Hamilton theorem, every square matrix satisfies its own characteristic polynomial, so for a nilpotent matrix A, \chi_A(A) = A^n = 0, which bounds the index of nilpotency by n. This application underscores how the theorem enforces the nilpotency condition algebraically, ensuring that no nilpotent matrix requires a higher power than n to reach zero.
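The relation m \leq n, with the minimal polynomial possibly a strict divisor of the characteristic polynomial, can be seen on a direct sum of unequal blocks. A brief NumPy check (the example matrix, J_3(0) \oplus J_1(0), is ours):

```python
import numpy as np

# J_3(0) direct sum J_1(0): size n = 4, index of nilpotency m = 3
A = np.zeros((4, 4))
A[0, 1] = A[1, 2] = 1.0

# Cayley-Hamilton: chi_A(A) = A^4 = 0
print(np.allclose(np.linalg.matrix_power(A, 4), 0))  # True
# Minimal polynomial lambda^3: A^3 = 0 but A^2 != 0, so m = 3 < n = 4
print(np.allclose(np.linalg.matrix_power(A, 3), 0))  # True
print(np.allclose(np.linalg.matrix_power(A, 2), 0))  # False
```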

Geometric Aspects

Ascending Kernel Sequence

For a nilpotent matrix A acting on a finite-dimensional vector space \mathbb{F}^n, the ascending kernel sequence is defined as the chain of subspaces K_i = \ker(A^i) for i = 0, 1, 2, \dots, where K_0 = \{0\}. Since A is nilpotent with index of nilpotency m, it follows that K_m = \mathbb{F}^n and K_i = \mathbb{F}^n for all i \geq m. This sequence forms a strictly ascending chain of subspaces: K_0 \subsetneq K_1 \subsetneq \cdots \subsetneq K_{m-1} \subsetneq K_m = \mathbb{F}^n. Consequently, the dimensions satisfy \dim K_i < \dim K_{i+1} for 0 \leq i < m, with each successive dimension increasing by at least 1 until stabilization at \dim K_m = n. The index m is precisely the smallest integer such that K_m = \mathbb{F}^n, which aligns with the smallest power where A^m = 0. The growth of these dimensions is given by \dim K_i = \sum_{j=1}^i r_j for i \leq m, where r_j = \dim K_j - \dim K_{j-1} represents the number of Jordan blocks of size at least j in the Jordan canonical form of A. These r_j form the conjugate partition associated to the partition of n given by the sizes of the Jordan blocks of the nilpotent matrix (detailed further in the canonical forms section).
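The kernel dimensions can be computed via rank-nullity, since \dim \ker(A^i) = n - \operatorname{rank}(A^i). A NumPy sketch (the example matrix, a direct sum of blocks of sizes 2, 2, 1, is ours):

```python
import numpy as np

# Direct sum J_2(0), J_2(0), J_1(0): n = 5, index m = 2
A = np.zeros((5, 5))
A[0, 1] = A[2, 3] = 1.0

n = A.shape[0]
dims = [0]  # dim ker(A^0) = dim ker(I) = 0
for i in range(1, n + 1):
    rank = np.linalg.matrix_rank(np.linalg.matrix_power(A, i))
    dims.append(n - rank)  # rank-nullity: dim ker(A^i) = n - rank(A^i)
    if dims[-1] == n:
        break

r_j = [dims[j] - dims[j - 1] for j in range(1, len(dims))]
print(dims)  # [0, 3, 5]: strictly ascending until it reaches n = 5
print(r_j)   # [3, 2]: r_j = number of Jordan blocks of size >= j
```

Here r_1 = 3 counts all three blocks and r_2 = 2 counts the two blocks of size at least 2, as the text describes.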

Flag of Subspaces

For a nilpotent matrix A \in M_n(\mathbb{F}) acting on the vector space V = \mathbb{F}^n, a flag of subspaces is a chain of nested A-invariant subspaces \{0\} = V_0 \subset V_1 \subset \cdots \subset V_k = V. One standard construction uses the ascending chain of kernels, defining V_i = \ker(A^i) for i = 0, 1, \dots, m, where m is the index of nilpotency of A (the smallest positive integer such that A^m = 0). Each subspace V_i in this flag is A-invariant, since if v \in V_i, then A^i v = 0, so A^{i-1} (A v) = 0 and thus A v \in V_{i-1}. In general, A(V_i) \subseteq V_{i-1}. The length of the flag is m+1, directly tied to the index m. The dimensions satisfy \dim V_i - \dim V_{i-1} forming the parts of a partition of n that characterizes the nilpotent matrix up to similarity. This kernel-based flag extends the ascending kernel sequence by providing a complete chain of invariant subspaces up to the full space V. For the cyclic case, where A admits a cyclic vector (spanning V under powers of A), an equivalent flag can be constructed using successive images: let v be a cyclic vector with \{v, Av, \dots, A^{n-1}v\} a basis for V, and set V_i = \operatorname{span}\{A^{n-i} v, A^{n-i+1} v, \dots, A^{n-1} v\} = \operatorname{im}(A^{n-i}). Here, A(V_i) \subseteq V_{i-1}, and the flag is complete with \dim V_i = i. As an example, consider the 2 \times 2 nilpotent matrix A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} over \mathbb{F}, which has index m=2. The flag is \{0\} \subset V_1 = \ker A = \operatorname{span}\{(1,0)^T\} \subset V_2 = \mathbb{F}^2, with dimensions increasing by 1 each step. This is a complete flag, and A maps V_2 into V_1 while A(V_1) = \{0\} = V_0.

Canonical Forms

Jordan Canonical Form

A nilpotent matrix A \in M_n(\mathbb{C}) (or over any algebraically closed field) is similar to a unique Jordan canonical form J, up to permutation of blocks, consisting of Jordan blocks with eigenvalue 0. Specifically, there exists an invertible matrix P such that A = P J P^{-1}, where J is block diagonal with nilpotent Jordan blocks J_{m_j}(0) along the diagonal, and the block sizes m_1 \geq m_2 \geq \cdots \geq m_r > 0 form a partition of n, the dimension of the space. Each block J_m(0) is an m \times m matrix with zeros on the main diagonal and ones on the superdiagonal, representing the action of a nilpotent operator on a cyclic subspace of dimension m. The full form J is the direct sum \bigoplus_{j=1}^r J_{m_j}(0), capturing the decomposition of the underlying space into invariant cyclic subspaces under the action of A. For instance, a nilpotent matrix of index 3 on a 3-dimensional space has Jordan form J_3(0) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, as this single block satisfies J_3(0)^3 = 0 but J_3(0)^2 \neq 0. The nilpotency index of A, denoted \mathrm{nil}(A), equals the smallest k such that A^k = 0, and it coincides with the size of the largest block m_1. Thus, the structure of J reflects the "depth" of nilpotency in A, with smaller blocks corresponding to shorter chains in the generalized eigenspace for eigenvalue 0, which is the entire space. The block sizes are determined by similarity invariants, specifically the dimensions of the kernels of powers of A, \dim \ker(A^i) for i = 1, \dots, k. The Weyr characteristic \phi(i) = \dim \ker(A^i) - \dim \ker(A^{i-1}) (with \dim \ker(A^0) = 0) gives the number of Jordan blocks of size at least i, while its conjugate partition yields the Segre characteristic, the list of block sizes \{m_j\}. For example, if \dim \ker(A) = 2, \dim \ker(A^2) = 4, and \dim \ker(A^3) = 5 for n=5, then \phi(1) = 2, \phi(2) = 2, \phi(3) = 1, implying blocks of sizes 3 and 2. These differences uniquely fix the Jordan structure without computing P explicitly.
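The recovery of block sizes from kernel dimensions alone can be sketched in NumPy (the helper name `jordan_block_sizes` is ours; it assumes a nilpotent input matrix):

```python
import numpy as np

def jordan_block_sizes(A, tol=1e-10):
    """Recover nilpotent Jordan block sizes from kernel dimensions.

    phi(i) = dim ker(A^i) - dim ker(A^(i-1)) counts blocks of size >= i;
    its conjugate partition is the list of block sizes. Assumes A nilpotent.
    """
    n = A.shape[0]
    prev, phi = 0, []
    P = np.eye(n)
    for _ in range(n):
        P = P @ A
        dims = n - np.linalg.matrix_rank(P, tol=tol)
        phi.append(dims - prev)
        prev = dims
        if dims == n:
            break
    # Conjugate partition: the j-th block size is the number of i with phi(i) > j
    return [sum(1 for f in phi if f > j) for j in range(phi[0])]

# The example from the text: n = 5 with blocks of sizes 3 and 2
A = np.zeros((5, 5))
A[0, 1] = A[1, 2] = 1.0  # J_3(0)
A[3, 4] = 1.0            # J_2(0)
print(jordan_block_sizes(A))  # [3, 2]
```

This matches the worked example: \phi = (2, 2, 1), whose conjugate partition is (3, 2).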

Similarity to Nilpotent Forms

To determine a similarity transformation that converts a nilpotent matrix A to its Jordan canonical form, an algorithmic approach relies on constructing a basis through the ascending sequence of kernels of powers of A. Since A is nilpotent with index m (where A^m = 0 but A^{m-1} \neq 0), the entire space is the generalized eigenspace for eigenvalue 0, and the kernels form a chain: \ker(A) \subseteq \ker(A^2) \subseteq \cdots \subseteq \ker(A^m) = V. The process begins by computing these kernels successively. Start with a basis for \ker(A), which consists of eigenvectors (vectors v satisfying Av = 0). Extend this basis to one for \ker(A^2) by finding vectors in \ker(A^2) but not in \ker(A); these form the starting points for longer chains. Continue this for \ker(A^3) \setminus \ker(A^2), and so on, up to \ker(A^m) \setminus \ker(A^{m-1}). The dimensions of the quotients, \dim(\ker(A^{k}) / \ker(A^{k-1})), determine the number of Jordan blocks of length at least k. For each such extension vector w_k in the quotient at level k, generate the chain by repeated application of A: set w_{k-1} = A w_k, w_{k-2} = A^2 w_k, \dots, w_1 = A^{k-1} w_k \neq 0. This yields a Jordan chain \{w_1, w_2, \dots, w_k\} where A w_1 = 0 and A w_j = w_{j-1} for j = 2, \dots, k, corresponding to a single Jordan block. The union of all such chains forms a basis for V. The similarity matrix P is formed by taking these basis vectors from the Jordan chains as its columns, ordered appropriately (e.g., for a chain v, Av, A^2 v, \dots, A^{l-1} v, where v is chosen such that A^l v = 0 but A^{l-1} v \neq 0, the columns are A^{l-1} v, A^{l-2} v, \dots, Av, v). Then, P^{-1} A P = J, where J is the Jordan form consisting of nilpotent blocks along the diagonal. This form is unique up to the ordering (permutation) of the blocks, as the block structure is a similarity invariant determined by the kernel dimensions. An alternative is the rational canonical form, which for nilpotent matrices over algebraically closed fields yields an equivalent block structure via companion matrices of powers of x, but the Jordan form is preferred for its explicit shift representation.
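For the single-chain (cyclic) case, the construction of P can be sketched concretely in NumPy. The matrix A and the chosen chain-top vector v below are our own illustrative choices:

```python
import numpy as np

# A nilpotent matrix of index 3 on a 3-dimensional space (one Jordan chain)
A = np.array([[0., 2., 1.],
              [0., 0., 3.],
              [0., 0., 0.]])

# Pick v with A^2 v != 0; then (A^2 v, A v, v) is a Jordan chain basis
v = np.array([0., 0., 1.])
chain = [np.linalg.matrix_power(A, 2) @ v, A @ v, v]
P = np.column_stack(chain)

J = np.linalg.inv(P) @ A @ P
print(np.round(J, 10))  # J_3(0): zeros everywhere except ones on the superdiagonal
```

Ordering the columns from A^{l-1}v down to v is what places the ones on the superdiagonal rather than the subdiagonal.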

Properties

Commutativity and Products

The sum of two nilpotent matrices is not necessarily nilpotent. For example, consider the 2×2 matrices over the real numbers given by A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}. Both A and B are nilpotent with index 2, since A^2 = B^2 = 0. However, their sum is A + B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, which satisfies (A + B)^2 = I_2, the 2×2 identity matrix, and thus is not nilpotent. Similarly, the product of two nilpotent matrices need not be nilpotent in general, but it is nilpotent if the matrices commute. Specifically, if A and B are nilpotent n \times n matrices over a field with AB = BA, then AB is also nilpotent. Moreover, if the index of nilpotency of A is p (so A^p = 0 but A^{p-1} \neq 0) and the index of B is q, then the index of AB satisfies \iota(AB) \leq \min(p, q). This follows because commutativity allows the factors to be rearranged freely, so (AB)^k = A^k B^k, which vanishes as soon as k reaches \min(p, q). The commutator [A, B] = AB - BA of two matrices A and B is not necessarily nilpotent. For instance, certain trace-zero matrices that are not nilpotent, such as diagonal matrices with diagonal entries (1, -1, 0, \dots, 0), can be expressed as commutators of matrices.
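The counterexample for sums is quick to verify numerically:

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])

print(np.allclose(A @ A, 0), np.allclose(B @ B, 0))  # True True: both nilpotent
S = A + B
print(np.linalg.matrix_power(S, 2))  # the 2x2 identity, so S is not nilpotent
print(np.linalg.eigvals(S))          # eigenvalues 1 and -1, not all zero
```

Since S has nonzero eigenvalues, no power of S can vanish, confirming the failure of closure under addition.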

Powers and Norms

A nilpotent matrix A satisfies A^\nu = 0 for some positive integer \nu, known as the index of nilpotency, implying that A^k = 0 for all k \geq \nu. Consequently, for any matrix norm \|\cdot\|, \|A^k\| = 0 for k \geq \nu, and thus \|A^k\| \to 0 as k \to \infty. The spectral radius \rho(A) of a nilpotent matrix is zero, as all its eigenvalues are zero. By Gelfand's formula, \rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k} for any matrix norm, so this limit equals zero, reinforcing that the norms of the powers decay to zero. For the operator norm induced by a vector norm, submultiplicativity yields the bound \|A^k\| \leq \|A\|^k. However, the index \nu provides a tighter bound: \|A^k\| = 0 for k \geq \nu, which is sharper than \|A\|^k when \|A\| \geq 1. The Frobenius norm \|A^k\|_F also tends to zero as k \to \infty, since the matrix powers converge to the zero matrix. More precisely, \|A^{k+1}\|_F \leq \|A\|_2 \|A^k\|_F, where \|A\|_2 is the spectral norm, bounding the growth of successive Frobenius norms until the sequence becomes exactly zero at the index.
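The gap between the submultiplicative bound \|A\|^k and the actual behavior is striking when \|A\| > 1: the crude bound grows geometrically while the true norms collapse to zero at the index. A NumPy illustration (the example matrix is ours):

```python
import numpy as np

A = np.array([[0., 4., 0.],
              [0., 0., 4.],
              [0., 0., 0.]])  # nilpotent with index 3, spectral norm ||A||_2 = 4

for k in range(1, 5):
    Ak = np.linalg.matrix_power(A, k)
    # Spectral norms: 4, 16, then exactly 0 from k = 3 on,
    # whereas the bound ||A||^k would give 4, 16, 64, 256
    print(k, np.linalg.norm(Ak, 2))
```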

Generalizations and Applications

Nilpotent Operators on Vector Spaces

A linear operator T: V \to V on a finite-dimensional vector space V over a field F is defined to be nilpotent if there exists a positive integer k such that T^k = 0, where 0 denotes the zero operator on V. The smallest such positive integer k is called the index of nilpotency of T. This condition is intrinsic to the operator and does not depend on the choice of basis for V. Any linear operator on a finite-dimensional vector space admits a matrix representation with respect to a basis of V, and the nilpotency of the operator is equivalent to the nilpotency of its matrix. Specifically, T is nilpotent if and only if its matrix A satisfies A^k = 0 for some positive integer k, and this equivalence holds because matrix representations in different bases are similar, preserving powers of the operator. Consequently, the index of nilpotency is independent of the basis chosen, as it coincides with the minimal k such that the corresponding matrix power is zero. In infinite-dimensional vector spaces, the definition of a nilpotent operator remains the same (namely T^k = 0 for some positive integer k), but such operators are less common, and finite-dimensional truncations, such as truncated shift operators, may be nilpotent even when the full operator is not. For example, the unilateral shift operator on the space \ell^2(\mathbb{N}), defined by (S(x_0, x_1, x_2, \dots))_n = x_{n-1} for n \geq 1 with (S(x))_0 = 0, is not nilpotent, as its powers are isometries and never the zero operator. However, on finite-dimensional spaces, nilpotency always implies the existence of a nilpotent matrix representation, often realized in Jordan canonical form consisting of Jordan blocks with zero eigenvalues.

Uses in Lie Theory and Differential Equations

In Lie theory, nilpotent Lie algebras play a central role in understanding the structure of solvable Lie algebras and their representations. A Lie algebra \mathfrak{g} is nilpotent if its lower central series terminates at the zero algebra after finitely many steps, meaning [\mathfrak{g}, [\mathfrak{g}, \dots, [\mathfrak{g}, \mathfrak{g}] \dots ]] = \{0\} for some finite nesting. The Heisenberg algebra provides a prototypical example of a non-abelian nilpotent Lie algebra of dimension 3, generated by basis elements X, Y, Z satisfying [X, Y] = Z and [X, Z] = [Y, Z] = 0, where the center is spanned by Z and the derived algebra equals the center. Engel's theorem offers a key characterization of nilpotent Lie algebras, stating that a finite-dimensional Lie algebra \mathfrak{g} is nilpotent if and only if the adjoint map \mathrm{ad}_x: \mathfrak{g} \to \mathfrak{g} is nilpotent for every x \in \mathfrak{g}. This equivalence highlights the connection between algebraic nilpotency and the nilpotency of associated linear operators, allowing simultaneous strict upper triangularization of representations, with zeros on the diagonal. The theorem facilitates the study of representations in which nilpotent elements act via nilpotent endomorphisms, providing insight into the solvable structure underlying more complex groups. In the context of differential equations, nilpotent matrices arise in the analysis of linear systems \dot{x} = A x, where A is nilpotent with A^m = 0 for some index m. The fundamental matrix solution is the matrix exponential e^{tA}, which truncates to a finite series: e^{tA} = I + tA + \frac{t^2}{2!} A^2 + \cdots + \frac{t^{m-1}}{(m-1)!} A^{m-1}, yielding explicit solutions whose entries are polynomials in t of degree at most m-1. These solutions can be constructed using Jordan chains, which form a basis of generalized eigenvectors for the eigenvalue 0, revealing the polynomial growth behavior inherent to the nilpotent structure. Nilpotent matrices also appear in control theory, particularly in assessing the controllability of linear time-invariant systems \dot{x} = A x + B u.
The Brunovsky canonical form transforms controllable pairs (A, B) into a block structure built from nilpotent shift blocks whose sizes are the controllability indices, which sum to the state dimension for a controllable system. This form underscores the role of nilpotency in decomposing systems into chains where inputs propagate through nilpotent shifts, enabling feedback design for stabilization and tracking.
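The terminating exponential series is straightforward to implement; a minimal NumPy sketch, assuming a nilpotent A with known index m (the helper name `expm_nilpotent` is ours):

```python
import numpy as np
from math import factorial

def expm_nilpotent(A, t, m):
    """e^{tA} for nilpotent A with A^m = 0: the series stops after m terms."""
    n = A.shape[0]
    E, P = np.eye(n), np.eye(n)
    for k in range(1, m):
        P = P @ A
        E = E + (t ** k / factorial(k)) * P
    return E

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])  # J_3(0), index m = 3

t = 2.0
E = expm_nilpotent(A, t, 3)
print(E)  # entries are polynomials in t: [[1, t, t^2/2], [0, 1, t], [0, 0, 1]]
```

For t = 2 this gives the matrix [[1, 2, 2], [0, 1, 2], [0, 0, 1]], whose entries are the stated polynomials of degree at most m-1 = 2.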
