
Cayley–Hamilton theorem

The Cayley–Hamilton theorem is a fundamental result in linear algebra stating that every square matrix over a commutative ring satisfies its own characteristic equation, meaning that if p(\lambda) = \det(\lambda I - A) is the characteristic polynomial of an n \times n matrix A, then p(A) = 0. This monic polynomial of degree n annihilates the matrix, providing a polynomial relation A^n + c_{n-1} A^{n-1} + \cdots + c_0 I = 0, where the c_i are the coefficients of p(\lambda) and I is the identity matrix. The theorem applies to matrices with entries in fields such as the real or complex numbers, as well as more general commutative rings, and serves as a cornerstone for understanding matrix polynomials and spectral properties.

Named after the mathematicians Arthur Cayley and William Rowan Hamilton, the theorem was independently discovered and proved in the mid-19th century. Hamilton contributed insights through his work on linear operators over quaternions, establishing a general form of the result for such structures. Cayley formalized the theorem for matrices in his seminal publications, including an 1858 memoir on matrix theory where he verified it for small dimensions and postulated its general validity. The full proof for arbitrary dimensions was later provided by Georg Frobenius in 1878, solidifying its place in linear algebra.

The Cayley–Hamilton theorem has wide-ranging applications in matrix computations and theoretical mathematics. It enables efficient calculation of high powers of a matrix by reducing them to linear combinations of lower powers via the annihilating polynomial, which is essential for numerical algorithms and simulations. In control theory and dynamical systems, it facilitates the analysis of linear time-invariant systems by expressing state transitions using the matrix exponential. Additionally, the theorem underpins methods for computing matrix inverses through the adjugate matrix and the characteristic polynomial, as well as evaluating matrix exponentials and functions such as the resolvent. These properties extend to advanced topics, including extensions for non-commutative rings and generalizations in operator theory on infinite-dimensional spaces.

Statement and Background

Theorem statement

The Cayley–Hamilton theorem asserts that for any n \times n matrix A with entries in a commutative ring R with identity, the matrix A satisfies its own characteristic equation. The characteristic polynomial of A is defined as p(\lambda) = \det(\lambda I_n - A), where I_n denotes the n \times n identity matrix, and this is a monic polynomial of degree n in \lambda with coefficients in R. Explicitly, p(\lambda) = \sum_{k=0}^n c_k \lambda^k with c_n = 1, and the theorem states that p(A) = \sum_{k=0}^n c_k A^k = 0, where A^0 = I_n and powers of A are defined via matrix multiplication. This result implies that the minimal polynomial of A, which is the monic polynomial of least degree annihilating A, divides the characteristic polynomial p(\lambda).
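As a concrete sanity check, the coefficients c_k can be generated with the Faddeev–LeVerrier recurrence (a standard algorithm, used here purely for illustration; the 3×3 matrix is an arbitrary example) and p(A) evaluated in exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly_coeffs(A):
    """Faddeev-LeVerrier recurrence: coefficients [c_0, ..., c_n] of det(lambda*I - A)."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    c = [Fraction(0)] * (n + 1)
    c[n] = Fraction(1)
    M = [[Fraction(0)] * n for _ in range(n)]   # M_0 = 0
    for k in range(1, n + 1):
        M = mat_mul(A, M)                        # M_k = A*M_{k-1} + c_{n-k+1}*I
        for i in range(n):
            M[i][i] += c[n - k + 1]
        AM = mat_mul(A, M)
        c[n - k] = -sum(AM[i][i] for i in range(n)) / k   # -tr(A*M_k)/k
    return c

def poly_of_matrix(c, A):
    """Evaluate sum_k c_k * A^k as a matrix."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    result = [[c[0] * (i == j) for j in range(n)] for i in range(n)]
    Ak = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(1, len(c)):
        Ak = mat_mul(Ak, A)
        for i in range(n):
            for j in range(n):
                result[i][j] += c[k] * Ak[i][j]
    return result

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
c = char_poly_coeffs(A)
print([int(x) for x in c])   # [-18, 24, -9, 1], i.e. p = x^3 - 9x^2 + 24x - 18
P = poly_of_matrix(c, A)
print(all(v == 0 for row in P for v in row))   # True: p(A) = 0
```

Exact `Fraction` arithmetic avoids any floating-point rounding in the division by k.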

Historical development

The Cayley–Hamilton theorem originated in the mid-19th century amid the emerging theory of matrices and quaternions. Arthur Cayley introduced the core result in his seminal 1858 memoir, where he stated the theorem for square matrices over the real numbers, verifying it for dimensions up to 3×3 and providing a detailed proof for the 2×2 case. This work laid the foundation by linking matrices to their characteristic equations, though Cayley's proof relied on explicit computations limited to small dimensions. William Rowan Hamilton's contributions, in his 1853 Lectures on Quaternions and subsequent work, involved linear operators on quaternions, with independent discoveries of analogous results for quaternion linear functions that paralleled Cayley's findings. Although Hamilton's quaternion-based explorations predated some of Cayley's applications, Cayley's 1858 publication preceded Hamilton's fuller exposition in 1864, sparking debates on priority; the theorem's dual naming honors both for their intertwined advancements in non-commutative algebra and linear transformations.

Ferdinand Georg Frobenius advanced the theorem significantly in 1878 by providing the first general proof for matrices of arbitrary finite dimension over arbitrary fields, extending Cayley's ideas through the introduction of the minimal polynomial and bilinear forms. This generalization addressed limitations in earlier versions restricted to the reals or to small sizes. In the 20th century, the theorem evolved from classical matrix theory to a fully abstract form applicable to endomorphisms of modules over commutative rings, as formalized in modern algebra texts integrating ring theory and module structures.

Foundational Concepts

Characteristic polynomial

The characteristic polynomial of an n \times n matrix A over a field is defined as
p_A(\lambda) = \det(\lambda I_n - A),
where I_n is the n \times n identity matrix and \lambda is a scalar variable. This yields a monic polynomial of degree n, with leading coefficient 1.
Over an algebraically closed field, the roots of p_A(\lambda) = 0 are precisely the eigenvalues of A, counted with algebraic multiplicity, and the coefficients of p_A(\lambda) (up to sign) are the elementary symmetric functions of these eigenvalues. In particular, the coefficient of \lambda^{n-1} is -\operatorname{tr}(A), where \operatorname{tr}(A) denotes the trace of A (the sum of its diagonal entries), and the constant term is (-1)^n \det(A). The characteristic polynomial is invariant under similarity transformations: if B = P^{-1} A P for some invertible matrix P, then p_B(\lambda) = p_A(\lambda). To compute p_A(\lambda), one typically expands the determinant using the cofactor method along a row or column, which is straightforward for small n (e.g., n = 2 or 3) but grows computationally intensive for larger n. For triangular matrices, the characteristic polynomial simplifies to the product \prod_{i=1}^n (\lambda - a_{ii}), where the a_{ii} are the diagonal entries. This construction extends to matrices over commutative rings with identity: p_A(\lambda) = \det(\lambda I_n - A) remains a monic polynomial of degree n in the polynomial ring over the base ring, without requiring the existence of eigenvalues. The Cayley–Hamilton theorem asserts that substituting A for \lambda yields the zero matrix.
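A short script can illustrate both the cofactor-based evaluation of p_A(\lambda) and its similarity invariance; the matrices are arbitrary examples chosen for this sketch:

```python
def det(M):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def char_poly_at(A, lam):
    """Evaluate p_A(lambda) = det(lambda*I_n - A) at a scalar lambda."""
    n = len(A)
    return det([[(lam if i == j else 0) - A[i][j] for j in range(n)]
                for i in range(n)])

A = [[1, 2], [3, 4]]        # p_A(lambda) = lambda^2 - 5*lambda - 2
B = [[-2, -4], [3, 7]]      # B = P^{-1} A P for P = [[1, 1], [0, 1]]
samples = [char_poly_at(A, t) for t in (0, 1, 2)]
print(samples)              # [-2, -6, -8]
# Similar matrices have the same characteristic polynomial:
print(all(char_poly_at(A, t) == char_poly_at(B, t) for t in range(5)))  # True
```

Comparing values of a degree-2 polynomial at five points is enough to conclude the two polynomials coincide.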

Adjugate matrix

The adjugate matrix, also known as the classical adjoint, of an n \times n matrix B is defined as the transpose of its cofactor matrix. The cofactor matrix C has entries C_{ij} given by (-1)^{i+j} times the determinant of the submatrix of B obtained by removing the i-th row and j-th column. Thus, the (i,j)-entry of the adjugate \operatorname{adj}(B) is C_{ji}. A fundamental property of the adjugate is the classical adjugate identity, which states that B \cdot \operatorname{adj}(B) = \operatorname{adj}(B) \cdot B = \det(B) I_n, where I_n is the n \times n identity matrix. This relation holds for any square matrix B over a commutative ring and is derived from the Laplace (cofactor) expansion of the determinant along rows and columns. Consider the specific case where B = \lambda I_n - A for an n \times n matrix A and scalar \lambda. The entries of \operatorname{adj}(\lambda I_n - A) are polynomials in \lambda of degree at most n-1. This follows from the structure of the cofactor expansions, where each cofactor is the determinant of an (n-1) \times (n-1) submatrix of \lambda I_n - A and hence a polynomial of degree at most n-1. The characteristic polynomial of A, defined as p_A(\lambda) = \det(\lambda I_n - A), takes the form \lambda^n - (\operatorname{tr} A) \lambda^{n-1} + \cdots + (-1)^n \det(A), where \operatorname{tr} A is the trace of A, the sum of its diagonal entries. This arises directly from expanding \det(\lambda I_n - A) using properties of determinants, with the coefficient of \lambda^{n-1} being the negative trace. For computation, consider a 2 \times 2 matrix B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}. The adjugate is \operatorname{adj}(B) = \begin{pmatrix} b_{22} & -b_{12} \\ -b_{21} & b_{11} \end{pmatrix}, obtained by transposing the cofactor matrix whose entries are the signed 1 \times 1 minors. Verifying the identity, B \cdot \operatorname{adj}(B) = \begin{pmatrix} b_{11} b_{22} - b_{12} b_{21} & 0 \\ 0 & b_{11} b_{22} - b_{12} b_{21} \end{pmatrix} = \det(B) I_2.
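The cofactor construction and the identity B \cdot \operatorname{adj}(B) = \det(B) I_n can be checked mechanically; the 3×3 matrix below is an arbitrary example:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def adjugate(B):
    """Transpose of the cofactor matrix of B."""
    n = len(B)
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j+1:]
                                   for k, row in enumerate(B) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # transpose

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]   # arbitrary example matrix
d = det(B)                               # 25
left = mat_mul(B, adjugate(B))
print(left == [[d * (i == j) for j in range(3)] for i in range(3)])  # True
```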

Illustrative Examples

1 × 1 matrices

The Cayley–Hamilton theorem holds in its simplest form for 1 \times 1 matrices, which are scalar matrices over a field. Consider a 1 \times 1 matrix A = [a], where a is an element of the field. The characteristic polynomial of A is given by p(\lambda) = \det(\lambda I - A) = \lambda - a. Substituting the matrix A into this polynomial yields p(A) = A - aI = [a] - a[1] = [a - a] = [0], the zero matrix, verifying the theorem directly. This trivial verification illustrates that every scalar a satisfies the equation a - a = 0, which aligns with the theorem's assertion that a matrix satisfies its own characteristic equation. In this context, the 1 \times 1 case links the theorem to the basic properties of field elements, where the matrix structure reduces to scalar arithmetic.

2 × 2 matrices

Consider a general 2 \times 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. The characteristic polynomial of A is given by p(\lambda) = \det(\lambda I - A) = \lambda^2 - (a + d)\lambda + (ad - bc), where a + d is the trace of A and ad - bc is the determinant of A. By the Cayley–Hamilton theorem, substituting A into its characteristic polynomial yields the zero matrix: p(A) = A^2 - (\operatorname{tr} A) A + (\det A) I_2 = 0. To verify explicitly, first compute A^2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} a^2 + bc & ab + bd \\ ac + dc & bc + d^2 \end{pmatrix} = \begin{pmatrix} a^2 + bc & b(a + d) \\ c(a + d) & d^2 + bc \end{pmatrix}. Then, p(A) = \begin{pmatrix} a^2 + bc & b(a + d) \\ c(a + d) & d^2 + bc \end{pmatrix} - (a + d) \begin{pmatrix} a & b \\ c & d \end{pmatrix} + (ad - bc) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. The (1,1) entry is a^2 + bc - (a + d)a + (ad - bc) = a^2 + bc - a^2 - ad + ad - bc = 0. The (1,2) entry is b(a + d) - (a + d)b = 0. Similarly, the (2,1) and (2,2) entries simplify to zero, confirming p(A) is the zero matrix. For a numerical illustration, take A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. The trace is 1 + 4 = 5 and the determinant is 1 \cdot 4 - 2 \cdot 3 = -2, so p(\lambda) = \lambda^2 - 5\lambda - 2. Now, A^2 = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix}. Then, p(A) = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix} - 5 \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} - 2 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix} - \begin{pmatrix} 5 & 10 \\ 15 & 20 \end{pmatrix} - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, verifying the theorem.
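The numerical verification above can be transcribed directly into a few lines:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
A2 = mat_mul(A, A)                         # [[7, 10], [15, 22]]
tr, dt = 1 + 4, 1 * 4 - 2 * 3              # trace 5, determinant -2
# p(A) = A^2 - tr(A)*A + det(A)*I
pA = [[A2[i][j] - tr * A[i][j] + dt * (i == j) for j in range(2)]
      for i in range(2)]
print(pA)                                  # [[0, 0], [0, 0]]
```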

Proof Techniques

Direct algebraic proof

The direct algebraic proof of the Cayley–Hamilton theorem employs the adjugate matrix to establish a polynomial identity that, upon formal evaluation at the matrix itself, yields the desired result. Let A be an n \times n matrix over a commutative ring R with identity. The characteristic polynomial is defined as p(\lambda) = \det(\lambda I - A), a monic polynomial of degree n with coefficients in R. By the property of the adjugate matrix, the following identity holds for any scalar \lambda \in R: (\lambda I - A) \operatorname{adj}(\lambda I - A) = p(\lambda) I. The adjugate \operatorname{adj}(\lambda I - A) is itself a matrix whose entries are polynomials in \lambda of degree at most n-1, so it can be expressed as the matrix polynomial \operatorname{adj}(\lambda I - A) = \sum_{k=0}^{n-1} B_k \lambda^k, where each B_k is an n \times n matrix over R. Substituting this expansion into the identity gives a polynomial equation in \lambda that holds identically: (\lambda I - A) \sum_{k=0}^{n-1} B_k \lambda^k = p(\lambda) I. This equation can be viewed in the ring of matrix polynomials over R, where the indeterminate \lambda acts as a central scalar multiple of the identity (i.e., \lambda M = M \lambda for any matrix M). To evaluate at the matrix A, apply the natural evaluation homomorphism from this polynomial ring to the ring of n \times n matrices over R, which sends \lambda to A and preserves multiplication and addition. Under this homomorphism, the left side becomes (A I - A) \sum_{k=0}^{n-1} B_k A^k = O \cdot \sum_{k=0}^{n-1} B_k A^k = O, where O is the zero matrix, since A I - A = O. The right side evaluates to p(A) I = p(A). Thus, p(A) = O.
The formal substitution is valid because the evaluation homomorphism respects the structure of the polynomial ring: the central nature of \lambda I ensures that terms like \lambda I \cdot \sum B_k \lambda^k map to A \sum B_k A^k, without issues from non-commutativity of matrix multiplication, as the indeterminate commutes with all matrix coefficients in the polynomial expressions. This proof assumes R is commutative to ensure the characteristic polynomial has coefficients in R and the determinant is well-defined; it extends directly to fields (where R is a field) and more generally to integral domains, with the result holding in the matrix ring over R.
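For a 2×2 matrix the adjugate expansion has only two coefficient matrices, B_1 = I and B_0 = A - (\operatorname{tr} A) I, so the polynomial identity can be spot-checked at a few scalar values of \lambda (an illustration, not a proof):

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
tr, dt = 5, -2
I = [[1, 0], [0, 1]]
B1 = I                                                   # coefficient of lambda^1
B0 = [[A[i][j] - tr * I[i][j] for j in range(2)]
      for i in range(2)]                                 # B_0 = A - tr(A)*I
for lam in (0, 1, 7):
    L = [[lam * I[i][j] - A[i][j] for j in range(2)] for i in range(2)]
    adj = [[lam * B1[i][j] + B0[i][j] for j in range(2)] for i in range(2)]
    p = lam * lam - tr * lam + dt                        # p(lambda)
    # (lambda*I - A) * adj(lambda*I - A) == p(lambda) * I
    assert mat_mul(L, adj) == [[p * I[i][j] for j in range(2)] for i in range(2)]
print("identity holds at sampled lambda")
```

Since both sides are matrices of polynomials of degree 2, agreement at three points already forces the identity for this example.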

Proof using polynomial coefficients

One approach to proving the Cayley–Hamilton theorem utilizes the structure of the polynomial ring over the base ring and the adjugate matrix to establish the identity formally before substituting the matrix argument. Let R be a commutative ring with identity, and let A be an n \times n matrix with entries in R. The characteristic polynomial is defined as p(\lambda) = \det(\lambda I - A) \in R[\lambda]. Consider matrices over the polynomial ring R[\lambda]. The matrix \lambda I - A has entries in R[\lambda], and its adjugate \operatorname{adj}(\lambda I - A) is an n \times n matrix whose entries are polynomials in \lambda of degree at most n-1 with coefficients in R. By the fundamental property of the adjugate and the determinant, the identity (\lambda I - A) \operatorname{adj}(\lambda I - A) = p(\lambda) I holds in \operatorname{Mat}_n(R[\lambda]), where I is the n \times n identity matrix over R[\lambda]. To express this identity in terms of coefficients, write \operatorname{adj}(\lambda I - A) = \sum_{k=0}^{n-1} \lambda^k B_k, where each B_k \in \operatorname{Mat}_n(R). Each B_k is constructed from the entries of A via the cofactor expansion, making B_k a polynomial expression in A with scalar coefficients from R. Consequently, each B_k commutes with A, as both are elements of the commutative subring of \operatorname{Mat}_n(R) generated by A. Substituting into the left side of the identity yields (\lambda I - A) \sum_{k=0}^{n-1} \lambda^k B_k = \sum_{k=0}^{n-1} \lambda^{k+1} B_k - \sum_{k=0}^{n-1} \lambda^k A B_k, and the commutativity A B_k = B_k A ensures the expansion aligns with the scalar multiple p(\lambda) I on the right side, confirming the coefficient-wise equality in R[\lambda]. Now evaluate the identity at \lambda = A via the natural evaluation homomorphism \phi: R[\lambda] \to \operatorname{Mat}_n(R) defined by \phi(\lambda) = A and fixing elements of R (extended componentwise to matrices). This homomorphism is well-defined because polynomials in \lambda map to polynomials in A, and multiplication is preserved.
Applying \phi to the left side gives \phi\big((\lambda I - A) \operatorname{adj}(\lambda I - A)\big) = (A - A) \sum_{k=0}^{n-1} A^k B_k = 0, while the right side becomes p(A) I = p(A). Thus, p(A) = 0. The commutativity of the coefficients B_k with A ensures the substitution is valid without non-commutativity issues.

Proof via endomorphisms

The Cayley–Hamilton theorem can be formulated intrinsically for linear endomorphisms of finite-dimensional vector spaces, without reference to a specific basis. Let V be a finite-dimensional vector space over a field K, with \dim_K V = n < \infty, and let T: V \to V be a linear endomorphism. The characteristic polynomial of T is defined as p_T(\lambda) = \det(\lambda I_V - T) \in K[\lambda], where the determinant is computed with respect to any basis of V, as it is independent of the choice of basis. The theorem asserts that p_T(T) = 0, meaning the endomorphism obtained by evaluating the characteristic polynomial at T is the zero map on V. One approach to proving this relies on the relationship between the minimal and characteristic polynomials. The minimal polynomial m_T(\lambda) of T is the monic polynomial of least degree such that m_T(T) = 0, and it divides any polynomial that annihilates T. To show that p_T(\lambda) annihilates T, note that the coefficients of p_T(\lambda) are determined by the traces of powers of T via the Newton–Girard identities, which relate the elementary symmetric functions (the coefficients of the characteristic polynomial) to the power sums \operatorname{tr}(T^k) for k = 1, \dots, n. Specifically, these identities imply that p_T(T) satisfies the same linear recurrence as the powers of T up to degree n-1, ensuring annihilation since the space of endomorphisms generated by powers of T has dimension at most n. An alternative proof proceeds by considering the case where T is diagonalizable and then extending to the general case. If T is diagonalizable, there exists a basis of V consisting of eigenvectors with eigenvalues \lambda_1, \dots, \lambda_n \in K, so p_T(\lambda) = \prod_{i=1}^n (\lambda - \lambda_i). In this basis, T acts diagonally, and substituting into the polynomial yields zero on each eigenspace, hence p_T(T) = 0.
For the general case over an algebraically closed field, every T is similar to its Jordan canonical form, consisting of Jordan blocks J_k(\mu) where each block satisfies (J_k(\mu) - \mu I)^k = 0; since the characteristic polynomial of a block is (\lambda - \mu)^k for eigenvalue \mu, evaluating it at the block gives zero, and thus p_T(T) = 0 overall. A more abstract proof uses the exterior algebra to define the characteristic polynomial intrinsically. The top exterior power \bigwedge^n V is a one-dimensional K-vector space, and T induces an endomorphism \bigwedge^n T: \bigwedge^n V \to \bigwedge^n V whose scalar action is multiplication by \det(T); more generally, \bigwedge^n (\lambda \operatorname{id}_V - T) acts by p_T(\lambda). Extending scalars to the module K[\lambda] \otimes_K V, the endomorphism \lambda \operatorname{id} - T acts, and the adjugate derived from multilinear algebra satisfies \operatorname{adj}(\lambda I - T) \cdot (\lambda I - T) = p_T(\lambda) I; evaluating at \lambda = T in the appropriate quotient module yields annihilation.
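The diagonalizable case can be made concrete with a small symmetric matrix whose eigenvalues are known in closed form (an arbitrary choice for this sketch):

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1], [1, 2]]                 # eigenvalues 1 and 3, so p(t) = (t-1)(t-3)
F1 = [[A[i][j] - (i == j) for j in range(2)] for i in range(2)]      # A - I
F3 = [[A[i][j] - 3 * (i == j) for j in range(2)] for i in range(2)]  # A - 3I
print(mat_mul(F1, F3))               # [[0, 0], [0, 0]]: p(A) = 0

# Each factor annihilates the corresponding eigenspace:
v1, v3 = (1, -1), (1, 1)             # eigenvectors for eigenvalues 1 and 3
print([sum(F1[i][j] * v1[j] for j in range(2)) for i in range(2)])   # [0, 0]
print([sum(F3[i][j] * v3[j] for j in range(2)) for i in range(2)])   # [0, 0]
```

Because the eigenvectors span V, the product of the factors vanishes on all of V, mirroring the basis-free argument.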

Abstract algebraic proofs

In the abstract algebraic framework, the Cayley–Hamilton theorem extends to endomorphisms of finitely generated modules over a commutative ring R with identity. Let M be a finitely generated R-module and \varphi: M \to M an R-linear endomorphism. The theorem states that there exists a monic polynomial P \in R[x] of degree at most the minimal number of generators of M such that P(\varphi) = 0, the zero endomorphism on M. When M is free of rank n, say M = R^n, and \varphi is given by left multiplication by an n \times n matrix A with entries in R, this polynomial is precisely the characteristic polynomial \chi_A(x) = \det(xI_n - A). The characteristic polynomial arises naturally from the theory of determinantal ideals or Fitting ideals in commutative algebra. Consider M as an R[x]-module, where x acts via \varphi. Choose a free presentation R[x]^n \to R[x]^n \to M \to 0 in which the first map is represented by the matrix xI_n - A. The determinantal ideals of this presentation matrix are the ideals generated by its minors; in particular, the ideal generated by the full n \times n minor, which is \det(xI_n - A), is the principal ideal (\chi_A(x)) in R[x]. More generally, the Fitting ideals of M as an R[x]-module are defined from such presentations: for a presentation with r generators, the k-th Fitting ideal \mathrm{Fit}_k(M) is the ideal generated by all (r - k) \times (r - k) minors of the presentation matrix, independent of the choice of presentation. In this setup, \mathrm{Fit}_0(M) is generated by \chi_A(x). A key property is that the zeroth Fitting ideal annihilates the module: \mathrm{Fit}_0(M) \subseteq \mathrm{Ann}_{R[x]}(M), so every element of \mathrm{Fit}_0(M) acts as zero on M. Thus, \chi_A(x) annihilates M, implying \chi_A(\varphi) = 0 as an endomorphism, which is the Cayley–Hamilton theorem. This formulation interprets the Cayley–Hamilton theorem as the statement that the characteristic polynomial, evaluated in the commutative subring R[\varphi] \subseteq \mathrm{End}_R(M), annihilates M. To establish the Fitting ideal annihilation over arbitrary commutative rings, one employs localization. Localize the ring R at a prime ideal \mathfrak{p} \in \mathrm{Spec}(R), yielding the local ring R_\mathfrak{p} and the localized module M_\mathfrak{p}.
Over R_\mathfrak{p}, the endomorphism \varphi extends, and M_\mathfrak{p} is free (or projective) of rank n when M is locally free. The theorem holds over the residue field k(\mathfrak{p}) = R_\mathfrak{p}/\mathfrak{p} R_\mathfrak{p} by the classical case for vector spaces. Since the characteristic polynomial is monic, its annihilation on each M_\mathfrak{p} implies it annihilates M globally, as the result localizes faithfully and the monic condition ensures no denominator issues arise upon localization. This localization argument reduces the general case to fields and globalizes via the properties of monic polynomials. The theorem generalizes to projective modules, where the characteristic polynomial is defined similarly via a locally free presentation, though the proof requires additional care with locally free sheaves or resolutions. Early abstract treatments, building on Frobenius's work, laid the groundwork for these module-theoretic views over rings.

Combinatorial proof

The combinatorial proof of the Cayley–Hamilton theorem utilizes the Leibniz formula to expand the determinant and interprets the substitution of the matrix via a weighted digraph, demonstrating cancellation through a sign-reversing involution. Consider an n \times n matrix A = (a_{ij}) over a commutative ring. The characteristic polynomial is p(\lambda) = \det(\lambda I - A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n \left( \lambda \delta_{i,\sigma(i)} - a_{i,\sigma(i)} \right), where S_n is the symmetric group on n elements, \operatorname{sgn}(\sigma) is the sign of the permutation \sigma, and \delta_{i,j} is the Kronecker delta. Each product \prod_{i=1}^n \left( \lambda \delta_{i,\sigma(i)} - a_{i,\sigma(i)} \right) expands via the distributive law into a sum of 2^n terms, but many vanish: for any i with \sigma(i) \neq i, the factor \lambda \delta_{i,\sigma(i)} = 0, so choosing the \lambda term yields zero, leaving only the -a_{i,\sigma(i)} option; for fixed points i = \sigma(i), both choices are possible. Thus, the powers of \lambda arise from the number of fixed points where the \lambda term is selected, and the coefficients involve signed products over the non-fixed points, grouped by the number of such fixed points. To evaluate p(A), interpret A as the weighted adjacency matrix of the complete directed graph G on the vertex set \{1, \dots, n\}, with edge weight a_{ij} from i to j (and loops a_{ii} at vertices). The expansion of p(\lambda) translates to a generating function where powers of \lambda correspond to isolated vertices (treated as trivial fixed points). Substituting A for \lambda yields an expression for the entries of p(A) as signed sums over combinatorial objects covering the graph: for the (i,j)-entry [p(A)]_{ij}, sum over all pairs (P, C) where P is a directed path from i to j (possibly trivial if i = j) using distinct edges, and C is a set of vertex-disjoint directed cycles covering the remaining vertices (using the remaining edges), such that P \cup C uses exactly n edges in total.
The weight of such a pair is \operatorname{sgn}(P \cup C) \prod_e w(e), where w(e) is the weight of edge e, or equivalently (-1)^{|C|} \prod_e w(e), since the sign is determined by the parity of the number of cycles. This arises from multilinearity in the Leibniz expansion, where paths encode matrix powers and cycles encode the signed contributions from permutation cycles. The terms cancel completely via a weight-preserving, sign-reversing involution \iota on the set of such pairs (P, C). If P contains a directed cycle (a subpath that returns to an earlier vertex), \iota detaches this cycle, adding it to C (increasing |C| by 1 and reversing the sign); if P has no such cycle but some cycle in C meets P (or can be inserted at a vertex of P), \iota merges it into P (decreasing |C| by 1 and reversing the sign). Pairs fixed by \iota, those with no detachable cycle in P and no mergeable cycle in C, would have to consist of a simple path together with cycles that can be neither detached nor merged, but such configurations carry zero weight because they cannot use exactly n edges. Thus, all terms pair into equal-weight opposites that sum to zero, so [p(A)]_{ij} = 0 for all i, j, proving p(A) = 0. The powers of \lambda in the original expansion ensure the covering uses exactly n edges, matching the degree-n polynomial structure. For n = 2, label the vertices 1 and 2 and consider [p(A)]_{11} (the other entries follow similarly). The relevant pairs (P, C) include:
  • Trivial path at 1 (P = \emptyset), C = \{(2 \to 2)\}: weight (-1)^1 a_{22} = -a_{22}.
  • Loop path P = (1 \to 1), C = \emptyset: weight (+1) a_{11}.
  • Path P = (1 \to 2 \to 1), C = \emptyset: weight (+1) a_{12} a_{21}.
  • Trivial path at 1, C = \emptyset and an isolated 2, but this uses fewer than 2 edges, so invalid (zero contribution).
For this entry, the direct expansion reads [p(A)]_{11} = (A^2)_{11} - (\operatorname{tr} A) A_{11} + \det A = a_{11}^2 + a_{12} a_{21} - (a_{11} + a_{22}) a_{11} + (a_{11} a_{22} - a_{12} a_{21}), and the six monomials cancel in pairs: a_{11}^2 against -a_{11} a_{11}, a_{12} a_{21} against -a_{12} a_{21} (the path 1 \to 2 \to 1 against the determinant's 2-cycle term), and -a_{22} a_{11} against a_{11} a_{22} (the loop at 2 placed in C against the same loop absorbed into the determinant contribution). Each pair matches a configuration with its image under the involution \iota: detaching or reattaching the cycle through vertex 2, or moving the loop at 2 in or out of C, reverses the sign while preserving the weight. This explicit cancellation for n = 2 illustrates the general mechanism, in which the involution leaves no unpaired terms.
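The pairwise cancellation for the (1,1) entry can also be tabulated mechanically: the six signed monomials below come from expanding A^2 - (\operatorname{tr} A) A + (\det A) I for a generic 2×2 matrix, and summing signs per monomial shows every term meets a partner of opposite sign:

```python
from collections import Counter

# Signed monomials contributing to the (1,1) entry of
# p(A) = A^2 - (tr A)*A + (det A)*I for a generic 2x2 matrix A = (a_ij).
terms = [
    (("a11", "a11"), +1), (("a12", "a21"), +1),   # from (A^2)_{11}
    (("a11", "a11"), -1), (("a22", "a11"), -1),   # from -(a11 + a22) * a11
    (("a11", "a22"), +1), (("a12", "a21"), -1),   # from +(a11*a22 - a12*a21)
]
totals = Counter()
for monomial, sign in terms:
    totals[tuple(sorted(monomial))] += sign       # commutative: sort factors
print(all(v == 0 for v in totals.values()))       # True: every term cancels in a pair
```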

Practical Applications

Inverse matrices and determinants

The Cayley–Hamilton theorem provides a method to express the inverse of an invertible matrix A as a polynomial in A itself, leveraging the fact that A satisfies its own characteristic equation. For an n \times n invertible matrix A with characteristic polynomial p(\lambda) = \det(\lambda I - A) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0, where a_0 = (-1)^n \det A, the theorem states that p(A) = 0. This equation can be rearranged as A^n + a_{n-1} A^{n-1} + \cdots + a_1 A + a_0 I = 0. Multiplying both sides on the left by A^{-1} yields A^{n-1} + a_{n-1} A^{n-2} + \cdots + a_1 I + a_0 A^{-1} = 0, so A^{-1} = -\frac{1}{a_0} (A^{n-1} + a_{n-1} A^{n-2} + \cdots + a_1 I). Since a_0 = (-1)^n \det A, this simplifies to A^{-1} = \frac{(-1)^{n+1}}{\det A} (A^{n-1} + a_{n-1} A^{n-2} + \cdots + a_1 I). This polynomial expression for the inverse directly connects to the classical formula A^{-1} = \frac{1}{\det A} \operatorname{adj}(A), where \operatorname{adj}(A) is the adjugate matrix (the transpose of the cofactor matrix of A). The adjugate satisfies \operatorname{adj}(A) A = A \operatorname{adj}(A) = (\det A) I, and its entries are polynomials in the entries of A. The Cayley–Hamilton-derived form reveals that \operatorname{adj}(A) = (-1)^{n+1} (A^{n-1} + a_{n-1} A^{n-2} + \cdots + a_1 I), linking the constant term a_0 of the characteristic polynomial explicitly to the determinant via \det A = (-1)^n a_0. This relation underscores how the theorem bridges eigenvalue information (via the characteristic polynomial) with direct computational tools like the adjugate for inversion. For a concrete illustration, consider a 2 \times 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} with \det A = ad - bc \neq 0. The characteristic polynomial is p(\lambda) = \lambda^2 - (\operatorname{tr} A) \lambda + \det A, where \operatorname{tr} A = a + d. By the Cayley–Hamilton theorem, A^2 - (\operatorname{tr} A) A + (\det A) I = 0. Rearranging gives A (A - (\operatorname{tr} A) I) = - (\det A) I, so A^{-1} = -\frac{1}{\det A} (A - (\operatorname{tr} A) I) = \frac{(\operatorname{tr} A) I - A}{\det A}.
This matches the standard adjugate formula, as \operatorname{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = (\operatorname{tr} A) I - A. For example, if A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, then \operatorname{tr} A = 5, \det A = -2, and A^{-1} = \frac{1}{-2} \left( \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} - \begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix} \right) = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix}, confirming the approach.
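The 2×2 inversion formula can be checked in exact arithmetic:

```python
from fractions import Fraction

A = [[1, 2], [3, 4]]
tr = A[0][0] + A[1][1]                        # 5
dt = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # -2
# A^{-1} = ((tr A)*I - A) / det A, from A^2 - (tr A)*A + (det A)*I = 0
inv = [[Fraction(tr * (i == j) - A[i][j], dt) for j in range(2)]
       for i in range(2)]                     # [[-2, 1], [3/2, -1/2]]
check = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(check == [[1, 0], [0, 1]])              # True: A * A^{-1} = I
```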

Powers of matrices

The Cayley–Hamilton theorem implies that for an n \times n matrix A over a field, the characteristic polynomial p(\lambda) = \det(\lambda I - A) = \lambda^n + c_{n-1} \lambda^{n-1} + \cdots + c_1 \lambda + c_0 satisfies p(A) = 0. This relation directly yields a reduction formula for higher powers of A: multiplying through by A^{k-n} for k \geq n gives A^k + c_{n-1} A^{k-1} + \cdots + c_0 A^{k-n} = 0, or equivalently, A^k = -\sum_{j=0}^{n-1} c_j A^{k-n+j}. Thus, any power A^k with k \geq n can be expressed as a linear combination of \{I, A, \dots, A^{n-1}\}, a spanning set for the algebra generated by A. This reduction is fundamental for simplifying computations involving matrix powers. A key consequence is that sequences derived from powers of A obey linear recurrences dictated by the characteristic polynomial coefficients. In particular, the sequence u_m = \operatorname{tr}(A^m) for m \geq 0 satisfies the linear recurrence u_m + c_{n-1} u_{m-1} + \cdots + c_0 u_{m-n} = 0 for m \geq n, where the initial terms u_0 = n (since \operatorname{tr}(I) = n) and u_m = \operatorname{tr}(A^m) for 1 \leq m < n are computed directly. Here, the coefficients c_j are the signed elementary symmetric functions of the eigenvalues of A, ensuring the recurrence captures the spectral properties without explicit diagonalization. This property extends to other sequences, such as the entries of A^m, facilitating analysis in dynamical systems. The reduction formula enables efficient computation of A^k for large k by avoiding full matrix exponentiation, which typically requires O(n^3 \log k) operations via repeated squaring. Instead, one expresses A^k = \sum_{j=0}^{n-1} \alpha_j(k) A^j, where the scalar coefficients \alpha_j(k) satisfy a linear recurrence of order at most n derived from the characteristic polynomial. These coefficients can be computed iteratively in O(n^2 k) time or faster using matrix methods on the companion matrix, reducing the overall complexity for structured matrices or when only specific entries are needed. This approach is particularly advantageous in numerical simulations where repeated powers are required.
An illustrative example is the companion matrix of the Fibonacci recurrence, A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}, whose characteristic polynomial is \lambda^2 - \lambda - 1. By the Cayley–Hamilton theorem, A^2 = A + I. Higher powers follow the pattern A^k = F_k A + F_{k-1} I, where F_k is the k-th Fibonacci number (with F_0 = 0, F_1 = 1). For instance, A^3 = 2A + I and A^4 = 3A + 2I, confirming the reduction to linear combinations of I and A. This demonstrates how the theorem links matrix powers to scalar recurrences, with applications in sequence generation.
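The Fibonacci reduction can be scripted by tracking only the coefficient pair (x, y) with A^k = xA + yI, as a sketch:

```python
def power_coeffs(k):
    """Return (x, y) with A^k = x*A + y*I for A = [[1, 1], [1, 0]],
    using the Cayley-Hamilton relation A^2 = A + I to reduce each power."""
    x, y = 0, 1                      # A^0 = 0*A + 1*I
    for _ in range(k):
        x, y = x + y, x              # (x*A + y*I)*A = x*(A + I) + y*A
    return x, y

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [1, 0]]
M = [[1, 0], [0, 1]]
for _ in range(10):
    M = mat_mul(M, A)                # direct computation of A^10
x, y = power_coeffs(10)              # x = F_10 = 55, y = F_9 = 34
print((x, y))                        # (55, 34)
print(M == [[x + y, x], [x, y]])     # True: A^10 = 55*A + 34*I
```

Only two scalars are updated per step, instead of a full matrix product.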

Matrix functions

The Cayley–Hamilton theorem enables the definition and computation of matrix functions for an analytic function f(\lambda) = \sum_{k=0}^\infty b_k \lambda^k applied to a square matrix A \in \mathbb{C}^{n \times n}, where f(A) = \sum_{k=0}^\infty b_k A^k. Since the theorem states that A satisfies its characteristic equation p_A(\lambda) = \det(\lambda I - A) = 0 when \lambda is replaced by A, higher powers A^k for k \geq n can be expressed as linear combinations of \{I, A, \dots, A^{n-1}\}. This reduces the infinite power series to a finite polynomial expression f(A) = \sum_{k=0}^{n-1} d_k A^k, where the coefficients d_k are determined by solving a system derived from the characteristic polynomial or via iterative methods that leverage the theorem's recurrence relations. A prominent example is the matrix exponential e^A = \sum_{k=0}^\infty \frac{A^k}{k!}, which arises in solutions to linear differential equations \dot{x} = Ax. The Cayley–Hamilton theorem truncates this series by providing a linear recurrence for the powers of A, allowing e^A to be computed as a polynomial in A of degree at most n-1. For instance, if the characteristic polynomial is p_A(\lambda) = \lambda^n + c_{n-1} \lambda^{n-1} + \cdots + c_0, then A^n = -c_{n-1} A^{n-1} - \cdots - c_0 I, and subsequent powers follow recursively, enabling efficient numerical evaluation without infinite summation. This approach is particularly useful for control systems and linear differential equations, where the exponential forms the state-transition matrix. Because f(A) lies in the commutative algebra generated by A, it commutes with A and with every polynomial in A, ensuring consistency across the matrix's eigenspaces and generalized eigenspaces in the Jordan decomposition. As a numerical illustration, consider the 2 \times 2 skew-symmetric matrix S = \theta \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, whose characteristic polynomial is \lambda^2 + \theta^2, so S^2 = -\theta^2 I by the theorem.
The exponential e^S = \sum_{k=0}^\infty \frac{S^k}{k!} reduces via even and odd powers: even terms yield \cos \theta \cdot I and odd terms yield \frac{\sin \theta}{\theta} S (for \theta \neq 0), resulting in the e^S = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}. This demonstrates the theorem's role in deriving closed-form expressions for functions on structured matrices.
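The closed form e^S = \cos\theta \cdot I + \frac{\sin\theta}{\theta} S can be checked numerically against the truncated power series; this sketch uses hand-rolled 2x2 helpers rather than any library routine:

```python
import math

# Evaluate e^S for S = theta * [[0, -1], [1, 0]] using the Cayley-Hamilton
# reduction S^2 = -theta^2 I, and compare with the truncated Taylor series.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def expm_series(S, terms=30):
    """Truncated Taylor series sum_{k < terms} S^k / k!."""
    I = [[1.0, 0.0], [0.0, 1.0]]
    total, power = I, I
    for k in range(1, terms):
        power = mat_mul(power, S)
        total = mat_add(total, mat_scale(1.0 / math.factorial(k), power))
    return total

theta = 0.7
S = [[0.0, -theta], [theta, 0.0]]

# Closed form from Cayley-Hamilton: cos(theta) I + (sin(theta)/theta) S
closed = mat_add(mat_scale(math.cos(theta), [[1.0, 0.0], [0.0, 1.0]]),
                 mat_scale(math.sin(theta) / theta, S))

series = expm_series(S)
assert all(abs(closed[i][j] - series[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(closed)  # numerically the 2x2 rotation matrix by angle theta
```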

Connections to algebraic number theory

The companion matrix of a monic polynomial p(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_0 over the integers is the n \times n matrix C = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{pmatrix}, and by direct computation, p(C) = 0. This construction links the Cayley–Hamilton theorem to algebraic integers, since the companion matrix represents multiplication by a root \alpha of p in the basis \{1, \alpha, \dots, \alpha^{n-1}\} of the extension field \mathbb{Q}(\alpha)/\mathbb{Q}. In algebraic number theory, this regular representation embeds the ring of integers \mathcal{O}_K of a number field K into matrices over \mathbb{Z}. For an algebraic integer \alpha with minimal polynomial m(\lambda) of degree k = [\mathbb{Q}(\alpha):\mathbb{Q}], consider an extension L/\mathbb{Q}(\alpha) of degree d, so [L:\mathbb{Q}] = dk. The matrix of multiplication by \alpha on L, viewed as a \mathbb{Q}-vector space with respect to a suitable basis, has characteristic polynomial m(\lambda)^d. By the Cayley–Hamilton theorem, this matrix is annihilated by m(\lambda)^d, consistent with \alpha satisfying m(\alpha) = 0. In the simple case where L = \mathbb{Q}(\alpha), the characteristic polynomial coincides with the minimal polynomial. These matrix representations facilitate computations of arithmetic invariants in number fields. The trace \operatorname{Tr}_{L/\mathbb{Q}}(\alpha) is the trace of the multiplication matrix, and the norm N_{L/\mathbb{Q}}(\alpha) is its determinant, both of which are integers when \alpha is an algebraic integer. Moreover, the Cayley–Hamilton theorem aids in studying factorization in rings over \mathcal{O}_K, where the characteristic polynomial provides relations that mirror ideal factorizations in the base ring. A concrete example arises in the quadratic field K = \mathbb{Q}(\sqrt{2}), with ring of integers \mathcal{O}_K = \mathbb{Z}[\sqrt{2}] and basis \{1, \sqrt{2}\}.
Multiplication by \alpha = \sqrt{2}, whose minimal polynomial is m(\lambda) = \lambda^2 - 2, yields the matrix \begin{pmatrix} 0 & 2 \\ 1 & 0 \end{pmatrix} with respect to this basis; its characteristic polynomial is \lambda^2 - 2, and the Cayley–Hamilton theorem confirms that substituting the matrix for \lambda gives the zero matrix. The norm N_{K/\mathbb{Q}}(\sqrt{2}) = -2 and trace \operatorname{Tr}_{K/\mathbb{Q}}(\sqrt{2}) = 0 follow directly from the determinant and trace of this matrix.
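A short sketch makes the \sqrt{2} computation concrete; the representation below follows the basis \{1, \sqrt{2}\} described above:

```python
# Multiplication-by-sqrt(2) matrix on Z[sqrt(2)] with basis {1, sqrt(2)}:
# sqrt(2) * 1 = sqrt(2) -> column (0, 1); sqrt(2) * sqrt(2) = 2 -> column (2, 0).
M = [[0, 2], [1, 0]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Cayley-Hamilton: M^2 - 2I = 0, since the characteristic polynomial is x^2 - 2.
M2 = mat_mul(M, M)
assert M2 == [[2, 0], [0, 2]]

# Arithmetic invariants read off the representation:
trace = M[0][0] + M[1][1]                      # Tr(sqrt(2)) = 0
norm = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # N(sqrt(2)) = det(M) = -2
print(trace, norm)  # 0 -2
```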

Extensions and Generalizations

Generalizations to non-commutative settings

The Cayley–Hamilton theorem extends to matrices over division rings, such as the quaternions, when the characteristic polynomial is defined using the Dieudonné determinant, a generalization of the determinant that takes values in the quotient of the multiplicative group of the division ring by its commutator subgroup. With this definition, every square matrix over a division ring satisfies its own characteristic polynomial, mirroring the commutative case but accounting for non-commutativity through row and column reductions that preserve the determinant's properties. This formulation works because modules over division rings are free, allowing proofs via adjugate and determinant identities adapted to the non-commutative setting. In the specific case of quaternionic matrices, which formed the original context for William Rowan Hamilton's investigations into linear operators on quaternion spaces, the theorem applies with careful handling of left or right eigenvalues due to non-commutativity. Hamilton demonstrated that the inverse of such an operator can be expressed as a polynomial in the operator itself, prefiguring the annihilating property of the characteristic polynomial. For matrices up to order 3, explicit characteristic functions whose roots are the left eigenvalues satisfy the Cayley–Hamilton relation, and the result extends to higher dimensions via the Dieudonné determinant. However, the theorem does not hold in full generality over arbitrary non-commutative rings lacking conditions like centrality of the scalars; counterexamples arise where a matrix fails to satisfy a monic polynomial of degree equal to its size, as the standard characteristic polynomial cannot be unambiguously defined. In such rings, matrices may only satisfy weaker identities, such as those derived from universal polynomial relations, but not the precise Cayley–Hamilton form. Paré and Schelter showed that every n \times n matrix over an arbitrary ring satisfies some monic polynomial identity, but it need not coincide with a characteristic polynomial and its degree generally exceeds n.
Connections to the Jacobian conjecture appear in higher-dimensional polynomial maps, where the Cayley–Hamilton theorem aids in proving ideal membership results for polynomial maps of degree d, providing expressions for the relevant elements in terms of the generators of the associated ideal that support injectivity conditions under constant-Jacobian assumptions.

Applications to linear operators

In the context of bounded linear operators on a Hilbert space, the Cayley–Hamilton theorem admits extensions for specific classes where an analogue of the characteristic polynomial can be formulated using the spectral theorem and resolvent operators. For self-adjoint trace-class operators, which are compact with summable eigenvalues, the spectral theorem decomposes the operator T via a spectral measure, enabling a generalized annihilating function. A related analogue holds for closed self-adjoint operators with trace-class resolvent, where the resolvent's trace-class nature allows construction of a similar annihilating expression. These formulations provide tools for functional calculus and resolvent estimates in infinite-dimensional settings. Finite-rank operators and their perturbations offer another avenue for applying the theorem through finite-dimensional approximations. A finite-rank operator K of rank n on a Hilbert space maps into an n-dimensional subspace, and the algebra generated by K is finite-dimensional, so K is annihilated by a polynomial of degree at most n + 1, directly analogous to the Cayley–Hamilton theorem. Compact operators, being norm limits of finite-rank operators, can thus be approximated so that the annihilating polynomials of the approximations converge appropriately, facilitating numerical and theoretical analysis of spectral properties. In the study of differential operators, the Cayley–Hamilton theorem applies via the companion (Frobenius) matrix associated with linear ordinary differential equations (ODEs). For an n-th order linear homogeneous ODE y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_0 y = 0, the companion matrix A has characteristic polynomial p(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_0, and by the Cayley–Hamilton theorem, p(A) = 0. This matrix arises in the state-space realization \dot{x} = A x, where the state transition semigroup satisfies the same characteristic equation, enabling explicit solutions and stability analysis through matrix exponentials reduced via the theorem. Such representations extend to Banach algebras, where the resolvent of the companion operator admits a series expansion tied to the annihilating polynomial.
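The ODE-to-companion-matrix passage can be sketched concretely; the sample equation y'' + 3y' + 2y = 0 below is an illustrative choice, not one from the text:

```python
# Companion matrix for y'' + 3y' + 2y = 0 (coefficients a1 = 3, a0 = 2),
# checking p(A) = A^2 + 3A + 2I = 0 as Cayley-Hamilton predicts.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# State-space form: x = (y, y'), x' = A x.
A = [[0, 1],
     [-2, -3]]
I = [[1, 0], [0, 1]]

A2 = mat_mul(A, A)
pA = [[A2[i][j] + 3 * A[i][j] + 2 * I[i][j] for j in range(2)]
      for i in range(2)]
assert pA == [[0, 0], [0, 0]]
print(pA)  # [[0, 0], [0, 0]]
```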
An illustration arises with the unilateral shift S on the sequence space \ell^2(\mathbb{N}), defined by S(e_k) = e_{k+1} for the standard orthonormal basis \{e_k\}. This operator does not satisfy any non-constant polynomial equation, as its spectrum is the closed unit disk and the powers S^k remain linearly independent. However, the n \times n truncated shift S_n, a nilpotent Jordan block with 1's on the superdiagonal, has characteristic polynomial \lambda^n and satisfies S_n^n = 0 by the Cayley–Hamilton theorem. These finite truncations approximate S on finite-dimensional subspaces, demonstrating how the theorem aids in analyzing infinite-dimensional operators through dimensional reduction.
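The nilpotency of the truncated shift is easy to verify directly; a minimal sketch for n = 4:

```python
# The n x n truncated shift (nilpotent Jordan block) satisfies S_n^n = 0.
n = 4

def mat_mul(X, Y):
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

# 1's on the superdiagonal, zeros elsewhere.
S = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]

P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
powers = []
for _ in range(n):
    P = mat_mul(P, S)
    powers.append(P)

# S^(n-1) is nonzero (a single 1 in the top-right corner), but S^n = 0,
# matching the characteristic polynomial lambda^n.
assert powers[n - 2][0][n - 1] == 1
assert all(powers[n - 1][i][j] == 0 for i in range(n) for j in range(n))
```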

Remarks on limitations and variants

The Cayley–Hamilton theorem is fundamentally limited to square matrices over commutative rings with identity, as the characteristic polynomial is defined only for endomorphisms of free modules of finite rank; for non-square matrices, no analogous characteristic polynomial exists, though extensions have been proposed in generalized linear algebra. Similarly, the theorem requires the base ring to be commutative: over non-commutative rings, matrices may fail to satisfy the standard characteristic equation without modifications, and related complications arise in semirings and Lie algebras. The theorem applies specifically to the monic characteristic polynomial; a matrix annihilates a given polynomial only if that polynomial is a multiple of the minimal polynomial. A notable refinement is due to Georg Frobenius, who proved the general case of the theorem and established that the minimal polynomial of a matrix divides its characteristic polynomial, a version particularly relevant for singular matrices, where the constant term vanishes but the relation still holds. Generalizations extend the theorem to multi-linear settings, such as the generalized Cayley–Hamilton theorem for isotropic tensor fields, where tensor contractions satisfy analogous characteristic-like relations in applications such as continuum mechanics. A common misconception arises from a fallacious proof that substitutes the matrix A directly into the determinant expression for the characteristic polynomial, yielding \det(AI - A) = \det(0) = 0; this fails because the determinant does not define a polynomial identity permitting such matrix substitution, as the characteristic polynomial is a scalar polynomial in a single indeterminate, whereas the substitution would require treating the entries as non-commuting matrix variables. The full Jacobian conjecture remains an open problem, with connections to the Cayley–Hamilton theorem appearing in approaches that use it to express formal inverses of polynomial maps in terms of Jacobian ideals, potentially implying that maps with nonzero constant Jacobian determinant admit polynomial inverses.
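Frobenius's refinement is easy to see on a small example; the matrix A = 2I below is an illustrative choice where the minimal polynomial is a proper divisor of the characteristic polynomial:

```python
# For A = 2I (2x2), the characteristic polynomial is (x - 2)^2 but the
# minimal polynomial is just (x - 2), which divides it.

A = [[2, 0], [0, 2]]
I = [[1, 0], [0, 1]]

# Minimal polynomial m(x) = x - 2 already annihilates A:
mA = [[A[i][j] - 2 * I[i][j] for j in range(2)] for i in range(2)]
assert mA == [[0, 0], [0, 0]]

# The characteristic polynomial p(x) = (x - 2)^2 = x^2 - 4x + 4 then
# annihilates A automatically, since p is a multiple of m:
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
pA = [[A2[i][j] - 4 * A[i][j] + 4 * I[i][j] for j in range(2)]
      for i in range(2)]
assert pA == [[0, 0], [0, 0]]
```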
