
Idempotent matrix

In linear algebra, an idempotent matrix is a square matrix A that satisfies the equation A^2 = A, meaning it remains unchanged when multiplied by itself. This property makes it a periodic matrix of period 1 and is fundamental in various applications, such as representing projections onto vector subspaces. Idempotent matrices exhibit several key properties that highlight their structure and utility. For instance, the trace of an idempotent matrix equals its rank, providing a direct link between its diagonal sum and the dimension of its image. Its eigenvalues are restricted to 0 or 1, with the multiplicity of 1 equal to the rank. Moreover, the only nonsingular idempotent matrix is the identity matrix, and for any idempotent A, the matrix I - A is also idempotent, where I is the identity. These characteristics make idempotent matrices essential in statistical modeling, such as in regression analysis, where projection matrices are idempotent, and in abstract algebra for studying rings and modules.

Fundamentals

Definition

In linear algebra, matrices are rectangular arrays of elements from a field, such as the real or complex numbers, and square matrices are those with an equal number of rows and columns. Matrix multiplication combines two such arrays to produce another matrix, where each entry of the result is computed as a row-by-column dot product of elements from the input matrices. A square matrix A over a field is called idempotent if it satisfies the equation A^2 = A, meaning that multiplying the matrix by itself yields the original matrix. Equivalently, this condition can be expressed as A(A - I) = 0, where I is the identity matrix of the same size and 0 is the zero matrix, the matrix with all entries equal to zero. The notion generalizes beyond fields to matrices over commutative rings with identity: an element e of a ring is idempotent if e^2 = e, and a matrix A with entries in such a ring is idempotent if A^2 = A. The term "idempotent" originates in algebra, deriving from the Latin idem ("the same") and potens ("having power"), referring to elements unchanged under squaring; it was first introduced in this sense by Benjamin Peirce in 1870. Its application to matrices appears in early 20th-century texts, such as A. A. Albert's Modern Higher Algebra (1938), which defines a matrix E as idempotent if E^2 = E.

Characterization

A square matrix A over a field is idempotent if and only if its minimal polynomial divides x(x-1). This follows directly from the defining relation A^2 = A, which implies that A is annihilated by the polynomial x^2 - x = x(x-1), and the minimal polynomial is the monic polynomial of least degree with this property. Another equivalent characterization is that the image of A, denoted \operatorname{im}(A), equals the set of fixed points \{x \mid Ax = x\}. To see this, note that any x decomposes as x = Ax + (x - Ax), where Ax \in \operatorname{im}(A) and A(x - Ax) = Ax - A^2x = 0, so x - Ax \in \ker(A); moreover, on \operatorname{im}(A), A acts as the identity, since Ay = A(Az) = A^2z = Az = y for y = Az. The eigenvalues of an idempotent matrix A are necessarily 0 or 1. To prove this, suppose \lambda is an eigenvalue with eigenvector v \neq 0, so Av = \lambda v. Then A^2 v = A(\lambda v) = \lambda Av = \lambda^2 v, but A^2 = A implies \lambda v = \lambda^2 v, hence \lambda(\lambda - 1)v = 0, so \lambda = 0 or \lambda = 1. This argument relies only on the idempotence condition and holds over any field. Every idempotent matrix is diagonalizable: its minimal polynomial divides x(x-1), whose roots 0 and 1 are distinct in every field, so the Jordan canonical form (over an algebraically closed extension) consists solely of 1×1 Jordan blocks with eigenvalues 0 or 1. In characteristic 2 the factorization may equally be written x(x+1), since 1 = -1 there, but the roots 0 and 1 remain distinct and the conclusion is unchanged.

Examples

Basic Examples

The zero matrix of order n \times n provides a trivial example of an idempotent matrix, as its square equals itself: O^2 = O. The identity matrix I_n is the canonical full-rank idempotent matrix, satisfying I_n^2 = I_n. Any diagonal matrix whose diagonal entries are restricted to 0 or 1 is idempotent, since squaring such a matrix yields the same diagonal entries (as 0^2 = 0 and 1^2 = 1) with the off-diagonal entries remaining zero. For instance, the 3 \times 3 matrix \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} is idempotent. A basic non-diagonal example is the 2 \times 2 matrix A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}. To verify, compute the square: A^2 = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 \cdot 1 + 1 \cdot 0 & 1 \cdot 1 + 1 \cdot 0 \\ 0 \cdot 1 + 0 \cdot 0 & 0 \cdot 1 + 0 \cdot 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} = A. Thus, A is idempotent. Such examples frequently correspond to linear projections onto subspaces, illustrating the geometric intuition behind idempotence.

Real 2×2 Case

Over the real numbers, every 2×2 idempotent matrix is similar to one of the following diagonal matrices: the zero matrix \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, the rank-1 matrix \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, or the identity matrix \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. This follows from the fact that idempotent matrices are diagonalizable, as their minimal polynomial divides x(x-1), which has distinct linear factors. For a general 2×2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} satisfying A^2 = A, the possible cases are distinguished by rank. The zero matrix has rank 0 and trace 0. The identity matrix has rank 2 and trace 2. Non-trivial idempotent matrices have rank 1 and trace 1 (equal to the rank); there are no other cases. For rank-1 matrices, the entries satisfy a + d = 1 and bc = a(1 - a), ensuring \det(A) = ad - bc = 0, and the eigenvalues are 0 and 1. Geometrically, a rank-1 idempotent matrix represents an oblique projection onto a 1-dimensional subspace (a line through the origin) of \mathbb{R}^2: points on the line are fixed, and the complementary direction is mapped to zero. A concrete example is A = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}, which satisfies A^2 = \begin{pmatrix} 0.5 \cdot 0.5 + 0.5 \cdot 0.5 & 0.5 \cdot 0.5 + 0.5 \cdot 0.5 \\ 0.5 \cdot 0.5 + 0.5 \cdot 0.5 & 0.5 \cdot 0.5 + 0.5 \cdot 0.5 \end{pmatrix} = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix} = A. Being symmetric, this matrix projects orthogonally onto the span of (1,1).

Properties

Algebraic Properties

An idempotent matrix A satisfies A^2 = A, and by induction higher powers satisfy A^k = A for all integers k \geq 1. The zeroth power is defined as A^0 = I, the identity matrix of appropriate dimension. Idempotent matrices are singular except in the case A = I. This follows algebraically from the determinant equation \det(A) = \det(A^2) = [\det(A)]^2, which implies \det(A) [\det(A) - 1] = 0, so \det(A) = 0 or 1. If \det(A) = 1, then A is invertible, and multiplying the idempotence relation by A^{-1} yields A = I. The product of two idempotent matrices A and B is idempotent if A and B commute, that is, if AB = BA. In this case, (AB)^2 = ABAB = A(BA)B = A(AB)B = A^2 B^2 = AB. However, non-commuting idempotent matrices may yield a product that is not idempotent; for instance, consider A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}. Both are idempotent, but AB = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} and (AB)^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \neq AB. An idempotent matrix has no two-sided inverse unless it is the identity matrix, owing to its singularity in all other cases. A symmetric idempotent matrix, which represents an orthogonal projection, is its own Moore-Penrose pseudoinverse. Since the minimal polynomial of an idempotent matrix A divides x(x-1), evaluating any polynomial p(x) at A reduces to p(A) = p(0) I + [p(1) - p(0)] A. This relation holds because all higher powers of A collapse to A under the idempotence condition.

Spectral Properties

An idempotent matrix A satisfies A^2 = A, and its eigenvalues are restricted to the set \{0, 1\}. This follows from the fact that if \lambda is an eigenvalue with eigenvector v \neq 0, then A^2 v = A v implies \lambda^2 v = \lambda v, so \lambda(\lambda - 1) = 0. The algebraic multiplicity of the eigenvalue 1 equals the dimension of its eigenspace, which consists of the fixed points of A. Every idempotent matrix is diagonalizable: A is similar to a diagonal matrix with 1's and 0's on the diagonal, where the number of 1's equals the multiplicity of the eigenvalue 1. This diagonalizability arises because the minimal polynomial of A divides x(x-1), which splits into distinct linear factors and thus has no repeated roots. The eigenspace corresponding to the eigenvalue 0 is precisely the kernel of A, \ker(A), while the eigenspace for the eigenvalue 1 is the image of A, \operatorname{im}(A): for any vector v, Av = v if and only if v \in \operatorname{im}(A), confirming that \operatorname{im}(A) is exactly the fixed-point set. Moreover, the underlying vector space decomposes as the direct sum \mathbb{R}^n = \operatorname{im}(A) \oplus \ker(A), so every vector can be uniquely expressed as a sum of components from these eigenspaces.

Trace and Rank Relations

For an idempotent matrix A \in \mathbb{C}^{n \times n}, the trace \operatorname{tr}(A) equals the rank \operatorname{rank}(A). This relation arises because the eigenvalues of A are restricted to 0 or 1: the trace is the sum of the eigenvalues, which counts the multiplicity of the eigenvalue 1, while the rank is the dimension of the image of A, which coincides with that multiplicity. By the rank-nullity theorem, \operatorname{rank}(A) + \operatorname{nullity}(A) = n, where n is the matrix dimension. For an idempotent matrix, the nullity is the dimension of the kernel, which equals the multiplicity of the eigenvalue 0, so \operatorname{nullity}(A) = n - \operatorname{tr}(A). The determinant of an idempotent matrix satisfies \det(A) = 0 if \operatorname{rank}(A) < n, since at least one eigenvalue is 0, and \det(A) = 1 only for A = I_n, the identity matrix, where all eigenvalues are 1. To see the trace-rank relation concretely, consider the spectral decomposition of A (idempotent matrices are always diagonalizable, since the minimal polynomial x(x-1) has distinct roots): A = PDP^{-1} where D = \operatorname{diag}(\lambda_1, \dots, \lambda_n) with each \lambda_i \in \{0, 1\}, so \operatorname{tr}(A) = \sum \lambda_i, the number of 1's. The rank equals the number of nonzero eigenvalues, hence \operatorname{tr}(A) = \operatorname{rank}(A). The eigenvalues are 0 or 1 because if Ax = \lambda x with x \neq 0, then A^2 x = \lambda^2 x = \lambda x, so \lambda^2 = \lambda, yielding \lambda(\lambda - 1) = 0. For example, a rank-k idempotent matrix has trace k: the diagonal matrix \operatorname{diag}(1, \dots, 1, 0, \dots, 0) with k ones is idempotent, has trace k, and has rank k, and any rank-k idempotent matrix is similar to this form, which preserves both trace and rank.

Advanced Relations

Connections to Projections

In linear algebra, an idempotent matrix A represents the matrix of a linear projection operator T on a vector space V, where T^2 = T, meaning T projects V onto its image \operatorname{im}(T) along its kernel \ker(T), with the direct sum decomposition V = \operatorname{im}(T) \oplus \ker(T). This characterization holds because for any vector v \in V, T(v) lies in \operatorname{im}(T), and applying T again leaves it unchanged, while vectors in \ker(T) are mapped to zero. Projections represented by idempotent matrices can be oblique or orthogonal. An oblique projection occurs when \ker(T) is not the orthogonal complement of \operatorname{im}(T), projecting vectors onto the image in a direction parallel to the kernel without preserving angles. In contrast, the projection is orthogonal if A is symmetric (A = A^T), ensuring that \ker(T) is the orthogonal complement of \operatorname{im}(T) with respect to the standard inner product and that \langle Tx, y \rangle = \langle x, Ty \rangle for all vectors x and y. Orthogonal projections minimize the Euclidean distance to the subspace, a property central to least squares approximation. Any linear projection on a finite-dimensional vector space can be represented by an idempotent matrix in a suitable basis. Specifically, choosing a basis for V that consists of a basis for \operatorname{im}(T) followed by a basis for \ker(T) yields a block-diagonal matrix form for A, with the identity matrix on the block corresponding to \operatorname{im}(T) and the zero matrix on the block for \ker(T), confirming its idempotence. This representation highlights how change of basis transforms general idempotent matrices into this canonical form, underscoring their role in decomposing spaces. The composition of projections corresponds to the product of their matrices, which is idempotent under appropriate conditions on shared images or kernels.
For instance, if two projections P and Q share the same image, then P fixes every vector in \operatorname{im}(Q), so PQ = Q (and likewise QP = P), and the product is again idempotent. Similarly, if they share the same kernel, then Qx - x \in \ker(P) for every x, so PQ = P (and QP = Q). In general, such compositions allow chaining projections while preserving the idempotent structure when the ranges and null spaces intersect suitably. In higher dimensions, idempotent n \times n matrices generalize projections to arbitrary subspaces of \mathbb{R}^n or \mathbb{C}^n, where the rank of A equals the dimension of the projected subspace \operatorname{im}(A). This extends the intuitive 2D geometric interpretation—projecting onto a line along a parallel direction—to projections onto r-dimensional subspaces along complementary (n-r)-dimensional directions, facilitating analysis in multivariable settings like coordinate transformations or subspace decompositions. Idempotent matrices, satisfying A^2 = A, differ from involutory matrices, which satisfy A^2 = I, where I is the identity matrix. The intersection of these classes consists solely of the identity matrix, as any matrix fulfilling both conditions must equal I. Idempotent matrices are diagonalizable, with eigenvalues restricted to 0 and 1, in contrast to nilpotent matrices, which have all eigenvalues 0 and may possess Jordan blocks larger than size 1. A notable connection arises in decompositions: over a field of positive characteristic p, every nilpotent matrix can be written as the sum of an idempotent matrix and a nilpotent matrix of nilpotence index at most p + 1. When an idempotent matrix is symmetric, it represents an orthogonal projection onto its column space, and its eigenvalues of 0 and 1 ensure it is positive semidefinite.
This links symmetric idempotents directly to the class of positive semidefinite matrices, as the quadratic form x^T A x \geq 0 holds for all x, with equality exactly when x lies in the kernel of A. In the context of doubly stochastic matrices, doubly stochastic idempotents—those with nonnegative entries whose rows and columns each sum to 1—are characterized by partitions of the dimension n. Specifically, up to permutation similarity, they take a block-diagonal form in which the block corresponding to a part of size m in the partition is the m \times m matrix with all entries equal to 1/m. These reduce to the identity matrix precisely when the partition consists entirely of parts of length 1, corresponding to fixed points only. For indecomposable cases, the uniform matrix with all entries 1/n exemplifies a non-permutation idempotent. Idempotents within the monoid of all n \times n matrices over a field generate subsemigroups that have been extensively studied in semigroup theory, particularly the free idempotent-generated semigroup over the biordered set of idempotents. These structures reveal maximal subgroups isomorphic to direct products of symmetric groups, highlighting the algebraic richness of idempotents in matrix monoids. While linear projections are precisely the idempotent matrices, in non-linear settings—such as projections onto manifolds or nonlinear maps in optimization—not all such projections satisfy the idempotence condition f \circ f = f, distinguishing them from their linear counterparts.

Applications

In Linear Algebra and Geometry

Idempotent matrices are essential in solving linear systems of equations, particularly by facilitating projections onto the column space of the coefficient matrix. Consider a consistent system Ax = b, where A is an m \times n matrix with full column rank. The orthogonal projection matrix onto the column space of A is given by P = A (A^T A)^{-1} A^T, which satisfies P^2 = P and is symmetric. Applying P to b yields Pb, the unique point in the column space closest to b; for consistent systems, Pb = b, and the solution x = (A^T A)^{-1} A^T b satisfies the equation exactly. This matrix form arises from the normal equations and provides a geometric interpretation of solvability, where the rank of the augmented matrix equals the rank of A. In geometric transformations, idempotent matrices model projections that stabilize invariant subspaces under repeated application. Such a matrix P maps vectors in its range (the target subspace) to themselves while sending vectors in its kernel to zero, so that idempotence equates to projection along a complementary direction. For instance, in computer graphics, parallel projection matrices onto a plane—used for shadow rendering under directional lights—exhibit this property, flattening scenes onto surfaces without altering points already on the plane, thus enabling efficient computation of stable transformations in rendering pipelines. This idempotence guarantees that repeated projection has no further effect, preserving geometric structure in affine spaces. Idempotent matrices also enable the construction of oblique coordinate systems through non-orthogonal projections. To define an oblique basis for a subspace U along a complementary direction W, the projection matrix P onto U parallel to W transforms standard Cartesian coordinates into slanted ones, where basis vectors are non-perpendicular.
This is achieved by solving for P such that its range is U and its kernel is W, with P^2 = P ensuring the coordinate change is well-defined and acts as the identity on U. Such transformations are valuable in geometry for representing skewed lattices, where the projection facilitates vector decompositions without orthogonality constraints. A key theorem asserts that every subspace of a finite-dimensional vector space admits a projection onto it, represented by an idempotent matrix. Specifically, for any direct sum decomposition V = U \oplus W, there exists a unique linear map P: V \to V with range U, kernel W, and P^2 = P, whose matrix in a suitable basis is idempotent. This result underpins subspace theory, allowing arbitrary projections (orthogonal or oblique) and extending to infinite-dimensional settings such as Hilbert spaces. Historically, John von Neumann used idempotent operators in his foundational work on operator theory, employing self-adjoint idempotents as projections to decompose spaces in spectral theory, independent of physical interpretations.

In Statistics and Optimization

In linear regression, the hat matrix H = X(X^T X)^{-1} X^T, where X is the design matrix, is idempotent and projects the response vector onto the column space of X. This idempotence, H^2 = H, ensures that the fitted values \hat{y} = H y remain unchanged upon repeated projection, reflecting the orthogonal projection property of the least squares solution. The trace of H equals the rank of X, typically the number of predictors p under full column rank, which determines the model's degrees of freedom. The diagonal elements of H, called leverage values h_{ii}, quantify each observation's influence on the regression coefficients and fitted values, satisfying 0 \leq h_{ii} \leq 1 due to the symmetry and idempotence of H. High leverage (h_{ii} > 2p/n) indicates potential outliers or influential points affecting model stability. Since H is symmetric and idempotent, H H^T = H, reinforcing its role in variance-covariance computations for residuals, where I - H is also idempotent. In analysis of variance (ANOVA) and experimental design, idempotent matrices facilitate hypothesis testing by decomposing the total sum of squares into orthogonal components via projection matrices associated with the design. For instance, in balanced designs, these matrices correspond to effects like treatments or blocks, with their traces yielding the degrees of freedom for each term in the ANOVA table. In optimization, idempotent projection matrices enforce constraints by mapping candidate solutions onto feasible subspaces, such as linear equality constraint sets. This approach simplifies constrained problems, such as the quadratic programs arising in support vector machines, by iteratively projecting onto the constraint sets. For example, in mixed-projection conic programs, idempotent matrices model low-rank constraints analogously to binary variables in mixed-integer optimization, enabling semidefinite relaxations for quadratic objectives. A concrete example arises in simple linear regression with n observations (x_i, y_i), where X is the n \times 2 matrix whose first column is all ones and whose second column contains the x_i.
The hat matrix elements are h_{ij} = \frac{1}{n} + \frac{(x_i - \bar{x})(x_j - \bar{x})}{S_{xx}}, with S_{xx} = \sum (x_i - \bar{x})^2, confirming idempotence directly and yielding fitted values \hat{y}_i = h_{i1} y_1 + \cdots + h_{in} y_n. The leverage h_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{S_{xx}} highlights points far from the mean \bar{x} as influential.

In Other Fields

In numerical methods for solving linear systems, such as the Kaczmarz iterative solver, successive orthogonal projections onto the hyperplanes defined by the individual equations are used; each projection step employs an idempotent matrix that minimizes the distance to the corresponding hyperplane, and for consistent systems the iterates converge to a solution. Convergence analyses for constrained variants rely on the nonexpansive nature of these idempotent projections. In quantum mechanics, idempotent matrices represent projection operators onto subspaces, such as eigenspaces of observables. These operators satisfy P^2 = P and project states onto measurement outcomes, a construction fundamental to the mathematical formalism of quantum mechanics as developed by John von Neumann.
