
Identity matrix

In linear algebra, the identity matrix is a square matrix with ones along its main diagonal and zeros in all other positions, serving as the multiplicative identity for matrix multiplication. Denoted by I_n, where n specifies the matrix dimension, it satisfies I_n \mathbf{x} = \mathbf{x} for any compatible vector \mathbf{x}, leaving vectors unchanged under the corresponding linear transformation. For example, the 2×2 identity matrix is
$$ I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$
and multiplying it by any 2×2 matrix A yields A itself. A fundamental property of the identity matrix is its role as the unit element in the ring of square matrices: for any n \times n matrix A, A I_n = I_n A = A, so it commutes with all such matrices and acts as a neutral operation in algebraic structures. This property extends to the definition of matrix inverses: a matrix A is invertible if there exists a matrix B such that A B = B A = I_n, highlighting the identity's centrality in the solvability and decomposition of linear systems. Additionally, the identity matrix is symmetric, orthogonal (for real entries), and unitary (for complex entries), preserving norms and inner products in vector spaces. Beyond its algebraic role, the identity matrix represents the trivial linear transformation that maps every vector to itself, making it indispensable in applications such as solving differential equations, representing no-op transformations, and numerical methods, where it initializes or benchmarks matrix operations. Its simplicity belies its ubiquity: it appears in eigenvalue decompositions (with eigenvalue 1 for all eigenvectors) and as a building block for block and diagonal matrices.
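The multiplicative-identity property described above can be checked numerically; a minimal sketch using NumPy (assumed available), with an arbitrary matrix A chosen for illustration:

```python
import numpy as np

# The 2x2 identity matrix: ones on the main diagonal, zeros elsewhere.
I2 = np.eye(2)

# An arbitrary 2x2 matrix.
A = np.array([[3.0, -1.0],
              [4.0,  2.0]])

# Multiplying by the identity on either side leaves A unchanged.
assert np.array_equal(A @ I2, A)
assert np.array_equal(I2 @ A, A)

# The identity also fixes every vector: I x = x.
x = np.array([5.0, -7.0])
assert np.array_equal(I2 @ x, x)
```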

Definition and Notation

Definition

In linear algebra, the identity matrix of order n, where n is a positive integer, is defined as the n \times n matrix whose diagonal entries are all equal to 1 and whose other entries are all equal to 0. This structure ensures that the matrix has exactly n rows and n columns, with the diagonal running from the top-left to the bottom-right. For illustration, the identity matrix of order 2 takes the form \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, while the identity matrix of order 3 is \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. These examples highlight the pattern: ones only on the principal diagonal and zeros off-diagonal. The entries of the identity matrix can be expressed precisely using the Kronecker delta symbol \delta_{ij}, defined as \delta_{ij} = 1 if i = j and \delta_{ij} = 0 otherwise, for indices i, j = 1, 2, \dots, n. Thus, the (i,j)-th entry of the identity matrix is \delta_{ij}, which compactly captures its diagonal structure. The identity matrix must be square: non-square analogs do not exist in standard linear algebra, where the concept is tied to the dimensions required for the multiplicative identity property in square matrix multiplication.
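The Kronecker delta definition translates directly into code; a short sketch (using NumPy, assumed available) builds I_n entrywise from \delta_{ij} and confirms it matches NumPy's built-in constructor:

```python
import numpy as np

n = 3
# Build I_n directly from the Kronecker delta: entry (i, j) is 1 if i == j, else 0.
I = np.array([[1 if i == j else 0 for j in range(n)] for i in range(n)])

# This agrees with NumPy's built-in identity-matrix constructor.
assert np.array_equal(I, np.eye(n, dtype=int))
```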

Notation and Terminology

The identity matrix is most commonly denoted by the capital letter \mathbf{I} when the dimension is clear from context, or by \mathbf{I}_n to explicitly indicate an n \times n square matrix. Alternative notations include E or E_n, derived from the German term "Einheitsmatrix" (unit matrix), particularly in older European mathematical literature; the subscript n consistently specifies the matrix size across these variants. Standard terminology calls it the "identity matrix," reflecting its role as the multiplicative identity in matrix algebra, though it is also known as the "unit matrix," a term borrowed from the analogous "unit element" in rings and other algebraic structures. It may additionally be described as a "diagonal matrix with ones on the principal diagonal," emphasizing its structure of all 1s on the diagonal and zeros elsewhere. The notation and terminology are unchanged when the identity matrix is defined over the complex numbers or other fields, with the real numbers serving as the default context unless specified otherwise.

Properties

Algebraic Properties

The identity matrix I_n acts as the multiplicative identity in the algebra of square matrices over the real or complex numbers. For any m \times n matrix A, the product A I_n equals A, and similarly I_m A = A, where I_m is the m \times m identity matrix. This property follows from the definition of matrix multiplication: the (i,j)-th entry of A I_n is \sum_{k=1}^n a_{ik} (I_n)_{kj} = \sum_{k=1}^n a_{ik} \delta_{kj} = a_{ij}, where \delta_{kj} is the Kronecker delta (equal to 1 if k = j and 0 otherwise). The same reasoning applies to I_m A by symmetry in the multiplication rule. To illustrate, consider a 2 \times 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} and I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. The product is A I_2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} a \cdot 1 + b \cdot 0 & a \cdot 0 + b \cdot 1 \\ c \cdot 1 + d \cdot 0 & c \cdot 0 + d \cdot 1 \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} = A. A parallel computation shows I_2 A = A. The identity matrix is self-inverse, meaning I_n^{-1} = I_n, since I_n I_n = I_n; this stems directly from its role as the multiplicative identity. For square matrices A of order n, the identity matrix commutes with every such matrix: I_n A = A I_n = A. This commutativity holds because both products reduce to A via the multiplicative identity property. Additively, I_n + O_n = I_n, where O_n is the n \times n zero matrix, but I_n is not the additive identity (that role belongs to O_n). The trace of I_n equals n, as it sums the n diagonal entries of 1. The determinant of I_n is 1, reflecting its unimodular nature in matrix groups.
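These algebraic properties are easy to verify computationally; a brief NumPy sketch (library assumed available, with an arbitrary 4 × 4 test matrix) checks the two-sided identity, self-inverse, trace, and determinant facts above:

```python
import numpy as np

n = 4
I = np.eye(n)
A = np.arange(16, dtype=float).reshape(4, 4)  # arbitrary 4x4 matrix

# Two-sided multiplicative identity: A I = I A = A.
assert np.array_equal(A @ I, A) and np.array_equal(I @ A, A)

# Self-inverse: I I = I, so I^{-1} = I.
assert np.array_equal(I @ I, I)

# Trace sums the n diagonal ones; determinant is 1.
assert np.trace(I) == n
assert np.isclose(np.linalg.det(I), 1.0)
```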

Analytic and Structural Properties

The identity matrix I_n of order n has all eigenvalues equal to 1, with algebraic multiplicity n. Every nonzero vector in \mathbb{R}^n (or the appropriate vector space) serves as an eigenvector corresponding to this eigenvalue, since I_n \mathbf{v} = \mathbf{v} for any \mathbf{v}. In particular, the standard basis vectors e_i (with 1 in the i-th position and zeros elsewhere) form a set of n linearly independent eigenvectors. The characteristic polynomial of I_n is \det(I_n - \lambda I_n) = (1 - \lambda)^n. This polynomial of degree n reflects the repeated root at \lambda = 1, confirming the eigenvalue structure. As a diagonal matrix with 1's on the main diagonal, I_n is already in diagonal form, requiring no further diagonalization. It is orthogonal, satisfying I_n^T I_n = I_n and I_n^T = I_n, and so preserves the Euclidean norm under multiplication. For real entries, I_n is symmetric and positive definite, as the quadratic form \mathbf{x}^T I_n \mathbf{x} = \|\mathbf{x}\|^2 > 0 for all nonzero \mathbf{x} \in \mathbb{R}^n. The rank of I_n is n, equal to its order, indicating full column and row rank. The null space (kernel) of I_n is the trivial subspace \{\mathbf{0}\}, since I_n \mathbf{x} = \mathbf{0} implies \mathbf{x} = \mathbf{0}. The powers satisfy I_n^k = I_n for all integers k \geq 1, making I_n idempotent under multiplication. The matrix exponential is e^{t I_n} = e^t I_n, derived from the series \sum_{k=0}^\infty \frac{(t I_n)^k}{k!} = I_n \sum_{k=0}^\infty \frac{t^k}{k!} = e^t I_n. The identity matrix is unique in the algebra of n \times n matrices over a field, as it is the only matrix E satisfying A E = E A = A for every matrix A. This uniqueness underscores its role as the multiplicative identity.
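The spectral and exponential claims above lend themselves to a quick numerical check; a sketch with NumPy (assumed available), approximating the matrix exponential by a truncated power series:

```python
import math
import numpy as np

n = 3
I = np.eye(n)

# All n eigenvalues of I_n equal 1.
assert np.allclose(np.linalg.eigvals(I), np.ones(n))

# Characteristic polynomial det(I - lambda I) = (1 - lambda)^n, spot-checked at lambda = 2.
lam = 2.0
assert np.isclose(np.linalg.det(I - lam * I), (1 - lam) ** n)

# e^{tI} = e^t I, via the truncated power series sum_k (tI)^k / k!.
t = 0.5
series = sum(np.linalg.matrix_power(t * I, k) / math.factorial(k) for k in range(20))
assert np.allclose(series, np.exp(t) * I)
```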

Relations and Applications

Relations to Other Concepts

The identity matrix is a special case of a diagonal matrix, one in which all diagonal entries equal 1 and all off-diagonal entries are 0, distinguishing it from general diagonal matrices that may have arbitrary nonzero values on the diagonal while maintaining zeros elsewhere. In contrast, a general diagonal matrix scales vectors along the coordinate axes by its diagonal elements, whereas the identity matrix leaves them unchanged. The identity matrix is also the permutation matrix corresponding to the identity permutation, obtained by rearranging no rows or columns, which sets it apart from other permutation matrices that reorder entries to represent nontrivial permutations. Permutation matrices preserve the structure of linear transformations as bijections on basis vectors, but only the identity permutation yields the identity matrix itself. In the context of tensor products, the Kronecker product of two identity matrices satisfies I_m \otimes I_n = I_{mn}, where I_k denotes the k \times k identity matrix, illustrating how the identity extends naturally to higher-dimensional structures without alteration. This property underscores the identity matrix's role as a multiplicative identity element with respect to the Kronecker product. The identity matrix is orthogonal, satisfying I^T I = I, and it has operator norm 1 with respect to the Euclidean norm, as it preserves lengths without scaling or rotation. Unlike general orthogonal matrices, which represent isometries such as rotations or reflections, the identity matrix corresponds to the trivial isometry that fixes every vector. In comparison to the zero matrix, which acts as the additive identity in the set of matrices under addition, satisfying A + 0 = A for any A, the identity matrix is the multiplicative identity under matrix multiplication, where A I = I A = A for compatible square matrices A. These roles do not overlap, as the zero matrix annihilates under multiplication while the identity preserves matrices under both operations in their respective contexts.
Abstractly, the identity matrix is the identity element in the monoid of n \times n square matrices over a field, equipped with matrix multiplication as the operation; I_n is the unique two-sided (left and right) identity of this monoid. This monoidal structure highlights the identity matrix's foundational position in algebraic frameworks for linear transformations.
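The Kronecker-product and orthogonality relations above can be verified directly; a short NumPy sketch (library assumed available):

```python
import numpy as np

m, n = 2, 3
Im, In = np.eye(m), np.eye(n)

# Kronecker product of identities is the identity of the product dimension: I_m (x) I_n = I_{mn}.
assert np.array_equal(np.kron(Im, In), np.eye(m * n))

# The identity is orthogonal: I^T I = I, and it fixes every vector.
assert np.array_equal(In.T @ In, In)
v = np.array([1.0, -2.0, 3.0])
assert np.array_equal(In @ v, v)
```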

Applications in Mathematics and Beyond

In linear algebra, the identity matrix plays a crucial role in solving systems of linear equations, particularly when computing matrix inverses via Gauss-Jordan elimination. To find the inverse of an invertible matrix A, one augments A with the identity matrix of the same dimension to form the augmented matrix [A \mid I], then applies row operations to transform the left block into the identity matrix; the right block then becomes A^{-1}. This method leverages the identity's property as the neutral element under multiplication, ensuring the row operations preserve the solution set. The identity matrix also appears in change-of-basis transformations, where it signifies no alteration of the basis. In the similarity transformation P^{-1} A P = B, setting P = I yields B = A, illustrating that the identity matrix leaves the matrix representation unchanged under the transformation. This is fundamental in preserving invariants like eigenvalues across equivalent bases. In numerical computing and software libraries, the identity matrix serves as a default or initializing structure for algorithms and data handling. For instance, the NumPy library in Python provides the numpy.eye(n) function to generate an n \times n identity matrix, which is commonly used to initialize transformation matrices or as a starting point in iterative solvers. Additionally, it acts as a test case for verifying matrix function algorithms, such as those computing exponentials or logarithms, by exploiting functional identities like f(I) = f(1) I for analytic functions f, allowing assessment of backward stability through residual bounds. In physics, the identity matrix represents the identity transformation, corresponding to scenarios with no external influences. In quantum mechanics, the identity operator \hat{I}, whose matrix representation is the identity matrix in any orthonormal basis, leaves quantum states unchanged: \hat{I} |\psi\rangle = |\psi\rangle for any state vector |\psi\rangle, serving as the resolution of the identity in spectral decompositions.
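The augmentation method for matrix inversion described above can be sketched in a few lines; this is a minimal illustration with partial pivoting (NumPy assumed available; the function name and the test matrix are chosen here for illustration), not a production implementation:

```python
import numpy as np

def invert_via_gauss_jordan(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^{-1}].
    Minimal sketch: assumes A is square and invertible."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # form [A | I]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry into pivot position.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                      # scale pivot row so the pivot is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]     # zero out the rest of the column
    return M[:, n:]                                # left block is now I; right block is A^{-1}

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
Ainv = invert_via_gauss_jordan(A)
assert np.allclose(A @ Ainv, np.eye(2))   # A A^{-1} = I
```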
In classical mechanics, the identity transformation arises in canonical formulations for free particles under no forces, where the Hamiltonian remains unaltered and coordinates and momenta are mapped identically, as in the absence of interactions. In statistics, the identity matrix models the covariance structure of uncorrelated random variables. For a multivariate distribution with uncorrelated components, each with variance \sigma^2, the covariance matrix is \sigma^2 I, indicating zero covariances off the diagonal and equal variances on the diagonal; this simplifies computations in models like the multivariate normal, where it implies independence of the components under Gaussian assumptions. In graph theory, the identity matrix is the adjacency matrix of a graph consisting of isolated vertices, each with a self-loop. Unlike the zero matrix, which corresponds to an empty graph with no edges or loops, the identity matrix encodes a structure where each vertex connects only to itself, facilitating analysis of walks that remain at the same vertex. This configuration is useful in studying graph spectra and connectivity properties.
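The \sigma^2 I covariance structure can be illustrated empirically; a sketch (NumPy assumed available, seed and sample size chosen arbitrarily) draws independent Gaussian components and checks that the sample covariance is close to \sigma^2 I:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
n_vars, n_samples = 3, 200_000

# Independent components, each with variance sigma^2, have covariance matrix sigma^2 I.
X = sigma * rng.standard_normal((n_samples, n_vars))
sample_cov = np.cov(X, rowvar=False)

# Sample covariance approaches sigma^2 I: variances ~4 on the diagonal, ~0 off it.
assert np.allclose(sample_cov, sigma**2 * np.eye(n_vars), atol=0.1)
```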
