
Main diagonal

In linear algebra, the main diagonal of a square matrix is the set of entries whose row index equals their column index, running from the top-left corner to the bottom-right corner. This diagonal is fundamental to matrix structure and operations, distinguishing it from secondary or anti-diagonals. The main diagonal plays a central role in defining several important classes and properties. A diagonal matrix is a square matrix with all off-diagonal entries equal to zero, leaving only the main diagonal elements potentially nonzero. The trace of a square matrix, denoted \operatorname{tr}(A), is the sum of its main diagonal elements and is invariant under similarity transformations, making it useful in applications such as stability analysis. When a matrix is diagonalizable, it can be expressed as A = PDP^{-1}, where D is a diagonal matrix whose main diagonal entries are the eigenvalues of A. Beyond these basic definitions, the main diagonal (also known as the principal diagonal) appears in advanced topics such as the singular value decomposition, where the singular values are placed on the diagonal, and in optimization problems, where aligning data or parameters along the diagonal simplifies computations. Its elements also contribute to determinants and characteristic polynomials, influencing matrix invertibility and the behavior of dynamical systems.

Fundamentals

Definition

In linear algebra, a square matrix is a matrix with an equal number of rows and columns, denoted as an n \times n array where n is a positive integer. The main diagonal of a square matrix A = (a_{ij}), also known as the principal diagonal, consists of the entries a_{ii} for which the row index equals the column index, with i ranging from 1 to n. These n elements form a line of entries that runs from the top-left corner to the bottom-right corner of the matrix. For example, consider the 3 \times 3 matrix \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}. The main diagonal comprises the elements 1, 5, and 9.
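As a minimal sketch of this definition, the example matrix can be stored as a list of rows in plain Python and the entries a_{ii} read off directly (the snippet is illustrative and not part of the original text):

```python
# Extract the main diagonal a_ii of a square matrix stored as a list of rows.
A = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

main_diagonal = [A[i][i] for i in range(len(A))]
print(main_diagonal)  # [1, 5, 9]
```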

Notation and Representation

The main diagonal of an n \times n matrix A = (a_{ij}) is typically extracted and represented as a vector \mathbf{d} = \operatorname{diag}(A), where d_k = a_{kk} for k = 1, \dots, n. Conversely, a diagonal matrix is denoted D = \operatorname{diag}(a_1, \dots, a_n), which constructs an n \times n matrix with a_i on the main diagonal and zeros elsewhere; this notation leverages the Kronecker delta \delta_{ij}, so that d_{ij} = a_i \delta_{ij}. In computational contexts, the main diagonal is handled via specialized functions in numerical libraries. For instance, Python's NumPy package provides numpy.diag(v, k=0), which extracts the k-th diagonal from a two-dimensional array (with k=0 yielding the main diagonal as a 1D array copy) or constructs a diagonal array from a 1D input placed on the specified diagonal. Visually, the main diagonal appears as a straight line of entries from the upper-left corner (a_{11}) to the lower-right corner (a_{nn}) in the standard row-column layout of a square matrix, with off-diagonal elements positioned above and below it. In banded matrices, a sparse structure confines nonzero entries to the main diagonal and a finite number of adjacent super- and sub-diagonals, creating a "band" whose width is determined by the bandwidth parameter; this enhances storage and computational efficiency for systems with localized interactions. The term "diagonal" originates from the Latin diagonalis, from the Greek roots dia- ("through") and gōnia ("angle"), and its application to matrices emerged in 19th-century algebraic developments, with foundational work by Arthur Cayley in his 1858 memoir on the theory of matrices.
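A short demonstration of the numpy.diag behavior described above, assuming only a standard NumPy installation:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

d  = np.diag(A)        # 2-D input: extract the main diagonal -> array([1, 5, 9])
D  = np.diag(d)        # 1-D input: build diag(1, 5, 9) with zeros elsewhere
k1 = np.diag(A, k=1)   # first superdiagonal -> array([2, 6])

print(d, k1)
print(D)
```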

Properties in Linear Algebra

Trace and Sum of Elements

The trace of a square matrix A = (a_{ij}) of size n \times n is defined as the sum of its main diagonal elements, denoted \operatorname{tr}(A) = \sum_{i=1}^n a_{ii}. This scalar captures the aggregate of the entries along the primary diagonal and provides a fundamental matrix invariant. The trace is linear as a functional on the space of square matrices: for matrices A and B of the same size and a scalar c, \operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B) and \operatorname{tr}(cA) = c \operatorname{tr}(A). It also satisfies the cyclic property \operatorname{tr}(AB) = \operatorname{tr}(BA) for any matrices A and B of compatible sizes, which extends to invariance under cyclic permutations such as \operatorname{tr}(ABC) = \operatorname{tr}(BCA). This property arises from rearranging the indices in the double sum representation of \operatorname{tr}(AB). A key consequence is invariance under similarity transformations: if P is an invertible matrix, then \operatorname{tr}(P^{-1}AP) = \operatorname{tr}(A), since by the cyclic property \operatorname{tr}(P^{-1}AP) = \operatorname{tr}(APP^{-1}) = \operatorname{tr}(A). For instance, for the 2 \times 2 matrix A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \operatorname{tr}(A) = 1 + 4 = 5. Up to scalar multiples, the trace is the unique linear functional on the space of n \times n matrices that is invariant under conjugation by the general linear group, distinguishing it from other possible functionals. This characterization underscores its role as a canonical tool for studying matrix equivalences without delving into spectral details.
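These identities are easy to check numerically; a brief sketch assuming NumPy (the random test matrices are illustrative and not from the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # almost surely invertible

# Linearity: tr(A + B) = tr(A) + tr(B)
print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))        # True

# Cyclic property: tr(AB) = tr(BA)
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))                  # True

# Similarity invariance: tr(P^{-1} A P) = tr(A)
print(np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A)))   # True
```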

Role in Determinants

The Leibniz formula expresses the determinant of an n \times n matrix A = (a_{ij}) as \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}, where the sum runs over all permutations \sigma in the symmetric group S_n and \operatorname{sgn}(\sigma) is the sign of the permutation. In this expansion, the term corresponding to the identity permutation \sigma = \operatorname{id} (where \sigma(i) = i for all i) is \prod_{i=1}^n a_{ii}, the product of the main diagonal elements, and it carries a positive sign since the identity permutation is even. This diagonal product is one contribution to the determinant, with all other terms involving off-diagonal elements weighted by permutation signs. For a diagonal matrix D = \operatorname{diag}(d_1, d_2, \dots, d_n), where all off-diagonal entries are zero, the Leibniz formula simplifies because only the identity permutation yields a nonzero product: every other permutation selects at least one zero off-diagonal entry. Thus \det(D) = \prod_{i=1}^n d_i, the product of the main diagonal elements. A related property holds for triangular matrices. For an upper triangular matrix U, with zeros below the main diagonal, a nonzero term requires \sigma(i) \geq i for all i, and since \sum_i \sigma(i) = \sum_i i, the only such permutation is the identity; hence \det(U) = \prod_{i=1}^n u_{ii}. The same holds for lower triangular matrices, where \det(L) = \prod_{i=1}^n l_{ii}. For example, the upper triangular matrix \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix} has determinant 1 \cdot 4 \cdot 6 = 24. Cofactor expansion provides another route to determinants in which the main diagonal features prominently. The cofactor C_{ij} of entry a_{ij} is (-1)^{i+j} \det(M_{ij}), where M_{ij} is the submatrix obtained by deleting row i and column j. Expanding along row i gives \det(A) = \sum_{j=1}^n a_{ij} C_{ij}, in which the diagonal entry a_{ii} contributes the term a_{ii} C_{ii}; for triangular matrices, repeatedly expanding along the first column (or first row) reduces the determinant to the product of the main diagonal entries. This structure makes cofactor expansion especially convenient when the matrix is sparse or its nonzeros are concentrated near the diagonal.
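For illustration, a brute-force sketch of the Leibniz formula in Python (assuming NumPy; the cost grows factorially, so it is suitable only for small matrices), applied to the triangular example above:

```python
import numpy as np
from itertools import permutations

def leibniz_det(A):
    """Determinant via the Leibniz formula (for illustration only)."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # Sign of sigma via its inversion count.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        sign = -1.0 if inversions % 2 else 1.0
        total += sign * np.prod([A[i, sigma[i]] for i in range(n)])
    return total

U = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])

print(leibniz_det(U))          # 24.0, only the identity permutation contributes
print(np.prod(np.diag(U)))     # 24.0, product of the main diagonal entries
print(np.linalg.det(U))        # ~24.0
```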

Applications and Extensions

In Eigenvalue Problems

In eigenvalue problems, the main diagonal of a square matrix A = (a_{ij}) is fundamental to the characteristic polynomial p(\lambda) = \det(A - \lambda I), where I is the identity matrix and the eigenvalues are the roots of p(\lambda) = 0. The matrix A - \lambda I has main diagonal entries a_{ii} - \lambda for i = 1, \dots, n, which enter the determinant expansion directly and determine its coefficients, linking the diagonal structure to the spectral properties of A. For diagonalizable matrices, the eigenvalues reside exactly on the main diagonal of the canonical diagonal form. A matrix A is diagonalizable if there exists an invertible matrix P such that A = P D P^{-1}, where D is a diagonal matrix with the eigenvalues of A as its main diagonal entries. This isolates the eigenvalues on the diagonal, facilitating the analysis of dynamical behavior and the computation of matrix powers, since A^k = P D^k P^{-1} and D^k carries the k-th powers of the eigenvalues on its diagonal. The Gershgorin circle theorem further underscores the main diagonal's role in bounding eigenvalues. For any square complex matrix A, every eigenvalue lies in the union of n closed disks in the complex plane, each centered at a main diagonal entry a_{ii} with radius r_i = \sum_{j \neq i} |a_{ij}|, the sum of the absolute values of the off-diagonal entries in the i-th row. This localization theorem, originally established for general complex matrices, provides a simple geometric constraint on the spectrum without requiring eigenvalue computation and is especially tight for matrices that are nearly diagonal. Perturbation theory highlights how alterations to the main diagonal affect eigenvalues, particularly for matrices close to diagonal form. For a diagonal matrix, the eigenvalues coincide with the diagonal entries, and small perturbations \Delta of these entries produce first-order eigenvalue shifts approximately equal to the corresponding diagonal changes in \Delta, assuming the unperturbed eigenvalues are simple. In more general, nearly diagonal cases, the sensitivity of eigenvalues to diagonal perturbations is analyzed via resolvent expansions, which show that the main diagonal governs the leading-order spectral response to such changes. A concrete illustration is the 2 \times 2 diagonal matrix \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, whose characteristic polynomial is (\lambda - 2)(\lambda - 3) = 0, yielding eigenvalues 2 and 3, exactly matching the main diagonal entries. This example extends to higher dimensions, where diagonal matrices exhibit their spectra directly on the main diagonal.
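The Gershgorin bound is straightforward to verify numerically; a small sketch assuming NumPy, with an illustrative nearly diagonal test matrix (not from the original text):

```python
import numpy as np

# Disk centers are the main diagonal entries; radii are the off-diagonal row sums.
A = np.array([[2.0, 0.1, 0.0],
              [0.2, 3.0, 0.1],
              [0.0, 0.1, 5.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

for lam in np.linalg.eigvals(A):
    # Every eigenvalue lies in at least one disk |lambda - a_ii| <= r_i.
    inside = any(abs(lam - c) <= r for c, r in zip(centers, radii))
    print(lam, inside)   # inside is True for each eigenvalue
```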

In Graph Theory and Combinatorics

In graph theory, the main diagonal of the adjacency matrix of an undirected simple graph, which has no self-loops or multiple edges, consists entirely of zeros, reflecting the absence of edges from a vertex to itself. Consequently, the trace of this adjacency matrix, defined as the sum of its diagonal entries, is zero. For example, the cycle graph C_3, a triangle on three vertices, has adjacency matrix \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, where the main diagonal is all zeros, yielding a trace of zero. When self-loops are permitted, the entries on the main diagonal of the adjacency matrix indicate the number of such loops at each vertex, typically 0 or 1 when at most one loop per vertex is allowed. In this context, the powers A^k of the adjacency matrix have diagonal entries that count closed walks of length k starting and ending at each vertex, so the trace of A^k gives the total number of closed walks of length k in the graph. In combinatorics, permutation matrices provide a key interpretation of the main diagonal: such a matrix represents a permutation of n elements and has exactly one 1 in each row and column, with zeros elsewhere. The number of 1s on the main diagonal equals the number of fixed points of the corresponding permutation, where a fixed point is an element mapped to itself; the trace of a permutation matrix therefore counts these fixed points directly. Permutation matrices also model perfect matchings in the complete bipartite graph K_{n,n}, where the identity permutation, sitting on the main diagonal, corresponds to one such matching; more generally, the permanent of a bipartite graph's biadjacency matrix, which sums over all permutation-based products including the diagonal one, counts the total number of perfect matchings.
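A short sketch assuming NumPy illustrates both counts, using the triangle C_3 and a small permutation matrix (both chosen for illustration):

```python
import numpy as np

# Triangle graph C_3: the adjacency matrix has zeros on the main diagonal.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

print(np.trace(A))                              # 0 (no self-loops)
print(np.trace(np.linalg.matrix_power(A, 3)))   # 6 closed walks of length 3

# Permutation matrix for the permutation that fixes 1 and swaps 2 and 3.
P = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])
print(np.trace(P))   # 1, the number of fixed points
```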

Antidiagonal

In linear algebra, the antidiagonal of an n \times n matrix A = (a_{i,j}) consists of the entries a_{i, n+1-i} for i = 1, 2, \dots, n, forming a line of elements that runs from the top-right corner to the bottom-left corner. This contrasts with the main diagonal, which runs from the top-left to the bottom-right. For example, consider the 3 \times 3 matrix \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}. The antidiagonal elements are 3, 5, and 7. The sum of the antidiagonal elements is \sum_{i=1}^n a_{i,n+1-i}, and the vector of these elements is often denoted \operatorname{antidiag}(A). The antidiagonal of A corresponds to the main diagonal of the matrix obtained by reversing the order of the columns of A. In persymmetric matrices, which are symmetric with respect to reflection over the antidiagonal (i.e., a_{i,j} = a_{n+1-j, n+1-i}), the antidiagonal plays a role analogous to that of the main diagonal in symmetric matrices. Similarly, in centrosymmetric matrices, defined by A = J A J where J is the exchange matrix with ones on the antidiagonal, the main diagonal and antidiagonal exhibit symmetric properties with respect to the matrix center. Unlike the main diagonal, whose elements are each fixed in place by transposition (A^T), the antidiagonal elements are reversed in order under transposition; reflection over the antidiagonal itself corresponds to combining transposition with row and column reversal.
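The column-reversal relationship above can be seen in a brief sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# The antidiagonal of A is the main diagonal of A with its columns reversed.
antidiag = np.diag(np.fliplr(A))
print(antidiag)          # [3 5 7]
print(antidiag.sum())    # 15

# Exchange matrix J (ones on the antidiagonal), as used in A = J A J.
J = np.fliplr(np.eye(3, dtype=int))
print(J)
```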

Off-Diagonal Elements

In a square matrix A = (a_{ij})_{n \times n}, the off-diagonal elements are all entries a_{ij} with i \neq j. They form the complement of the main diagonal, which contains only the entries a_{ii} for i = 1, \dots, n. The off-diagonal elements can be further partitioned into the strict upper triangular part, comprising the entries with i < j, and the strict lower triangular part, comprising the entries with i > j. This partitioning is fundamental in decompositions such as LU factorization, where the lower and upper factors capture these respective off-diagonal structures. A defining property of diagonal matrices is the absence of off-diagonal elements: a_{ij} = 0 for all i \neq j, leaving only the main diagonal nonzero. This sparsity is quantified by measures such as the matrix bandwidth, defined as the maximum value of |i - j| over all nonzero off-diagonal entries a_{ij}; a small bandwidth indicates that nonzeros are confined close to the main diagonal, which aids efficient storage and computation in sparse linear algebra algorithms. The Frobenius norm of a square matrix A, denoted \|A\|_F, satisfies \|A\|_F^2 = \sum_{i=1}^n a_{ii}^2 + \sum_{i \neq j} a_{ij}^2, explicitly separating the contribution of the main diagonal from the sum of squares of all off-diagonal elements; this highlights the role of the off-diagonals in overall matrix magnitude, particularly in optimization problems involving matrix approximations. For illustration, consider the 3 \times 3 matrix \begin{pmatrix} 1 & 2 & 0 \\ 3 & 4 & 5 \\ 0 & 6 & 7 \end{pmatrix}. Its off-diagonal elements are 2, 0, 3, 5, 0, and 6, distributed across the upper and lower triangles. Off-diagonal elements that are large relative to the diagonal can contribute to ill-conditioning, as measured by a high condition number, making numerical solutions of the associated linear systems sensitive to perturbations. For instance, if in some row the sum of the absolute values of the off-diagonal entries exceeds the absolute value of the corresponding diagonal entry, the matrix is not diagonally dominant and may be more prone to numerical instability.
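A brief sketch assuming NumPy, applied to the example matrix above, computes the bandwidth, the diagonal/off-diagonal split of the squared Frobenius norm, and a row-wise diagonal dominance check:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 5.0],
              [0.0, 6.0, 7.0]])

# Bandwidth: maximum |i - j| over all nonzero entries.
rows, cols = np.nonzero(A)
bandwidth = np.max(np.abs(rows - cols))
print(bandwidth)   # 1 for this tridiagonal pattern

# Squared Frobenius norm split into diagonal and off-diagonal contributions.
diag_sq = np.sum(np.diag(A) ** 2)
off_sq = np.sum(A ** 2) - diag_sq
print(np.isclose(diag_sq + off_sq, np.linalg.norm(A, 'fro') ** 2))   # True

# Row-wise check of (weak) diagonal dominance.
off_row_sums = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
print(np.abs(np.diag(A)) >= off_row_sums)   # [False False  True]
```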
