Main diagonal
In linear algebra, the main diagonal of a square matrix is the set of entries whose row index equals their column index, running from the top-left corner to the bottom-right corner.[1] This distinguishes it from the secondary diagonal (or anti-diagonal), which runs from the top-right corner to the bottom-left corner.[2]

The main diagonal plays a central role in defining several important matrix classes and properties. A diagonal matrix is a square matrix with all off-diagonal entries equal to zero, leaving only the main diagonal elements potentially nonzero. The trace of a matrix, denoted \operatorname{tr}(A), is the sum of its main diagonal elements; it is invariant under similarity transformations, which makes it useful in applications such as stability analysis and spectral theory.[3] When a matrix is diagonalizable, it can be written as A = PDP^{-1}, where D is a diagonal matrix whose main diagonal entries are the eigenvalues of A.[4]

Beyond these basic definitions, the main diagonal (also known as the principal diagonal) appears in more advanced topics. In the singular value decomposition, the singular values occupy the main diagonal of the central factor, and in many optimization problems aligning data or parameters along this diagonal simplifies computations.[5][6] The diagonal elements also enter determinants and characteristic polynomials, and thereby influence matrix invertibility and the behavior of dynamical systems.[7]

Fundamentals
Definition
In linear algebra, a square matrix is a matrix with an equal number of rows and columns, denoted as an n \times n array where n is a positive integer.[8] The main diagonal of a square matrix A = (a_{ij}), also known as the principal diagonal, consists of the entries a_{ij} for which the row index i equals the column index j, that is, the entries a_{ii} for i = 1, \dots, n.[9][10] These n elements form a sequence that runs from the top-left corner to the bottom-right corner of the matrix.[10] For example, consider the 3 \times 3 matrix \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}. Its main diagonal comprises the elements 1, 5, and 9.
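The definition translates directly into code. The following is a minimal sketch in plain Python (an illustration only, using 0-based indexing rather than the 1-based indexing of the mathematical notation), applied to the example matrix above:

    # The example matrix, stored as a list of rows (0-based indexing).
    A = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]

    # The main diagonal collects the entries A[i][i] for each index i.
    main_diagonal = [A[i][i] for i in range(len(A))]
    print(main_diagonal)  # [1, 5, 9]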
Notation and Representation
The main diagonal of an n \times n matrix A = (a_{ij}) is typically extracted and represented as a vector \mathbf{d} = \operatorname{diag}(A), where d_k = a_{kk} for k = 1, \dots, n.[11] Conversely, a diagonal matrix is denoted D = \operatorname{diag}(a_1, \dots, a_n), which constructs an n \times n matrix with a_i on the main diagonal and zeros elsewhere; this notation can be expressed with the Kronecker delta \delta_{ij} as d_{ij} = a_i \delta_{ij}.[11] In computational contexts, the main diagonal is handled via specialized functions in numerical libraries. For instance, Python's NumPy package provides numpy.diag(v, k=0), which extracts the k-th diagonal from a 2D array (with k=0 yielding the main diagonal as a 1D array copy) or constructs a 2D diagonal array from a 1D input vector placed on the specified diagonal.[12]
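A brief sketch of this dual behavior of numpy.diag, assuming NumPy is installed (the trace call is included only to illustrate that it equals the sum of the main diagonal):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

    d = np.diag(A)           # 2D input: extract the main diagonal -> array([1, 5, 9])
    D = np.diag(d)           # 1D input: build a diagonal matrix with d on the main diagonal
    upper = np.diag(A, k=1)  # k=1 selects the first superdiagonal -> array([2, 6])

    print(np.trace(A))       # 15, the sum of the main diagonal entries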
Visually, the main diagonal appears as a straight line of entries from the upper-left corner (a_{11}) to the lower-right corner (a_{nn}) in the standard row-column layout of a square matrix, with off-diagonal elements positioned above and below it. In banded matrices, the non-zero entries are confined to the main diagonal and a small number of adjacent super- and sub-diagonals, forming a "band" whose width is determined by the bandwidth parameter; this structure improves storage and computational efficiency for systems with localized interactions.[13][14]
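As a sketch of how a banded structure is assembled from its diagonals, the following hypothetical example builds a tridiagonal matrix (bandwidth 1) in NumPy by placing vectors on the main diagonal and on the adjacent super- and sub-diagonals via the offset parameter k:

    import numpy as np

    main = np.array([2.0, 2.0, 2.0, 2.0])  # entries on the main diagonal (offset 0)
    off  = np.array([-1.0, -1.0, -1.0])    # entries on the super- and sub-diagonals

    # Tridiagonal matrix: non-zero entries only within a band of width 1 around the main diagonal.
    T = np.diag(main) + np.diag(off, k=1) + np.diag(off, k=-1)
    print(T)

For large systems, dedicated sparse or banded storage formats avoid storing the zeros outside the band at all, which is the efficiency gain referred to above.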
The term "diagonal" originates from the Latin diagonalis (meaning "slanting" or "oblique," derived from Greek dia- "through" and gonia "angle"), and its application to matrices emerged in 19th-century algebraic developments, with foundational work by Arthur Cayley in his 1858 memoir on the theory of matrices.[15]