Diagonal matrix

A diagonal matrix is a square matrix whose entries are zero everywhere except possibly on the main diagonal, which runs from the upper left to the lower right corner. These matrices are fundamental in linear algebra, as they represent the simplest form of linear transformation: one that scales each basis vector independently without mixing coordinates. The properties of diagonal matrices make them particularly useful for computations and theoretical analysis. For instance, the product of two diagonal matrices is another diagonal matrix, where each diagonal entry is the product of the corresponding entries from the factors. Raising a diagonal matrix to a power n results in a diagonal matrix with each entry raised to the nth power, simplifying exponentiation. If all diagonal entries are nonzero, the matrix is invertible, and its inverse is diagonal with reciprocal entries on the diagonal. The determinant of a diagonal matrix is the product of its diagonal entries, and its eigenvalues are precisely these diagonal values, with the standard basis vectors serving as corresponding eigenvectors. Diagonal matrices play a central role in the diagonalization of square matrices, where a matrix A is diagonalizable if it is similar to a diagonal matrix D via an invertible matrix P, such that A = PDP^{-1}, and the diagonal entries of D are the eigenvalues of A. This process is essential for solving systems of linear differential equations, computing matrix powers efficiently, and understanding spectral properties in applications ranging from quantum mechanics to statistics.

Definition and Construction

Formal Definition

A diagonal matrix is a square matrix in which all entries not on the main diagonal are equal to zero. More formally, for an n \times n matrix D = (d_{ij}), D is diagonal if d_{ij} = 0 for all i \neq j, while the diagonal entries d_{ii} (for i = 1, 2, \dots, n) may take arbitrary values, typically denoted as scalars \lambda_i \in \mathbb{R} or \mathbb{C}. The general form of such a matrix is expressed as D = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n), where the \lambda_i are the diagonal elements and all other positions are zero. This notation compactly represents the structure using the diagonal operator \operatorname{diag}. Diagonal matrices are defined exclusively for square matrices, as the notion of off-diagonal entries requires equal dimensions; rectangular matrices do not qualify as diagonal in this sense, although extensions to block-diagonal forms—where the diagonal blocks are themselves square matrices—appear in advanced matrix decompositions. Common notations include boldface \mathbf{D} for the matrix.

Constructing from Vectors

A diagonal matrix can be constructed from a vector using the diag operator, which places the elements of the vector along the main diagonal while setting all off-diagonal entries to zero. This operator provides a concise way to form such matrices in both theoretical and computational contexts. Formally, given an n-element column vector \mathbf{v} = (v_1, v_2, \dots, v_n)^T \in \mathbb{R}^n, the n \times n diagonal matrix D = \operatorname{diag}(\mathbf{v}) is defined by D_{ii} = v_i \quad \text{for } i = 1, \dots, n, and D_{ij} = 0 \quad \text{for } i \neq j. This notation ensures the resulting matrix is square and diagonal, with the vector's components determining the non-zero entries. For example, if \mathbf{v} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, then \operatorname{diag}(\mathbf{v}) = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}. Similarly, for \mathbf{v} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, the result is \operatorname{diag}(\mathbf{v}) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}. These constructions highlight the operator's role in generating simple yet fundamental matrices for matrix operations. In pure mathematical settings, the vector is taken to have length n to yield an n \times n matrix, focusing on the square case. Computational implementations, such as MATLAB's diag function, follow this convention by producing an n \times n diagonal matrix from an n-element vector. The same function also supports extracting the diagonal from a matrix input. The theory of matrices was developed by Arthur Cayley in the 1850s.
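
As a minimal computational sketch (using NumPy's np.diag, an assumed stand-in for the MATLAB diag function cited above), the construction looks like this:

```python
import numpy as np

# Build an n x n diagonal matrix from an n-element vector.
v = np.array([1, 2, 3])
D = np.diag(v)   # places v on the main diagonal, zeros elsewhere
print(D)
# [[1 0 0]
#  [0 2 0]
#  [0 0 3]]
```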

Extracting the Diagonal

The diag operator, when applied to a square matrix, extracts its diagonal elements into a vector. For an n \times n matrix A = (a_{ij}), the notation \operatorname{diag}(A) denotes the column vector consisting of the main diagonal entries, specifically \operatorname{diag}(A) = \begin{pmatrix} a_{11} \\ a_{22} \\ \vdots \\ a_{nn} \end{pmatrix}, disregarding all off-diagonal elements. This convention aligns with common practice in linear algebra, where the output is a column vector, though some contexts may represent it as a row vector for compatibility with specific algorithms or notations. To illustrate, consider the 2 \times 2 matrix A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. Here, \operatorname{diag}(A) = \begin{pmatrix} 1 \\ 4 \end{pmatrix}, capturing only the elements a_{11} = 1 and a_{22} = 4. This extraction is particularly useful for isolating spectral information or simplifying computations involving the matrix's primary axis. A key property of the diag operator is that it acts as a one-sided inverse of the vector-to-matrix construction: if v = \operatorname{diag}(A), then \operatorname{diag}(\operatorname{diag}(v)) = v, recovering the original vector from the constructed diagonal matrix. Computationally, the trace of A, defined as the sum of its diagonal elements, can be obtained directly as \operatorname{tr}(A) = \sum_{i=1}^n [\operatorname{diag}(A)]_i.
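
A short NumPy sketch (the library is an assumption of this example) illustrating extraction, the round-trip property, and the trace:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

d = np.diag(A)                   # applied to a matrix: extracts the main diagonal -> [1 4]
print(d, d.sum(), np.trace(A))   # [1 4] 5 5  (trace as the sum of diagonal entries)

# Round trip: building a diagonal matrix from d and extracting again recovers d.
print(np.diag(np.diag(d)))       # [1 4]
```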

Special Cases

Scalar Matrices

A scalar matrix is a diagonal matrix in which all diagonal entries are equal to the same scalar value c, with off-diagonal entries being zero. This distinguishes scalar matrices from general diagonal matrices, where the diagonal entries may vary arbitrarily while still maintaining zeros off the diagonal. Formally, an n \times n scalar matrix S can be expressed as S = c I_n, where I_n is the n \times n identity matrix and c is a scalar from the underlying field (typically the real or complex numbers). In explicit matrix form, it appears as S = \begin{pmatrix} c & 0 & \cdots & 0 \\ 0 & c & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & c \end{pmatrix}. This structure ensures that scalar matrices represent uniform scaling operations in linear transformations. For example, the scalar matrix 2I_2 is \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, which multiplies any vector in \mathbb{R}^2 by 2, preserving directions while doubling magnitudes. A key property of scalar matrices is that they commute with every square matrix of the same dimension, as multiplication by c I_n simply scales the other matrix uniformly.
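
A quick NumPy check of the commutation property (the library and the sample matrix are assumptions of this example):

```python
import numpy as np

c = 2.0
S = c * np.eye(2)                 # scalar matrix c * I_2
B = np.array([[1.0, 4.0],
              [5.0, 6.0]])        # arbitrary 2 x 2 matrix

print(np.allclose(S @ B, B @ S))  # True: scalar matrices commute with every matrix
print(np.allclose(S @ B, c * B))  # True: multiplying by c*I is just uniform scaling
```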

Identity and Zero Matrices

The identity matrix, denoted I_n or simply I for an n \times n matrix, is a special case of a diagonal matrix where all diagonal entries are 1 and all off-diagonal entries are 0. It serves as the multiplicative identity in the algebra of square matrices of the same order, meaning that for any n \times n matrix A, the product I A = A I = A. This matrix can be expressed in diagonal form as I_n = \operatorname{diag}(1, 1, \dots, 1), with n ones on the main diagonal. For example, the 2×2 identity matrix is \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. The zero matrix, denoted O_n or simply O for an n \times n matrix, is another fundamental diagonal matrix where all entries, including those on the diagonal, are 0. It acts as the additive identity for matrix addition, such that for any n \times n matrix A, O + A = A + O = A. In diagonal notation, it is O_n = \operatorname{diag}(0, 0, \dots, 0), with n zeros on the main diagonal. For instance, the 2×2 zero matrix is \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. Both matrices relate to scalar multiples of the identity: the identity matrix is the scalar 1 times itself (I = 1 \cdot I), while the zero matrix is the scalar 0 times the identity (O = 0 \cdot I). These properties make them essential building blocks in linear algebra, such as in representing neutral transformations or in solving systems of equations.

Operations Involving Diagonal Matrices

Operations with Vectors

A diagonal matrix D multiplies a vector v by scaling each component of v independently by the corresponding diagonal entry of D. Specifically, if D = \operatorname{diag}(d_1, d_2, \dots, d_n), then the product is given by Dv = \begin{pmatrix} d_1 v_1 \\ d_2 v_2 \\ \vdots \\ d_n v_n \end{pmatrix}, where v = (v_1, v_2, \dots, v_n)^T. This operation is equivalent to component-wise multiplication between the diagonal entries and the vector components, making it computationally efficient and preserving the coordinate axes. For example, applying D = \operatorname{diag}(2, 3, 1) to the vector v = (1, 4, 5)^T yields Dv = (2, 12, 5)^T, demonstrating selective scaling. Additionally, diagonal matrices preserve the standard basis vectors up to scaling: for the i-th standard basis vector e_i, D e_i = d_i e_i, which highlights their role in basis-aligned transformations. In the context of inner products, the expression \langle D u, v \rangle for vectors u and v in the standard inner product reduces to \sum_{i=1}^n d_i u_i v_i, as it follows directly from the component-wise scaling in the multiplication D u. This weighted sum arises naturally from the diagonal structure and is a special case of the quadratic form when u = v. Geometrically, multiplication by a diagonal matrix represents a linear transformation that stretches or compresses the vector space along the coordinate axes, with each axis scaled by the factor d_i, without introducing rotation or shearing in the standard basis. This interpretation underscores the utility of diagonal matrices in describing anisotropic scaling in coordinate systems.
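
A small NumPy sketch (assumed for illustration) showing that Dv is plain component-wise scaling and that \langle Du, v \rangle is the corresponding weighted sum:

```python
import numpy as np

d = np.array([2.0, 3.0, 1.0])
v = np.array([1.0, 4.0, 5.0])
u = np.array([1.0, 1.0, 2.0])

print(np.diag(d) @ v)      # [ 2. 12.  5.]  -- matches the example in the text
print(d * v)               # same result, computed in O(n) without forming D

# <Du, v> = sum_i d_i u_i v_i
print(np.dot(np.diag(d) @ u, v), np.sum(d * u * v))   # 24.0 24.0
```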

Operations with Matrices

Diagonal matrices interact with general matrices through standard arithmetic operations, with the diagonal structure simplifying certain computations.

Addition

The addition of a diagonal matrix D = \operatorname{diag}(d_{11}, d_{22}, \dots, d_{nn}) and a general n \times n matrix A = [a_{ij}] yields a matrix D + A whose diagonal entries are (D + A)_{ii} = d_{ii} + a_{ii} for each i, and whose off-diagonal entries are simply those of A, i.e., (D + A)_{ij} = a_{ij} for i \neq j. This follows directly from the element-wise nature of matrix addition, which preserves the off-diagonal zeros of D while adjusting the diagonal. For example, if D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} and A = \begin{pmatrix} 1 & 4 \\ 5 & 6 \end{pmatrix}, then D + A = \begin{pmatrix} 3 & 4 \\ 5 & 9 \end{pmatrix}.

Multiplication

Left multiplication of a general matrix A by a diagonal matrix D, denoted DA, scales the i-th row of A by the scalar d_{ii}, giving (DA)_{ij} = d_{ii} a_{ij} for all i, j. This operation effectively multiplies each row vector of A by the corresponding diagonal element of D. Conversely, right multiplication AD scales the j-th column of A by d_{jj}, with (AD)_{ij} = a_{ij} d_{jj}. Using the previous example, DA = \begin{pmatrix} 2 & 8 \\ 15 & 18 \end{pmatrix} and AD = \begin{pmatrix} 2 & 12 \\ 10 & 18 \end{pmatrix}. These scaling properties arise because the diagonal structure confines non-zero contributions to the rows or columns being multiplied. An illustrative application of these multiplications is the similarity transformation D^{-1} A D, where D must be invertible (i.e., all d_{ii} \neq 0); this amounts to scaling the columns of A by the entries of D and the rows by the reciprocals in D^{-1}.
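
The row- and column-scaling interpretation can be verified directly; a NumPy sketch (assumed for illustration) that also shows the equivalent broadcasts, which avoid forming D at all:

```python
import numpy as np

d = np.array([2.0, 3.0])
D = np.diag(d)
A = np.array([[1.0, 4.0],
              [5.0, 6.0]])

print(D @ A)   # [[ 2.  8.]  [15. 18.]]  -- row i of A scaled by d_i
print(A @ D)   # [[ 2. 12.]  [10. 18.]]  -- column j of A scaled by d_j

# Broadcasting gives the same results without building the full matrix D.
print(np.allclose(D @ A, d[:, None] * A))  # True
print(np.allclose(A @ D, A * d[None, :]))  # True
```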

Commutativity

Diagonal matrices commute with each other under matrix multiplication: if D = \operatorname{diag}(d_{11}, \dots, d_{nn}) and E = \operatorname{diag}(e_{11}, \dots, e_{nn}), then DE = ED = \operatorname{diag}(d_{11} e_{11}, \dots, d_{nn} e_{nn}). This holds because the product aligns the diagonal entries multiplicatively without off-diagonal interference. In general, however, a diagonal matrix does not commute with an arbitrary matrix unless the latter shares compatible structure, such as being diagonal itself. Powers of a diagonal matrix follow from these multiplication rules, yielding another diagonal matrix with each diagonal entry raised to that power.

Algebraic Properties

Addition and Multiplication

Diagonal matrices form a commutative subring of the ring of all n \times n square matrices under the operations of matrix addition and multiplication, as both operations preserve the diagonal structure. The addition of two n \times n diagonal matrices D = \operatorname{diag}(d_1, d_2, \dots, d_n) and E = \operatorname{diag}(e_1, e_2, \dots, e_n) results in another diagonal matrix D + E = \operatorname{diag}(d_1 + e_1, d_2 + e_2, \dots, d_n + e_n). This follows from the entry-wise definition of matrix addition, where the (i,j)-th entry of the sum is (D + E)_{ij} = D_{ij} + E_{ij}; since both matrices have zeros off the diagonal, so does their sum, and the diagonal entries add component-wise as (D + E)_{ii} = d_i + e_i for i = 1, \dots, n. For example, consider D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} and E = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}; their sum is D + E = \begin{pmatrix} 3 & 0 \\ 0 & 7 \end{pmatrix}, which remains diagonal. Multiplication of diagonal matrices is also closed, yielding DE = \operatorname{diag}(d_1 e_1, d_2 e_2, \dots, d_n e_n). The (i,j)-th entry of the product is (DE)_{ij} = \sum_{k=1}^n D_{ik} E_{kj}; for i \neq j, this sum is zero because D_{ik} = 0 unless k = i and E_{kj} = 0 unless k = j, so every term has at least one zero factor. On the diagonal, (DE)_{ii} = d_i e_i. Using the previous example, DE = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 2 \cdot 1 & 0 \\ 0 & 3 \cdot 4 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 12 \end{pmatrix}, again diagonal. Unlike general matrix multiplication, which is non-commutative, the product of two diagonal matrices commutes: DE = ED. This holds because both orderings yield the same component-wise products on the diagonal, and the off-diagonal entries are zero in either order.

Powers and Exponentials

A diagonal matrix raised to a positive integer power k is another diagonal matrix in which each diagonal entry is raised to the same power. Specifically, if D = \operatorname{diag}(d_1, d_2, \dots, d_n), then D^k = \operatorname{diag}(d_1^k, d_2^k, \dots, d_n^k). This follows from the fact that powers of diagonal matrices preserve the diagonal structure, as off-diagonal entries remain zero under repeated multiplication. For example, if D = \operatorname{diag}(1, 2), then D^2 = \operatorname{diag}(1^2, 2^2) = \operatorname{diag}(1, 4). This property extends recursively, since computing higher powers simply multiplies the diagonal entries component-wise at each step. For negative powers, D^{-k} = (D^{-1})^k, and since the inverse of a diagonal matrix is \operatorname{diag}(1/d_1, 1/d_2, \dots, 1/d_n) (assuming all d_i \neq 0), this yields \operatorname{diag}(d_1^{-k}, d_2^{-k}, \dots, d_n^{-k}). This inversion-based approach simplifies the computation compared to general matrices. The matrix exponential of a diagonal matrix is similarly straightforward. Defined via the Taylor series \exp(D) = \sum_{m=0}^\infty \frac{D^m}{m!}, each term D^m / m! is diagonal with entries d_i^m / m!, so the sum converges component-wise to \exp(D) = \operatorname{diag}(\exp(d_1), \exp(d_2), \dots, \exp(d_n)). For the time-dependent case, \exp(D t) = \operatorname{diag}(\exp(d_1 t), \exp(d_2 t), \dots, \exp(d_n t)). An illustrative example is \exp(\operatorname{diag}(0, \ln 2)) = \operatorname{diag}(\exp(0), \exp(\ln 2)) = \operatorname{diag}(1, 2). These operations highlight the computational advantages of diagonal matrices, as powers and exponentials reduce to scalar operations on the diagonal entries, avoiding the full matrix multiplications required for non-diagonal forms and enabling efficient numerical implementations in linear algebra software. This simplification is particularly valuable in solving systems of ordinary differential equations, where the exponential form directly provides solutions in diagonalized bases.
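
A NumPy sketch (assumed for illustration) confirming that powers, inverses, and the exponential act entry-wise on the diagonal:

```python
import numpy as np

D = np.diag([1.0, 2.0])

# Integer powers and the inverse act entry-wise on the diagonal.
print(np.allclose(np.linalg.matrix_power(D, 2), np.diag([1.0, 4.0])))   # True
print(np.allclose(np.linalg.inv(D), np.diag([1.0, 0.5])))               # True

# exp(diag(0, ln 2)) = diag(1, 2): the exponential of the diagonal entries.
d = np.array([0.0, np.log(2.0)])
print(np.diag(np.exp(d)))   # [[1. 0.]
                            #  [0. 2.]]
```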

Spectral and Structural Properties

A diagonal matrix D with diagonal entries d_{11}, d_{22}, \dots, d_{nn} has eigenvalues exactly equal to these diagonal entries, \lambda_i = d_{ii} for i = 1, 2, \dots, n. This follows directly from the definition of eigenvalues, as the action of D on vectors scales components independently along the coordinate axes. The corresponding eigenvectors are the standard basis vectors e_i, where e_i has a 1 in the i-th position and zeros elsewhere, satisfying D e_i = \lambda_i e_i. These eigenvectors form an eigenbasis for the underlying vector space, confirming that every diagonal matrix is diagonalizable over the field of its entries. The characteristic polynomial of D is given by \det(D - \lambda I) = \prod_{i=1}^n (d_{ii} - \lambda), whose roots are precisely the diagonal entries d_{ii}. This product form arises because D - \lambda I is also diagonal, making the determinant the product of its diagonal elements. For each eigenvalue \lambda, the algebraic multiplicity is the number of times \lambda appears as a diagonal entry, and the geometric multiplicity equals this algebraic multiplicity, as the eigenspace is spanned by the corresponding standard basis vectors. This equality holds because the standard basis provides a full set of linearly independent eigenvectors even for repeated eigenvalues. For example, consider the 2 \times 2 diagonal matrix D = \operatorname{diag}(1, 1). Here, \lambda = 1 is the only eigenvalue, with algebraic multiplicity 2, and the eigenspace is the entire \mathbb{R}^2, spanned by all vectors since D v = v for any v. In contrast, for D = \operatorname{diag}(1, 2), the eigenvalues are 1 and 2, each with multiplicity 1, and the eigenvectors are e_1 = (1, 0)^T and e_2 = (0, 1)^T, respectively.
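
Numerically (a NumPy sketch, assumed for illustration), an eigen-decomposition of a diagonal matrix returns exactly the diagonal entries and the standard basis vectors:

```python
import numpy as np

D = np.diag([1.0, 2.0])
eigvals, eigvecs = np.linalg.eig(D)

print(eigvals)    # [1. 2.]  -- the diagonal entries
print(eigvecs)    # identity matrix: columns are the standard basis vectors e_1, e_2

# det(D - lambda*I) = (1 - lambda)(2 - lambda) for any lambda.
lam = 0.5
print(np.isclose(np.linalg.det(D - lam * np.eye(2)), (1 - lam) * (2 - lam)))  # True
```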

Invertibility and Norms

A diagonal matrix D = \operatorname{diag}(d_1, d_2, \dots, d_n) is invertible if and only if every diagonal entry d_i \neq 0 for i = 1, \dots, n. This condition ensures that D has full rank and no zero eigenvalues, making it nonsingular. If any d_i = 0, then D is singular, as its determinant is zero and it maps the corresponding standard basis vector to the zero vector. The inverse of an invertible diagonal matrix D is given explicitly by D^{-1} = \operatorname{diag}(1/d_1, 1/d_2, \dots, 1/d_n), which is also a diagonal matrix. This follows from the fact that the product D \cdot D^{-1} yields the identity matrix, since multiplication of diagonal matrices produces diagonal entries that are products of corresponding elements. For example, consider D = \operatorname{diag}(2, 3); its inverse is \operatorname{diag}(1/2, 1/3), and D \cdot D^{-1} = I_2. Common matrix norms for a diagonal matrix D simplify due to its structure. The operator 2-norm, or spectral norm, \|D\|_2 equals the maximum absolute value of the diagonal entries, \max_i |d_i|. This is because the singular values of D are precisely the |d_i|, and the 2-norm is the largest singular value. The Frobenius norm is \|D\|_F = \sqrt{\sum_{i=1}^n |d_i|^2}, which treats D as a vector of its diagonal elements. The condition number of D with respect to the 2-norm is \kappa_2(D) = \|D\|_2 \|D^{-1}\|_2 = \frac{\max_i |d_i|}{\min_i |d_i|}, assuming D is invertible. This measures the sensitivity of solutions of linear systems involving D to perturbations; a value near 1 indicates good conditioning. For instance, if D = \operatorname{diag}(1, 100), then \kappa_2(D) = 100, suggesting potential numerical instability in computations. Diagonal matrices are generally well-conditioned when their diagonal entries have comparable magnitudes, avoiding extremes that amplify errors.
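
These formulas are easy to check numerically; a NumPy sketch (assumed for illustration):

```python
import numpy as np

d = np.array([1.0, 100.0])
D = np.diag(d)

print(np.linalg.norm(D, 2))        # 100.0  -- spectral norm = max |d_i|
print(np.linalg.norm(D, 'fro'))    # sqrt(1 + 100^2)  -- Frobenius norm
print(np.linalg.cond(D, 2))        # 100.0  -- kappa_2 = max|d_i| / min|d_i|

# The inverse is the entry-wise reciprocal on the diagonal.
print(np.allclose(np.linalg.inv(D), np.diag(1.0 / d)))   # True
```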

Diagonalization and Representations

Diagonal Form in Eigenbases

In linear algebra, when a square matrix A possesses a basis consisting entirely of its eigenvectors, known as an eigenbasis, the representation of A in this basis takes the form of a diagonal matrix. This transformation simplifies the analysis of the matrix's action, as it scales each basis vector by the corresponding eigenvalue without mixing components. The diagonalization is achieved through a similarity transformation, where the matrix P has columns that are the eigenvectors of A. In this setup, the diagonal matrix D has entries D_{ii} = \lambda_i, the eigenvalues associated with each eigenvector. The fundamental relation is then A = P D P^{-1}, where D = \operatorname{diag}(\lambda_1, \dots, \lambda_n), allowing A to be reconstructed from its diagonal form and the eigenbasis. An illustrative example is the 90-degree rotation matrix in the plane, given by \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, which is not diagonalizable over the reals but diagonalizes over the complex numbers to D = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, with corresponding eigenvectors \begin{pmatrix} 1 \\ -i \end{pmatrix} and \begin{pmatrix} 1 \\ i \end{pmatrix}, respectively. This demonstrates how eigenbases reveal the diagonal structure underlying real-world transformations like rotations. If the eigenbasis is orthonormal, the matrix P becomes unitary (i.e., P^{-1} = P^*, the conjugate transpose), leading to a unitary diagonalization P^* A P = D. This holds precisely for normal matrices, those satisfying A^* A = A A^*, ensuring the diagonal form preserves inner products and norms in the eigenbasis.
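
A NumPy sketch (assumed for illustration) of diagonalizing the 90-degree rotation over the complex numbers:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # 90-degree rotation: no real eigenvectors

eigvals, P = np.linalg.eig(A)     # complex eigenvalues +i and -i
D = np.diag(eigvals)

print(eigvals)                                    # [0.+1.j 0.-1.j]
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}
```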

Relation to Diagonalizable Operators

A matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that A = P D P^{-1}. This decomposition expresses A in a basis where it acts by scaling the basis vectors, with the scale factors being the eigenvalues on the diagonal of D. A square matrix A of size n \times n is diagonalizable if and only if, for each eigenvalue, its algebraic multiplicity equals its geometric multiplicity. Equivalently, A has n linearly independent eigenvectors. The algebraic multiplicity of an eigenvalue \lambda is its multiplicity as a root of the characteristic equation \det(A - \lambda I) = 0, while the geometric multiplicity is the dimension of the corresponding eigenspace \ker(A - \lambda I). Over the real numbers, every symmetric matrix is diagonalizable, and moreover it is orthogonally diagonalizable, meaning P can be chosen as an orthogonal matrix so that A = P D P^T. This result, known as the spectral theorem for symmetric matrices, guarantees that symmetric matrices have real eigenvalues and an orthonormal basis of eigenvectors. Diagonal matrices are always diagonalizable, as they are similar to themselves via the identity matrix. In contrast, a Jordan block of size greater than 1 \times 1 with eigenvalue \lambda, such as \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}, is not diagonalizable because its geometric multiplicity is 1, less than its algebraic multiplicity of 2. The diagonal matrix D in a diagonalization is unique up to permutation of its diagonal entries, which are the eigenvalues of A. This uniqueness follows from the fact that the eigenvalues are determined by the characteristic polynomial, independent of the choice of P.
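
A NumPy sketch (assumed for illustration) contrasting an orthogonally diagonalizable symmetric matrix with a non-diagonalizable Jordan block:

```python
import numpy as np

# Real symmetric matrix: orthogonally diagonalizable, A = P D P^T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, P = np.linalg.eigh(A)                      # real eigenvalues, orthonormal eigenvectors
print(np.allclose(A, P @ np.diag(w) @ P.T))   # True
print(np.allclose(P.T @ P, np.eye(2)))        # True: P is orthogonal

# 2x2 Jordan block: eigenvalue 2 has geometric multiplicity 1 < algebraic multiplicity 2.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
geom_mult = 2 - np.linalg.matrix_rank(J - 2.0 * np.eye(2))
print(geom_mult)                              # 1, so J is not diagonalizable
```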

Applications

In Linear Systems and Computations

Diagonal matrices play a pivotal role in solving linear systems due to their simplicity. For a diagonal matrix D and right-hand-side vector b, the system Dx = b has the explicit solution x_i = b_i / d_{ii} for each component i, assuming all d_{ii} \neq 0. This component-wise division requires only O(n) arithmetic operations for an n \times n matrix, in stark contrast to the O(n^3) complexity of Gaussian elimination for general dense systems. In iterative methods for larger sparse systems, diagonal matrices serve as effective preconditioners. The Jacobi method decomposes a matrix as A = D + (A - D), where D is the diagonal part, and iterates x^{(k+1)} = D^{-1}(b - (A - D)x^{(k)}), converging for strictly diagonally dominant A. Diagonal preconditioners, such as those based on the inverse of D, are also integrated into methods like conjugate gradients to reduce condition numbers and accelerate convergence, particularly for ill-conditioned systems arising in partial differential equations. Diagonal structures also appear in decompositions that facilitate fast computations. For instance, circulant matrices—prevalent in signal processing and convolution operations—are diagonalized by the discrete Fourier transform (DFT) matrix F, yielding F^{-1} C F = \Lambda, where \Lambda is diagonal with eigenvalues given by the DFT of the first row of C. This allows efficient solution of circulant systems via the FFT, reducing the work to O(n \log n) operations in the frequency domain rather than O(n^2) direct operations. A practical application involves scaling linear systems to enhance numerical stability. By left-multiplying Ax = b with a diagonal matrix S (row scaling) or right-multiplying with T (column scaling), one can normalize rows or columns—e.g., to unit norm—without altering the solution beyond a known rescaling of the unknowns, improving conditioning for subsequent solvers like Gaussian elimination. This is especially useful in equilibration, balancing the matrix entries before factorization or iterative methods.
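
A NumPy sketch (assumed for illustration) of the O(n) diagonal solve and a few steps of the Jacobi iteration described above:

```python
import numpy as np

# Solving D x = b: one division per component.
d = np.array([2.0, 3.0, 4.0])
b = np.array([4.0, 9.0, 8.0])
print(b / d)                               # [2. 3. 2.]

# Jacobi iteration x^{k+1} = D^{-1}(b - (A - D) x^k) for a strictly
# diagonally dominant A (values chosen arbitrarily for the example).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
diagA = np.diag(A)                         # diagonal part of A
R = A - np.diag(diagA)                     # off-diagonal remainder
x = np.zeros_like(b)
for _ in range(50):
    x = (b - R @ x) / diagA
print(np.allclose(A @ x, b))               # True: the iteration has converged
```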

In Spectral Theory and Physics

In spectral theory, the spectral theorem asserts that any Hermitian matrix A can be unitarily diagonalized, meaning there exists a unitary matrix U such that U^\dagger A U = D, where D is a real diagonal matrix containing the eigenvalues of A. This decomposition is fundamental because it reveals the intrinsic structure of Hermitian operators, allowing them to be expressed in a basis where they act simply by scaling the basis vectors. In quantum mechanics, observables are represented by Hermitian operators, ensuring real eigenvalues and the existence of an orthonormal eigenbasis via the spectral theorem. The Hamiltonian operator \hat{H}, which governs the dynamics, is Hermitian and thus diagonal in its energy eigenbasis \{ |n\rangle \}, where \hat{H} |n\rangle = E_n |n\rangle and the E_n are the energy eigenvalues. This diagonal form simplifies the solution of the Schrödinger equation: the time-evolution operator is \hat{U}(t) = \exp(-i \hat{H} t / \hbar), which becomes diagonal in the energy basis as \langle n' | \hat{U}(t) | n \rangle = \delta_{n'n} \exp(-i E_n t / \hbar), allowing stationary states to evolve only by acquiring a phase factor. A canonical example is the quantum harmonic oscillator, whose Hamiltonian \hat{H} = \hbar \omega (\hat{a}^\dagger \hat{a} + 1/2) is diagonal in the number basis \{ |n\rangle \}, with eigenvalues E_n = \hbar \omega (n + 1/2) for n = 0, 1, 2, \dots. In this basis, the time evolution of an eigenstate is particularly straightforward, as |n(t)\rangle = \exp(-i E_n t / \hbar) |n\rangle, highlighting the oscillator's quantized levels and its evolution without mixing between states. Beyond quantum mechanics, diagonal matrices arise in statistics through principal component analysis (PCA), where the covariance matrix of multivariate data is diagonalized to yield uncorrelated principal components. Specifically, PCA transforms the data via the eigenvectors of the covariance matrix, resulting in a diagonal covariance structure that achieves decorrelation, with diagonal entries representing the variances along each principal axis. This is essential for reducing dimensionality while preserving signal variance, as originally formalized in statistical contexts. In classical mechanics, diagonal matrices describe normal modes of vibration in coupled systems, such as mass-spring networks. For a system of N coupled oscillators, the equations of motion lead to a generalized eigenvalue problem involving the mass and stiffness matrices; solving it yields a modal matrix that diagonalizes the system, transforming the coupled dynamics into N independent harmonic oscillators whose frequencies are the square roots of the eigenvalues. This diagonal representation simplifies the analysis of vibrations, as each normal mode oscillates independently without energy exchange.
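
As a sketch of the PCA decorrelation step (NumPy-based; the synthetic data and the mixing matrix are assumptions of this example), diagonalizing the sample covariance and projecting onto its eigenvectors yields a diagonal covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated synthetic data: independent samples pushed through a mixing matrix.
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.5]])
X -= X.mean(axis=0)

C = np.cov(X, rowvar=False)      # sample covariance: off-diagonal entries are nonzero
w, V = np.linalg.eigh(C)         # eigen-decomposition of the symmetric covariance

Y = X @ V                        # coordinates along the principal axes
C_Y = np.cov(Y, rowvar=False)
print(np.allclose(C_Y, np.diag(w), atol=1e-10))   # True: diagonal covariance,
                                                  # variances equal the eigenvalues
```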
