Hollow matrix

In mathematics, the term "hollow matrix" can refer to several related concepts: a sparse matrix, a matrix with a large block of zeros, or a square matrix whose diagonal entries are all zero (a zero-diagonal matrix). This article primarily discusses zero-diagonal matrices. A zero-diagonal matrix is a square matrix whose diagonal entries are all equal to zero. This structure distinguishes it from general matrices by ensuring that the trace, the sum of the diagonal elements, is identically zero. Zero-diagonal matrices appear frequently in linear algebra and graph theory, where they model relationships without self-loops; for example, the adjacency matrix of a simple graph has zeros on the diagonal to represent the absence of edges from a vertex to itself. Symmetric zero-diagonal matrices, in particular, are real symmetric matrices with zero diagonals and off-diagonal entries determined by adjacencies, making them central to inverse eigenvalue problems (IEPs) for graphs. These problems seek to characterize the possible spectra (sets of eigenvalues) of matrices constrained to a given structure, with applications in the natural and social sciences for modeling interconnected systems. Key properties include the symmetry of the spectrum about the origin when the underlying graph is bipartite: if \lambda is an eigenvalue, so is -\lambda. For nonnegative symmetric zero-diagonal n \times n matrices, the number of nonpositive eigenvalues can be any value from 2 to n-1, highlighting constraints on their spectral behavior. Zero-diagonal matrices also extend to specialized forms, such as hollow integer matrices whose eigenvalues are bounded from below by specific constants (e.g., \lambda^* = \rho^{1/2} + \rho^{-1/2} \approx 2.01980), which are studied for their combinatorial and spectral bounds. Additionally, the Moore-Penrose pseudoinverse of a symmetric zero-diagonal matrix relates to predistance and Euclidean distance matrices, aiding geometric and optimization contexts.

Definitions

The term "hollow matrix" most commonly refers to a square matrix with zero diagonal entries, though it has been used in other specialized contexts such as non-commutative algebra and computational mathematics.

Zero-diagonal matrix

A hollow matrix is an n \times n square matrix A = (a_{ij}) whose diagonal entries satisfy a_{ii} = 0 for all i = 1, \dots, n. This structural property ensures that no non-zero elements appear on the main diagonal, while the off-diagonal entries may be arbitrary. Any such matrix can be expressed as A = B - \operatorname{diag}(B), where B is a square matrix sharing the same off-diagonal elements as A, and \operatorname{diag}(B) denotes the diagonal matrix extracting the main diagonal of B; nevertheless, the defining characteristic remains the direct imposition of zeros on the diagonal. The term "hollow" evokes the visual absence of entries along the main diagonal, akin to an empty core in the matrix structure. For instance, the 2 \times 2 skew-symmetric matrix \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} exemplifies a hollow matrix, as all skew-symmetric matrices inherently possess zero diagonals.
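The decomposition A = B - \operatorname{diag}(B) is easy to verify numerically. A minimal NumPy sketch (the matrix B below is an arbitrary example chosen for illustration):

```python
import numpy as np

# An arbitrary square matrix B; subtracting its diagonal part "hollows" it.
B = np.array([[5.0, 1.0, 2.0],
              [3.0, 7.0, 4.0],
              [6.0, 8.0, 9.0]])

# np.diag(np.diag(B)) is the diagonal matrix built from B's main diagonal.
A = B - np.diag(np.diag(B))

# A is hollow: every diagonal entry is zero, off-diagonals match B.
assert np.all(np.diag(A) == 0)
assert np.all(A[np.triu_indices(3, k=1)] == B[np.triu_indices(3, k=1)])
print(A)
```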

Block of zeros matrix

A hollow matrix in the block-of-zeros interpretation is an n \times n square matrix containing an r \times s submatrix of all zeros such that r + s > n. This size condition ensures the zero block exceeds the matrix's linear dimension in a combined sense, effectively hollowing out a substantial interior portion and leaving the non-zero entries to form a frame-like perimeter structure. The zero block may be positioned off the diagonal, at a corner, or elsewhere, but its dominance in size, which prevents the matrix from being made fully dense via invertible row and column operations, defines its hollowness, in contrast to a completely zero matrix, which lacks any non-trivial structure. In non-commutative algebra, such matrices are characterized as not full, meaning their inner rank is less than n, which allows a factorization revealing the large zero block. For instance, the following 3 \times 3 matrix features a top-left 2 \times 2 zero block, with r = 2, s = 2, and 2 + 2 = 4 > 3:

\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 2 \\ 3 & 4 & 5 \end{pmatrix}

This arrangement hollows the matrix, confining the non-zeros to the right column and bottom row in a frame-like border. This interpretation arises in the algebraic study of linear representations over free rings, where it denotes matrices with inherent structural deficiencies; the underlying notion of a full matrix was introduced by P. M. Cohn in his foundational work on non-commutative rings. Such block-structured hollowness also connects briefly to sparse matrix contexts by enforcing patterned zero distributions that enhance computational efficiency in numerical methods.
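The size condition r + s > n from this definition can be checked directly on the example above; a small NumPy sketch:

```python
import numpy as np

# The 3x3 example from the text: a 2x2 zero block in the top-left corner.
M = np.array([[0, 0, 1],
              [0, 0, 2],
              [3, 4, 5]])

r, s, n = 2, 2, M.shape[0]

# The block-of-zeros definition requires an all-zero r x s submatrix
# with r + s > n; here 2 + 2 = 4 > 3.
assert np.all(M[:r, :s] == 0)
assert r + s > n
print("hollow in the block-of-zeros sense:", r + s, ">", n)
```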

Sparse matrix interpretation

In mathematics, the term "hollow matrix" has occasionally been used to denote a sparse matrix, which is characterized by having far fewer non-zero entries than the total number of elements, typically much fewer than n^2 for an n \times n matrix, enabling specialized storage and computational techniques that differ from those for dense matrices. This usage emphasizes efficiency in handling matrices with predominantly zero entries, distinguishing them from fully populated ones. The "hollow" aspect in this interpretation often suggests a structural pattern of sparsity, such as zeros filling the interior regions, with the non-zero elements concentrated near the borders or in specific patterns like bands. For instance, permutation matrices, which contain exactly one non-zero entry per row and column, exemplify hollow sparse matrices: extremely sparse yet highly structured. A tridiagonal matrix with zeros on the main diagonal also qualifies as a hollow sparse matrix, illustrating banded sparsity where the non-zeros are limited to the sub- and super-diagonals. This sparse interpretation overlaps with the zero-diagonal one, but applies more broadly to any low-density pattern without requiring a zero diagonal.
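The tridiagonal example is simple to build and measure. A NumPy sketch (dense storage is used here purely to count non-zeros; a real application would use a sparse format):

```python
import numpy as np

n = 6
# Tridiagonal matrix with a zero main diagonal: non-zeros only on the
# sub- and super-diagonals, a banded example of a hollow sparse matrix.
A = np.diag(np.ones(n - 1), k=1) + np.diag(np.ones(n - 1), k=-1)

nonzeros = np.count_nonzero(A)
density = nonzeros / A.size

assert np.all(np.diag(A) == 0)  # hollow: zero main diagonal
print(f"{nonzeros} non-zeros out of {A.size} entries (density {density:.2f})")
```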

Properties of zero-diagonal matrices

Algebraic invariants

A zero-diagonal matrix, also known as a hollow matrix, has all of its diagonal entries equal to zero. One fundamental algebraic invariant is the trace, defined as the sum of the diagonal entries: for an n \times n zero-diagonal matrix A = (a_{ij}), \operatorname{tr}(A) = \sum_{i=1}^n a_{ii} = 0. The determinant of a zero-diagonal matrix is not necessarily zero, and no simple closed-form expression exists in general. For example, the 2 \times 2 zero-diagonal matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} has determinant -1 \neq 0 and is thus invertible, while the zero matrix is singular with determinant zero and rank zero. In the special case of skew-symmetric matrices (which are inherently hollow, since the diagonal must vanish to satisfy A^T = -A), the determinant of an even-dimensional matrix equals the square of its Pfaffian, a polynomial in the entries. The rank of a zero-diagonal matrix satisfies \operatorname{rank}(A) \leq n, with equality possible despite the zero diagonal; for instance, the matrix J - I with zeros on the diagonal and ones elsewhere, where J is the all-ones matrix and I the identity, has full rank n for n \geq 2, since its eigenvalues are n-1 (multiplicity 1) and -1 (multiplicity n-1), yielding a non-zero determinant. The set of zero-diagonal matrices is closed under addition: if A and B are zero-diagonal, then (A + B)_{ii} = a_{ii} + b_{ii} = 0 + 0 = 0, so A + B is zero-diagonal. However, it is not closed under multiplication: the i-th diagonal entry of AB is \sum_{k=1}^n a_{ik} b_{ki}, which need not be zero unless A and B have additional structure. Over fields of characteristic not equal to 2, the rank of a skew-symmetric matrix is always even.
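Both worked examples above, the invertible 2 \times 2 hollow matrix and the full-rank matrix J - I, can be verified numerically:

```python
import numpy as np

# Example 1: the 2x2 hollow matrix [[0,1],[1,0]] has determinant -1,
# so a zero diagonal does not force singularity.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert A.trace() == 0
assert np.isclose(np.linalg.det(A), -1.0)

# Example 2: J - I (zeros on the diagonal, ones elsewhere) has full rank
# for n >= 2, with eigenvalues n-1 (once) and -1 (n-1 times).
n = 5
JI = np.ones((n, n)) - np.eye(n)
eigvals = np.sort(np.linalg.eigvalsh(JI))
assert np.allclose(eigvals, [-1.0] * (n - 1) + [n - 1.0])
assert np.linalg.matrix_rank(JI) == n
print("eigenvalues of J - I:", eigvals)
```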

Spectral properties

The spectral properties of zero-diagonal hollow matrices are strongly shaped by the absence of diagonal entries, which centers the relevant bounding regions at the origin of the complex plane. By the Gershgorin circle theorem, every eigenvalue \lambda of an n \times n hollow matrix A = (a_{ij}) with a_{ii} = 0 for all i lies in the union of the n closed disks D_i centered at 0 with radii R_i = \sum_{j \neq i} |a_{ij}|, the i-th absolute row sum excluding the zero diagonal entry. Thus |\lambda| \leq \max_{1 \leq i \leq n} R_i for every eigenvalue \lambda. This immediately yields a bound on the spectral radius \rho(A) = \max \{ |\lambda| : \lambda \text{ an eigenvalue of } A \}, namely \rho(A) \leq \max_i R_i, reflecting the collective influence of the off-diagonal magnitudes. The value 0 is not in general an eigenvalue of a hollow matrix; for example, the eigenvalues of the 2 \times 2 hollow matrix \begin{pmatrix} 0 & 1 \\ 2 & 0 \end{pmatrix} are \pm \sqrt{2}. However, specific subclasses guarantee its inclusion: real skew-symmetric matrices (inherently hollow, as their diagonals vanish) have purely imaginary or zero eigenvalues, and for odd n, 0 must be an eigenvalue, since \det(A) = \det(A^T) = \det(-A) = (-1)^n \det(A) forces \det(A) = 0. To illustrate the Gershgorin bound, consider the 5 \times 5 hollow matrix

A = \begin{pmatrix} 0 & 2 & 6 & \frac{1}{3} & 4 \\ 2 & 0 & 4 & 8 & 0 \\ 9 & 4 & 0 & 2 & 933 \\ 1 & 4 & 4 & 0 & 6 \\ 7 & 9 & 23 & 8 & 0 \end{pmatrix}.

The off-diagonal absolute row sums are approximately 12.333, 14, 948, 15, and 47, so \max_i R_i = 948 and every eigenvalue satisfies |\lambda| \leq 948. Among symmetric nonnegative hollow matrices, nonpositive eigenvalues are particularly notable: any number k with 2 \leq k \leq n-1 of them is achievable, highlighting the flexibility of the spectrum within this constrained class.
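The Gershgorin bound for the 5 \times 5 example can be reproduced directly; a NumPy sketch:

```python
import numpy as np

A = np.array([
    [0, 2,  6, 1/3,   4],
    [2, 0,  4,   8,   0],
    [9, 4,  0,   2, 933],
    [1, 4,  4,   0,   6],
    [7, 9, 23,   8,   0],
], dtype=float)

# For a hollow matrix every Gershgorin disk is centered at 0, so the
# maximum off-diagonal absolute row sum bounds the spectral radius.
R = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
bound = R.max()

spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
assert spectral_radius <= bound
print(f"rho(A) = {spectral_radius:.3f} <= {bound:.3f}")
```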

Applications

Graph theory

In graph theory, the adjacency matrix of a simple undirected graph G without self-loops is a symmetric hollow matrix, with zeros along the diagonal and off-diagonal entries of 0 or 1 indicating the absence or presence of edges between distinct vertices. This zero-diagonal structure directly encodes the absence of loops, ensuring that the trace of the adjacency matrix A(G) is zero, which in turn implies that the sum of its eigenvalues is zero. The spectrum of A(G) thus yields the graph's eigenvalues, which capture key structural features such as connectivity and expansion properties. A representative example is the cycle graph C_n, whose adjacency matrix is a hollow circulant matrix; its eigenvalues are 2 \cos \left( \frac{2\pi k}{n} \right) for integers k = 0, 1, \dots, n-1, reflecting the graph's regular and symmetric nature. Combinatorially, the entries of the powers A(G)^m count the number of walks of length m between vertices, and the hollow diagonal prevents the inclusion of trivial self-steps in these counts. The graph Laplacian L = D - A(G), where D is the diagonal matrix of vertex degrees, further leverages this hollow form by isolating edge-based differences in its off-diagonal entries.
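The cycle-graph spectrum and the walk-counting property can both be checked in a few lines of NumPy (here for C_6):

```python
import numpy as np

n = 6
# Adjacency matrix of the cycle graph C_n: hollow, symmetric, circulant.
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1

assert np.all(np.diag(A) == 0)   # no self-loops
assert A.trace() == 0            # so the eigenvalues sum to zero

# Spectrum: 2*cos(2*pi*k/n) for k = 0, ..., n-1.
expected = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), expected)

# (A^m)_{ij} counts walks of length m from i to j; from any vertex of
# C_6 there are exactly 2 closed walks of length 2 (out and back).
A2 = np.linalg.matrix_power(A, 2)
assert np.all(np.diag(A2) == 2)
print("spectrum of C_6:", np.round(expected, 3))
```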

Distance geometry

In distance geometry, a Euclidean distance matrix arises from a set of points p_1, \dots, p_n in Euclidean space, where the entry D_{ij} = \|p_i - p_j\|^2 for i \neq j and D_{ii} = 0, rendering the matrix symmetric and hollow. This zero-diagonal structure, inherent to self-distances being zero, ensures the matrix captures pairwise squared distances without redundant diagonal information. A predistance matrix extends this concept: a symmetric hollow matrix P with nonnegative off-diagonal entries p_{ij} \geq 0 serves as an approximation of actual distances in applications like multidimensional scaling, where it models dissimilarities to be embedded into a lower-dimensional Euclidean space. Such matrices are central to realizing point configurations, as embeddability requires transforming P into a valid Euclidean distance matrix via checks on associated Gram matrices. Hollow matrices facilitate embedding points from distance data through Cayley-Menger determinants, which compute volumes of simplices from pairwise distances and maintain zero self-distances in the realization. For a symmetric predistance matrix P, the Moore-Penrose pseudoinverse P^+ relates to the pseudoinverse of the corresponding centered matrix, exploiting the rank structure of hollow matrices for efficient computation. In optimization contexts, the hollow constraint is imposed in formulations of distance realization problems, ensuring feasible embeddings while minimizing deviations from the predistances; these approaches trace back to foundational work on metric embedding, including techniques for molecular conformations. Recent extensions include non-convex completion of Euclidean distance matrices using Riemannian optimization, where the hollow symmetry is preserved in the upper-triangular sampling. Hollow matrices have also seen recent use in machine learning, for example in graph neural networks for molecular modeling, where the hollow structure encodes inter-node relations without self-connections.
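The connection between a hollow Euclidean distance matrix and its Gram matrix can be sketched with classical double centering (the random planar point set below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Four points in R^2; D_ij = ||p_i - p_j||^2 gives a hollow symmetric EDM.
P = rng.standard_normal((4, 2))
G = P @ P.T
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G

assert np.allclose(np.diag(D), 0)   # hollow: zero self-distances
assert np.allclose(D, D.T)          # symmetric

# Classical double centering B = -1/2 J D J recovers a Gram matrix of
# rank <= 2, certifying that D is embeddable in the plane.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D @ J
eigs = np.linalg.eigvalsh(B)
assert np.sum(eigs > 1e-9) <= 2     # at most 2 significant eigenvalues
assert np.all(eigs > -1e-9)         # positive semidefinite
print("Gram eigenvalues:", np.round(eigs, 6))
```

This is the same centering step used in classical multidimensional scaling: the eigendecomposition of B yields coordinates that realize the distances in D.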
