In mathematics, particularly linear algebra, a zero matrix (also known as a null matrix) is an m \times n matrix in which every entry is the number zero.[1] It functions as the additive identity in the vector space of all m \times n matrices, meaning that adding the zero matrix to any other matrix of the same dimensions yields the original matrix unchanged.[2]

The zero matrix is typically denoted by O or simply 0, with subscripts indicating its dimensions if necessary (e.g., O_{m \times n}).[2] Unlike the scalar zero, which is unique, there is a distinct zero matrix for each pair of dimensions m and n, allowing rectangular as well as square forms.[1] For instance, the 2 \times 2 zero matrix is \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, while a 2 \times 3 example is \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.[2] These attributes make the zero matrix indispensable for proofs involving matrix identities and vector space structures.[1]
Fundamentals
Definition
A matrix is a rectangular array of numbers arranged in rows and columns, where the individual numbers are called entries.[3] The zero matrix is the matrix analog of the scalar zero, serving as the additive identity in the set of all m \times n matrices under matrix addition.[4]

Formally, an m \times n zero matrix is defined as a matrix A = (a_{ij}) where every entry satisfies a_{ij} = 0 for all i = 1, \dots, m and j = 1, \dots, n.[5] This definition applies to both rectangular matrices (where m \neq n) and square matrices (where m = n).[6] In the square case, it is often referred to as the zero matrix of order n.[7]
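The definition translates directly into code. The following is a minimal sketch in plain Python; the function name zero_matrix is illustrative, not a standard library call:

```python
# A dependency-free sketch of the definition: build an m x n zero matrix
# as a nested list and check that a_ij = 0 for all i, j.
def zero_matrix(m, n):
    return [[0] * n for _ in range(m)]

A = zero_matrix(2, 3)
assert all(entry == 0 for row in A for entry in row)
print(A)  # [[0, 0, 0], [0, 0, 0]]
```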
Notation and Examples
The zero matrix, also known as the null matrix, is commonly denoted by the symbol O or simply 0, with subscripts indicating its dimensions when necessary, such as O_{m \times n} for an m \times n matrix where all entries are zero.[2][8] For the zero vector, which is a special case of the zero matrix as a 1 \times n or n \times 1 matrix, the boldface notation \mathbf{0} is frequently used to distinguish it from the scalar zero.[9] In the context of square matrices of order n, it is often abbreviated as O_n or called the zero matrix of order n.[2]

A concrete example of a 2 \times 2 zero matrix is:

\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},

where every element in the array is the number zero, forming a rectangular grid of zeros.[8] A 1 \times 3 row vector zero matrix appears as [0 \ 0 \ 0], a single row filled entirely with zeros.[8] Similarly, a 3 \times 1 column vector zero matrix is:

\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},

depicting a vertical array of three zeros.[8] In its most trivial form, the 1 \times 1 zero matrix is simply [0], a single entry consisting of zero, which visually resembles the scalar zero but is treated as a matrix for consistency in array representations.[8] These notations and examples highlight the zero matrix's uniform structure as an array devoid of non-zero elements, facilitating its role in matrix theory.[2]
Algebraic Properties
Additive Properties
The zero matrix serves as the additive identity in the algebra of matrices. For any m \times n matrix A = (a_{ij}) and the zero matrix O_{m \times n} = (0) of the same dimensions, the sum A + O_{m \times n} is defined element-wise as the matrix whose (i,j)-th entry is a_{ij} + 0 = a_{ij}, yielding A + O_{m \times n} = A.[10][11] Similarly, O_{m \times n} + A = A, since addition of matrices is commutative, with the (i,j)-th entry again 0 + a_{ij} = a_{ij}.[12][4]

To outline the verification, consider the definition of matrix addition: if B = (b_{ij}), then (A + B)_{ij} = a_{ij} + b_{ij} for all i = 1, \dots, m and j = 1, \dots, n. Substituting b_{ij} = 0 for all entries of O_{m \times n} confirms that each entry of the sum matches A, establishing the identity property across the entire set of m \times n matrices under addition.[13][14]

The zero matrix is the unique element satisfying this additive identity property. Suppose there exists another matrix Z such that A + Z = A for every m \times n matrix A. Substituting A = O_{m \times n} gives O_{m \times n} + Z = O_{m \times n}; since O_{m \times n} + Z = Z by the identity property, it follows that Z = O_{m \times n}.[15][16] This uniqueness holds within the abelian group formed by the set of all m \times n matrices with addition as the operation.[12]

In relation to additive inverses, the zero matrix plays the central role as the result of adding any matrix to its inverse. For every m \times n matrix A, there exists a unique additive inverse -A = (-a_{ij}) such that A + (-A) = O_{m \times n}, where the element-wise sum yields zeros in every position, confirming O_{m \times n} as the identity that "neutralizes" the pair.[5][17] This property underscores the zero matrix's position as the group identity in matrix addition.[18]

Commutativity of matrix addition directly reinforces the zero matrix's symmetric role: O_{m \times n} + A = A + O_{m \times n} = A follows from the general property that addition is independent of order, with element-wise operations ensuring the equality holds identically for the zero entries.[12][19]
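These identity and inverse properties are easy to check numerically; below is a minimal sketch using NumPy (an illustrative assumption, consistent with the computing examples later in this article):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(2, 3))   # an arbitrary 2 x 3 integer matrix
O = np.zeros((2, 3), dtype=int)        # the matching zero matrix

# Additive identity: A + O = O + A = A.
assert (A + O == A).all() and (O + A == A).all()

# Additive inverse: A + (-A) = O.
assert (A + (-A) == O).all()
```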
Multiplicative Properties
The zero matrix is absorbing under matrix multiplication: any compatible product with a zero matrix yields a zero matrix. Specifically, for any m \times n zero matrix \mathbf{O}_{m \times n} and any n \times p matrix \mathbf{B}, the product \mathbf{O}_{m \times n} \mathbf{B} = \mathbf{O}_{m \times p}. This follows from the element-wise definition of matrix multiplication: the (i,j)-th entry of the product is \sum_{k=1}^n (\mathbf{O}_{m \times n})_{ik} B_{kj} = \sum_{k=1}^n 0 \cdot B_{kj} = 0. Similarly, for any q \times n matrix \mathbf{C}, \mathbf{C} \mathbf{O}_{n \times p} = \mathbf{O}_{q \times p}, as the (i,j)-th entry is \sum_{k=1}^n C_{ik} (\mathbf{O}_{n \times p})_{kj} = \sum_{k=1}^n C_{ik} \cdot 0 = 0. These relations highlight the zero matrix's role in annihilating any matrix it multiplies, producing another zero matrix whenever the dimensions are compatible.[20]

For square zero matrices, these multiplicative traits imply non-invertibility. An n \times n zero matrix \mathbf{O}_{n \times n} has determinant \det(\mathbf{O}_{n \times n}) = 0, as the presence of all-zero rows (or columns) yields a zero value under the determinant expansion.[20] Consequently, it is singular and possesses no multiplicative inverse, a property holding for all n \geq 1. This singularity follows directly from absorbing multiplication: if an inverse \mathbf{A}^{-1} existed, then \mathbf{O}_{n \times n} \mathbf{A}^{-1} = \mathbf{I}_n would have to hold, yet the product \mathbf{O}_{n \times n} \mathbf{A}^{-1} is itself the zero matrix, which differs from the identity matrix for every n \geq 1.

The zero matrix further demonstrates these multiplicative characteristics through its rank and nullity. For an n \times n zero matrix, the rank is 0, reflecting the absence of linearly independent rows or columns, as all are zero vectors. By the rank-nullity theorem, the nullity (the dimension of the null space, i.e., the set of solutions to \mathbf{O}_{n \times n} \mathbf{x} = \mathbf{0}) is then n, encompassing the entire \mathbb{R}^n (or ambient field space), since every vector satisfies the equation.[20]

In contrast to its role as the additive identity (where \mathbf{A} + \mathbf{O} = \mathbf{A} preserves the other matrix), multiplication by the zero matrix universally yields the zero matrix, erasing the structure of the multiplicand. This distinction underscores the zero matrix's dual behavior in matrix algebra: neutral under addition but absorbing under multiplication.
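A short numerical check of the absorbing products, the zero determinant, and the zero rank, sketched with NumPy under the same assumptions as the earlier example:

```python
import numpy as np

n = 3
O = np.zeros((n, n))
B = np.arange(1.0, n * n + 1).reshape(n, n)  # an arbitrary nonzero matrix

# Absorbing multiplication: O @ B and B @ O are both zero matrices.
assert (O @ B == 0).all() and (B @ O == 0).all()

# Singularity and rank: det(O) = 0 and rank(O) = 0, so nullity = n.
print(np.linalg.det(O))          # 0.0
print(np.linalg.matrix_rank(O))  # 0
```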
Applications and Contexts
In Linear Algebra
In linear algebra, the zero matrix \mathbf{0} of appropriate dimensions represents the zero transformation T: V \to W, defined by T(\mathbf{v}) = \mathbf{0} for all \mathbf{v} \in V, where V and W are vector spaces over the same field.[21] This transformation is linear, as it preserves addition and scalar multiplication: T(\mathbf{u} + \mathbf{v}) = \mathbf{0} = T(\mathbf{u}) + T(\mathbf{v}) and T(c\mathbf{u}) = \mathbf{0} = c T(\mathbf{u}) for any scalar c.[21] In any choice of bases for V and W, the matrix of this transformation is the zero matrix, since the images of the basis vectors of V are all the zero vector in W.[21]

The kernel of the zero transformation is the entire domain V, as T(\mathbf{v}) = \mathbf{0} holds for every \mathbf{v} \in V.[21] Its image is the zero subspace \{\mathbf{0}\} of W, the trivial subspace, represented in coordinates by the zero matrix.

For the homogeneous system \mathbf{Ax} = \mathbf{0} where \mathbf{A} is the zero matrix, every vector \mathbf{x} in the domain satisfies the equation, so the solution set is the entire space (infinitely many solutions over an infinite field such as \mathbb{R}).

The n \times n zero matrix has characteristic polynomial \det(\mathbf{0} - \lambda I_n) = (-\lambda)^n, so all eigenvalues are 0 with algebraic multiplicity n.[22] This reflects its rank of zero, as noted in its multiplicative properties.[22]
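The eigenvalue and null-space statements above can be verified numerically; a minimal NumPy sketch:

```python
import numpy as np

n = 4
O = np.zeros((n, n))

# All eigenvalues of the n x n zero matrix are 0 (multiplicity n).
print(np.linalg.eigvals(O))  # [0. 0. 0. 0.]

# Every vector solves O x = 0, so the null space is all of R^n.
x = np.arange(1.0, n + 1)
assert (O @ x == 0).all()
```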
In Computing and Other Fields
In computing, zero matrices are fundamental for initializing data structures in numerical libraries, enabling efficient storage and subsequent operations on arrays. For instance, Python's NumPy library provides the np.zeros(shape, dtype=float) function, which returns a new array of the given shape filled entirely with zeros, optimizing memory allocation for matrix computations. Similarly, MATLAB's zeros(m,n) function creates an m-by-n matrix of zeros, commonly used to preallocate space for iterative algorithms or as placeholders in simulations.[23] These functions ensure type consistency and avoid uninitialized-memory issues, facilitating vectorized operations on large datasets.

Due to their uniform zero entries, zero matrices exemplify the benefits of sparse representations in computing, where storage records only the non-zero values and their positions to minimize memory footprint. In formats like MATLAB's sparse matrices, a zero matrix requires no value storage (only metadata for its dimensions), potentially reducing memory usage from the O(mn) of a dense representation to O(1), which is crucial for handling massive datasets in scientific computing.[24] This approach extends to libraries like SciPy in Python, where sparse zero matrices serve as efficient defaults in solvers for linear systems or graph algorithms; see the sketch at the end of this section.

In physics and engineering, the zero stress tensor represents a state of mechanical equilibrium in the absence of body forces and surface tractions, such as an unloaded solid or vacuum conditions with no shear or normal stresses.[25] In statistics, a zero covariance matrix signifies a degenerate distribution in which all variables have zero variance, implying that a zero-mean random vector is deterministically zero.[26]

In graph theory, the adjacency matrix of a null graph (an empty graph with vertices but no edges) is the zero matrix, where all entries are zero, indicating the absence of connections between any pair of vertices.[27]
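To make the dense-versus-sparse storage contrast concrete, here is a minimal Python sketch, assuming NumPy and SciPy are available (exact byte counts depend on dtype and platform; float64 is shown):

```python
import numpy as np
from scipy import sparse

# Dense zero matrix: NumPy allocates and fills the entire array.
dense = np.zeros((1000, 1000))           # 10^6 float64 entries

# Sparse zero matrix: only shape and empty index metadata are stored.
empty = sparse.csr_matrix((1000, 1000))  # no non-zero values to store

print(dense.nbytes)  # 8000000 bytes for the dense zeros
print(empty.nnz)     # 0 stored entries in the sparse form
```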