Determinant
In linear algebra, the determinant is a scalar (generally real or complex) value that can be defined for square matrices, for linear endomorphisms of a finite-dimensional vector space, and for ordered families of n vectors in an n-dimensional vector space relative to a given basis. The determinant encodes information about invertibility, linear independence, orientation, and volume. It can be thought of as an "oriented volume": the factor by which a linear map changes the volume of an elementary parallelotope, with its sign indicating whether the map preserves or reverses orientation; the determinant is zero if and only if the linear map is non-invertible and "squeezes" parallelotopes down to lower dimensions.[1][2] For a square matrix, it is computed from the entries and encodes essential information about the matrix, including whether it is invertible and the factor by which the associated linear transformation scales volumes in the corresponding vector space.[3][4][5] Formally, the determinant can be defined axiomatically through its behavior under elementary row operations: it remains unchanged when a multiple of one row is added to another, multiplies by a scalar when a row is scaled by that factor, changes sign when two rows are swapped, and equals 1 for the identity matrix.[3] Alternatively, it admits an explicit formula known as the Leibniz formula, which sums over all permutations of the matrix indices, each term being the product of the corresponding entries with a sign determined by the parity of the permutation.
Specifically, \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i\sigma(i)}.[6][7] The concept of determinants emerged in the late 17th century with Gottfried Wilhelm Leibniz, who studied values associated with arrays of numbers for solving systems of equations, and was advanced in the 1750s by Gabriel Cramer, who introduced Cramer's rule for linear systems, though without full proofs for higher dimensions.[8] Key properties of determinants include multiplicativity, where the determinant of a product of matrices equals the product of their determinants, and the fact that the determinant of a matrix equals that of its transpose.[3] For triangular matrices, the determinant is simply the product of the diagonal entries, and a matrix is singular (non-invertible) if and only if its determinant is zero.[9] Computationally, determinants are often calculated via row reduction to upper triangular form, accounting for sign changes from row swaps and scaling factors, though direct expansion by minors or cofactor methods is used for small matrices.[3] Geometrically, the absolute value of the determinant measures the scaling factor of volumes (or areas in 2D, lengths in 1D) under the linear transformation defined by the matrix, while the sign indicates whether the transformation preserves or reverses orientation.[1] Applications span solving systems of linear equations via Cramer's rule, computing inverses and adjugates, analyzing eigenvalues through the characteristic polynomial, and, in physics, describing transformations such as rotations and scalings.
Basic Concepts
Two-by-two matrices
The determinant of a 2×2 matrix arises naturally in the context of solving systems of linear equations, where it indicates whether the system has a unique solution. For instance, consider the system ax + by = e and cx + dy = f; the condition ad - bc \neq 0 ensures the coefficient matrix is invertible, allowing a unique solution via methods like Cramer's rule. This scalar value, denoted \det(A) or |A|, encapsulates essential information about the matrix's behavior in linear transformations.[10] For a 2×2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, the determinant is defined as \det(A) = ad - bc. This formula provides a straightforward computation for small matrices and serves as the foundation for generalizations to larger dimensions. To illustrate, consider the matrix \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}; its determinant is 1 \cdot 4 - 2 \cdot 3 = 4 - 6 = -2. Similarly, for \begin{pmatrix} 5 & 0 \\ 0 & 3 \end{pmatrix}, \det(A) = 5 \cdot 3 - 0 \cdot 0 = 15. These examples highlight how the determinant can be positive, negative, or zero, reflecting different geometric and algebraic properties.[11][12] Geometrically, the determinant of a 2×2 matrix represents the signed area of the parallelogram formed by its column vectors in the plane. If the columns are vectors \mathbf{u} = (a, c) and \mathbf{v} = (b, d), then |\det(A)| gives the area of this parallelogram, while the sign indicates the orientation: positive for counterclockwise and negative for clockwise. This interpretation connects the algebraic formula to vector geometry, where a zero determinant implies the vectors are linearly dependent and span only a line, yielding zero area.[13]
Initial properties
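The 2×2 computations above are simple enough to check directly; the following is a minimal pure-Python sketch (the function name det2 is illustrative, not standard):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Worked examples: the determinant can be negative, positive, or zero.
print(det2(1, 2, 3, 4))  # 1*4 - 2*3 = -2
print(det2(5, 0, 0, 3))  # 5*3 - 0*0 = 15
print(det2(1, 2, 2, 4))  # 0: columns (1, 2) and (2, 4) are linearly dependent

# Elementary row operations (cf. the properties below):
assert det2(1, 2, 3 + 7 * 1, 4 + 7 * 2) == det2(1, 2, 3, 4)  # row replacement: unchanged
assert det2(1, 2, 7 * 3, 7 * 4) == 7 * det2(1, 2, 3, 4)      # row scaling: multiplied by k
assert det2(3, 4, 1, 2) == -det2(1, 2, 3, 4)                 # row swap: sign flip
```

A zero result signals exactly the degenerate case described above: the column vectors span only a line, so the parallelogram has zero area.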
Building upon the determinant formula for 2×2 matrices, \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc, several fundamental properties arise directly from algebraic expansion after applying elementary row or column operations. These properties are essential for computing determinants and understanding their behavior under matrix manipulations.[14] One key property is that the determinant remains unchanged when a multiple of one row is added to another row (or similarly for columns). To see this, consider the matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} and add k times the first row to the second row, yielding B = \begin{pmatrix} a & b \\ c + ka & d + kb \end{pmatrix}. Expanding the determinant gives \det B = a(d + kb) - b(c + ka) = ad + kab - bc - kab = ad - bc = \det A. A symmetric calculation holds for column operations, confirming invariance under this type of shear transformation.[14] Another property is that scaling a single row (or column) by a nonzero scalar k multiplies the overall determinant by k. For the same 2×2 matrix A, scaling the second row by k produces C = \begin{pmatrix} a & b \\ kc & kd \end{pmatrix}, with \det C = a(kd) - b(kc) = k(ad - bc) = k \det A. This linearity in each row (or column) extends the multilinearity inherent in the determinant's definition.[14] Swapping two rows (or columns) multiplies the determinant by -1, reflecting the antisymmetric nature of the determinant. For A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \det A = 1 \cdot 4 - 2 \cdot 3 = -2. Swapping the rows gives D = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix}, and \det D = 3 \cdot 2 - 4 \cdot 1 = 2 = - \det A. 
This sign reversal upon interchange is a direct consequence of the expansion formula and holds analogously for columns.[14] These operational properties stem from the determinant's characterization as the unique alternating multilinear form on the columns (or rows) of an n \times n matrix such that the determinant of the identity matrix is 1. Alternating means it changes sign under row or column swaps, while multilinearity ensures additivity and homogeneity in each argument separately; this uniqueness guarantees that the 2×2 formula extends consistently to higher dimensions without ambiguity.[15] These initial properties facilitate efficient determinant computation via row reduction and tie into broader behaviors, such as multiplicativity for matrix products, where \det(AB) = \det A \cdot \det B.[14]
Geometric Interpretation
Area and orientation in 2D
In two dimensions, the determinant of a 2×2 matrix whose columns (or rows) are the components of two vectors \mathbf{u} = (u_1, u_2) and \mathbf{v} = (v_1, v_2) in \mathbb{R}^2 provides the signed area of the parallelogram formed by these vectors as adjacent sides.[16] Specifically, this signed area is given by \det\begin{pmatrix} u_1 & v_1 \\ u_2 & v_2 \end{pmatrix} = u_1 v_2 - u_2 v_1, where the absolute value |\det| yields the unsigned area, representing the geometric scaling factor under the linear transformation defined by the matrix.[16] The sign of the determinant encodes the orientation of the parallelogram relative to the standard basis of the plane. A positive determinant indicates a counterclockwise orientation of the vectors, aligning with the right-hand rule convention, while a negative determinant signifies a clockwise orientation, the effect of reflecting the plane across a line.[16] This signed interpretation distinguishes the determinant from mere area computation, capturing both magnitude and directional sense in the plane. For instance, consider the standard basis vectors \mathbf{e}_1 = (1, 0) and \mathbf{e}_2 = (0, 1), forming the matrix \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} with determinant 1, corresponding to a unit square of positive (counterclockwise) orientation.[16] Swapping the vectors to \mathbf{e}_2 and \mathbf{e}_1 yields the matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} with determinant -1, indicating clockwise orientation and the same unsigned area of 1.[16] This geometric role connects directly to the two-dimensional cross product, where the scalar \mathbf{u} \times \mathbf{v} = u_1 v_2 - u_2 v_1 matches the determinant, and its absolute value |\mathbf{u} \times \mathbf{v}| equals the area of the parallelogram spanned by \mathbf{u} and \mathbf{v}.[17]
Volume and orientation in higher dimensions
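Both the planar signed area and its three-dimensional analogue, the signed volume, can be made concrete with a short pure-Python sketch (det2 and det3 are illustrative helper names; det3 uses the rule of Sarrus):

```python
def det2(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v."""
    return u[0] * v[1] - u[1] * v[0]

def det3(M):
    """3x3 determinant via the rule of Sarrus (signed volume of a parallelepiped)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

# Unit square: a counterclockwise basis gives +1, swapping the vectors gives -1.
print(det2((1, 0), (0, 1)))  # 1
print(det2((0, 1), (1, 0)))  # -1

# Unit cube (right-handed) vs. a row swap (left-handed), and a volume scaling.
print(det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 1
print(det3([[1, 0, 0], [0, 0, 1], [0, 1, 0]]))  # -1
print(det3([[2, 0, 0], [0, 3, 0], [0, 0, 1]]))  # 6: volumes are scaled by |det| = 6
```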
In higher dimensions, the geometric role of the determinant generalizes the signed area interpretation from two dimensions to the signed volume of parallelepipeds in \mathbb{R}^n. For a set of n vectors \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n in \mathbb{R}^n, the determinant of the matrix A with these vectors as columns equals the signed volume of the parallelepiped they span.[16] This volume is positive if the vectors form a positively oriented basis aligned with the standard orientation of \mathbb{R}^n, and negative if they form a negatively oriented basis, reflecting a reversal like a reflection transformation.[18] The sign of the determinant thus determines the orientation of the basis: a positive value indicates the same handedness as the standard basis, while a negative value signals an opposite handedness.[19] For instance, in three dimensions, the matrix \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} has determinant 1, corresponding to the standard right-handed orientation of the unit cube parallelepiped.[20] Swapping the second and third rows yields \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, with determinant -1, indicating a left-handed orientation due to the odd permutation.[21] Under a linear transformation represented by an invertible matrix A, the absolute value |\det(A)| acts as the scaling factor for volumes: any n-dimensional volume in the domain is multiplied by |\det(A)| to obtain the image volume, regardless of orientation.[16] This factor is 1 when A preserves volume, as for rotations and shears, and greater than 1 when A expands volumes, as for a scaling by a factor larger than 1.[18]
Formal Definition
Leibniz formula
The Leibniz formula provides an explicit expression for the determinant of an n \times n matrix A = (a_{i,j}) as a signed sum of products of its entries, taken one from each row and each column.[22] This formula arises from the historical work of Gottfried Wilhelm Leibniz in the late 17th century, who first conceptualized determinants in the context of solving linear systems.[23] The formula is given by \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}, where S_n denotes the set of all permutations of \{1, 2, \dots, n\}, which has n! elements, and \operatorname{sgn}(\sigma) is the sign of the permutation \sigma, equal to +1 if \sigma is even (composed of an even number of transpositions) and -1 if odd.[7] Each term in the sum corresponds to a permutation \sigma, forming the product of entries a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)} along the "permuted diagonal," with the sign reflecting the permutation's parity to account for orientation.[24] To illustrate, consider the 3 \times 3 matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}. The permutations in S_3 and their contributions are:
- Identity \sigma = (1,2,3), even: \operatorname{sgn}(\sigma) = +1, product 1 \cdot 1 \cdot 0 = 0, term 0,
- \sigma = (1,3,2), odd: \operatorname{sgn}(\sigma) = -1, product 1 \cdot 4 \cdot 6 = 24, term -24,
- \sigma = (2,1,3), odd: \operatorname{sgn}(\sigma) = -1, product 2 \cdot 0 \cdot 0 = 0, term 0,
- \sigma = (2,3,1), even: \operatorname{sgn}(\sigma) = +1, product 2 \cdot 4 \cdot 5 = 40, term +40,
- \sigma = (3,1,2), even: \operatorname{sgn}(\sigma) = +1, product 3 \cdot 0 \cdot 6 = 0, term 0,
- \sigma = (3,2,1), odd: \operatorname{sgn}(\sigma) = -1, product 3 \cdot 1 \cdot 5 = 15, term -15.
Summing the six terms gives \det(A) = 0 - 24 + 0 + 40 + 0 - 15 = 1.
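The expansion above can be cross-checked by brute force. The following is a sketch of the Leibniz formula in pure Python (sgn and det_leibniz are illustrative names); it is practical only for small n, since the sum runs over all n! permutations:

```python
from itertools import permutations

def sgn(p):
    """Sign of a permutation (one-line form, 0-indexed): +1 if even, -1 if odd."""
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Determinant as the signed sum over all n! permutations of the columns."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for i in range(n):
            term *= A[i][p[i]]  # one entry from each row and each column
        total += term
    return total

A = [[1, 2, 3],
     [0, 1, 4],
     [5, 6, 0]]
print(det_leibniz(A))  # 1: the nonzero terms are -24 + 40 - 15
```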