
Determinant

In linear algebra, the determinant is a scalar (generally real or complex) value that can be defined for square matrices, for linear endomorphisms between finite-dimensional vector spaces of the same dimension, and for ordered families of n vectors in an n-dimensional vector space relative to a given basis. The determinant encodes information about invertibility, linear independence, orientation, and volume; it can often be thought of as an "oriented volume" that corresponds to the factor by which a linear map changes the volume of an elementary parallelotope, with the sign of the determinant giving information about how the linear map changes orientations, and the determinant being zero if and only if the linear map is non-invertible and "squeezes" parallelotopes to lower dimensions. For a square matrix, it is computed from the entries and encodes essential information about the matrix, including whether it is invertible and the factor by which the associated linear transformation scales volumes in the corresponding Euclidean space.

Formally, the determinant can be defined axiomatically through its behavior under elementary row operations: it remains unchanged when a multiple of one row is added to another, multiplies by a scalar when a row is scaled by that factor, changes sign when two rows are swapped, and equals 1 for the identity matrix. Alternatively, it admits an explicit formula known as the Leibniz formula, which sums over all permutations of the matrix indices, with signs determined by the parity of each permutation, multiplied by the product of the corresponding entries. Specifically, \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i\sigma(i)}.

The concept of determinants emerged in the late 17th century with Gottfried Wilhelm Leibniz and, independently, Seki Takakazu, who studied values associated with arrays of numbers for solving systems of equations, and was advanced in the 1750s by Gabriel Cramer, who introduced Cramer's rule for linear systems, though without full proofs for higher dimensions.

Key properties of determinants include multiplicativity, where the determinant of a product of matrices equals the product of their determinants, and the fact that the determinant of a matrix equals that of its transpose. For triangular matrices, the determinant is simply the product of the diagonal entries, and a matrix is singular (non-invertible) if and only if its determinant is zero. Computationally, determinants are often calculated via row reduction to upper triangular form, accounting for sign changes from row swaps and scaling factors, though direct expansion by minors or cofactor methods is used for small matrices. Geometrically, the absolute value of the determinant measures the scaling factor of volumes (or areas in 2D, lengths in 1D) under the linear transformation defined by the matrix, while the sign indicates whether the transformation preserves or reverses orientation. Applications span solving systems of linear equations via Cramer's rule, computing inverses and adjugates, analyzing eigenvalues through the characteristic polynomial, and physical transformations like rotations and scalings.

Basic Concepts

Two-by-two matrices

The determinant of a 2×2 matrix arises naturally in the context of solving systems of linear equations, where it indicates whether the system has a unique solution. For instance, consider the system ax + by = e and cx + dy = f; the condition ad - bc \neq 0 ensures the coefficient matrix is invertible, allowing a unique solution via methods like Cramer's rule. This scalar value, denoted \det(A) or |A|, encapsulates essential information about the matrix's behavior in linear transformations.

For a 2×2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, the determinant is defined as \det(A) = ad - bc. This formula provides a straightforward computation for small matrices and serves as the foundation for generalizations to larger dimensions. To illustrate, consider the matrix \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}; its determinant is 1 \cdot 4 - 2 \cdot 3 = 4 - 6 = -2. Similarly, for \begin{pmatrix} 5 & 0 \\ 0 & 3 \end{pmatrix}, \det(A) = 5 \cdot 3 - 0 \cdot 0 = 15. These examples highlight how the determinant can be positive, negative, or zero, reflecting different geometric and algebraic properties.

Geometrically, the determinant of a 2×2 matrix represents the signed area of the parallelogram formed by its column vectors in the plane. If the columns are vectors \mathbf{u} = (a, c) and \mathbf{v} = (b, d), then |\det(A)| gives the area of this parallelogram, while the sign indicates the orientation: positive for counterclockwise and negative for clockwise. This interpretation connects the algebraic formula to vector geometry, where a zero determinant implies the vectors are linearly dependent and span only a line, yielding zero area.
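As a quick check, the 2×2 formula can be coded directly and compared against a general-purpose routine; a minimal sketch, assuming NumPy is available (the helper name det2 is illustrative):

```python
import numpy as np

def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]] via the ad - bc formula."""
    return a * d - b * c

# Examples from the text: det = -2 and det = 15.
print(det2(1, 2, 3, 4))   # -2
print(det2(5, 0, 0, 3))   # 15

# Cross-check against numpy's general-purpose routine.
print(np.linalg.det(np.array([[1, 2], [3, 4]])))  # approximately -2.0
```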

Initial properties

Building upon the determinant formula for 2×2 matrices, \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc, several fundamental properties arise directly from algebraic expansion after applying elementary row or column operations. These properties are essential for computing determinants and understanding their behavior under matrix manipulations.

One key property is that the determinant remains unchanged when a multiple of one row is added to another row (or similarly for columns). To see this, consider the matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} and add k times the first row to the second row, yielding B = \begin{pmatrix} a & b \\ c + ka & d + kb \end{pmatrix}. Expanding the determinant gives \det B = a(d + kb) - b(c + ka) = ad + kab - bc - kab = ad - bc = \det A. A symmetric calculation holds for column operations, confirming invariance under this type of transformation.

Another property is that scaling a single row (or column) by a nonzero scalar k multiplies the overall determinant by k. For the same 2×2 matrix A, scaling the second row by k produces C = \begin{pmatrix} a & b \\ kc & kd \end{pmatrix}, with \det C = a(kd) - b(kc) = k(ad - bc) = k \det A. This linearity in each row (or column) reflects the multilinearity inherent in the determinant's definition.

Swapping two rows (or columns) multiplies the determinant by -1, reflecting the antisymmetric nature of the determinant. For A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \det A = 1 \cdot 4 - 2 \cdot 3 = -2. Swapping the rows gives D = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix}, and \det D = 3 \cdot 2 - 4 \cdot 1 = 2 = -\det A. This sign reversal upon interchange is a direct consequence of the expansion formula and holds analogously for columns.

These operational properties stem from the determinant's characterization as the unique alternating multilinear function on the columns (or rows) of an n \times n matrix such that the determinant of the identity matrix is 1. Alternating means it changes sign under row or column swaps, while multilinearity ensures additivity and homogeneity in each argument separately; this uniqueness guarantees that the 2×2 formula extends consistently to higher dimensions without ambiguity. These initial properties facilitate efficient determinant computation via row reduction and tie into broader behaviors, such as multiplicativity for products, where \det(AB) = \det A \cdot \det B.
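These row-operation properties are easy to confirm numerically; a minimal sketch assuming NumPy, with random matrices standing in for the general case:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Row replacement: adding k times row 0 to row 1 leaves det unchanged.
B = A.copy(); B[1] += 2.5 * B[0]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

# Row scaling: multiplying row 2 by k scales det by k.
C = A.copy(); C[2] *= 3.0
assert np.isclose(np.linalg.det(C), 3.0 * np.linalg.det(A))

# Row swap: interchanging rows 0 and 3 flips the sign.
D = A.copy(); D[[0, 3]] = D[[3, 0]]
assert np.isclose(np.linalg.det(D), -np.linalg.det(A))

# Multiplicativity: det(AE) = det(A) det(E).
E = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(A @ E), np.linalg.det(A) * np.linalg.det(E))
```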

Geometric Interpretation

Area and orientation in two dimensions

In two dimensions, the determinant of a 2×2 matrix whose columns (or rows) are the components of two vectors \mathbf{u} = (u_1, u_2) and \mathbf{v} = (v_1, v_2) in \mathbb{R}^2 provides the signed area of the parallelogram formed by these vectors as adjacent sides. Specifically, this signed area is given by \det\begin{pmatrix} u_1 & v_1 \\ u_2 & v_2 \end{pmatrix} = u_1 v_2 - u_2 v_1, where the absolute value |\det| yields the unsigned area, representing the geometric scaling factor under the linear transformation defined by the matrix.

The sign of the determinant encodes the orientation of the ordered pair of vectors relative to the standard orientation of the plane. A positive determinant indicates a counterclockwise ordering of the vectors, aligning with the standard convention, while a negative determinant signifies a clockwise ordering, effectively reflecting the orientation. This signed interpretation distinguishes the determinant from mere area computation, capturing both magnitude and directional sense in the plane.

For instance, consider the vectors \mathbf{e}_1 = (1, 0) and \mathbf{e}_2 = (0, 1), forming the matrix \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} with determinant 1, corresponding to a unit square of positive (counterclockwise) orientation. Swapping the vectors to \mathbf{e}_2 and \mathbf{e}_1 yields the matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} with determinant -1, indicating reversed orientation and the same unsigned area of 1.

This geometric role connects directly to the two-dimensional cross product, where the scalar \mathbf{u} \times \mathbf{v} = u_1 v_2 - u_2 v_1 matches the determinant, and its absolute value |\mathbf{u} \times \mathbf{v}| equals the area of the parallelogram spanned by \mathbf{u} and \mathbf{v}.
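The signed-area interpretation can be checked directly; a small sketch assuming NumPy (signed_area is an illustrative helper, not a library function):

```python
import numpy as np

def signed_area(u, v):
    """Signed area of the parallelogram with adjacent sides u, v in R^2."""
    return u[0] * v[1] - u[1] * v[0]

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(signed_area(e1, e2))   # +1.0: counterclockwise (positive) orientation
print(signed_area(e2, e1))   # -1.0: swapping the vectors reverses orientation

# The same value via the determinant of the matrix with u, v as columns.
print(np.linalg.det(np.column_stack([e1, e2])))   # 1.0
```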

Volume and orientation in higher dimensions

In higher dimensions, the geometric role of the determinant generalizes the signed area interpretation from two dimensions to the signed volume of parallelotopes in \mathbb{R}^n. For a set of n vectors \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n in \mathbb{R}^n, the determinant of the matrix A with these vectors as columns equals the signed volume of the parallelepiped they span. This volume is positive if the vectors form a positively oriented basis aligned with the standard orientation of \mathbb{R}^n, and negative if they form a negatively oriented basis, reflecting an orientation reversal such as a reflection.

The sign of the determinant thus determines the orientation of the basis: a positive value indicates the same handedness as the standard basis, while a negative value signals an opposite handedness. For instance, in three dimensions, the matrix \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} has determinant 1, corresponding to the standard right-handed orientation of the unit cube parallelepiped. Swapping the second and third rows yields \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, with determinant -1, indicating a left-handed orientation due to the odd permutation.

Under a linear transformation represented by a matrix A, the absolute value |\det(A)| acts as the scaling factor for volumes: any n-dimensional volume in the domain is multiplied by |\det(A)| to obtain the image volume, describing the geometric distortion up to orientation. This factor is 1 if A is volume-preserving (for example, a rotation) and greater than 1 if A expands volumes, as in dilation or stretching transformations.
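A short numerical illustration of orientation and volume scaling, assuming NumPy:

```python
import numpy as np

# Signed volume of the parallelepiped spanned by the columns.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])   # rows 2 and 3 of the identity swapped
print(np.linalg.det(A))           # -1.0: left-handed orientation, unit volume

# |det| as the volume scaling factor of a linear map.
T = np.diag([2.0, 3.0, 0.5])      # scales the axes by 2, 3, and 1/2
print(abs(np.linalg.det(T)))      # 3.0: the unit cube maps to a box of volume 3
```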

Formal Definition

Leibniz formula

The Leibniz formula provides an explicit expression for the determinant of an n \times n matrix A = (a_{i,j}) as a signed sum of products of its entries, taken one from each row and each column. This formula arises from the historical work of Gottfried Wilhelm Leibniz in the late 17th century, who first conceptualized determinants in the context of solving linear systems. The formula is given by \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}, where S_n denotes the set of all permutations of \{1, 2, \dots, n\}, which has n! elements, and \operatorname{sgn}(\sigma) is the sign of the permutation \sigma, equal to +1 if \sigma is even (composed of an even number of transpositions) and -1 if odd. Each term in the sum corresponds to a permutation \sigma, forming the product of entries a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)} along the "permuted diagonal," with the sign reflecting the permutation's parity to account for orientation. To illustrate, consider the 3 \times 3 matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}. The permutations in S_3 and their contributions are:
  • Identity \sigma = (1,2,3), even: \operatorname{sgn}(\sigma) = +1, product 1 \cdot 1 \cdot 0 = 0, term 0,
  • \sigma = (1,3,2), odd: \operatorname{sgn}(\sigma) = -1, product 1 \cdot 4 \cdot 6 = 24, term -24,
  • \sigma = (2,1,3), odd: \operatorname{sgn}(\sigma) = -1, product 2 \cdot 0 \cdot 0 = 0, term 0,
  • \sigma = (2,3,1), even: \operatorname{sgn}(\sigma) = +1, product 2 \cdot 4 \cdot 5 = 40, term +40,
  • \sigma = (3,1,2), even: \operatorname{sgn}(\sigma) = +1, product 3 \cdot 0 \cdot 6 = 0, term 0,
  • \sigma = (3,2,1), odd: \operatorname{sgn}(\sigma) = -1, product 3 \cdot 1 \cdot 5 = 15, term -15.
Summing these yields \det(A) = 0 - 24 + 0 + 40 + 0 - 15 = 1. This matches the simpler 2 \times 2 case, where the formula reduces to ad - bc for \begin{pmatrix} a & b \\ c & d \end{pmatrix}.

The Leibniz formula satisfies the defining properties of the determinant as an alternating multilinear form. Multilinearity holds because the sum is linear in each row: scaling the k-th row by a scalar c multiplies every product term involving that row by c, and thus the entire determinant by c, and adding rows distributes similarly over the sum. The alternating property follows from the signs: interchanging two rows corresponds to composing each permutation with a transposition, which flips the parity of \sigma and thus the sign of every term, negating the determinant; if two rows are identical, half the terms cancel with their counterparts, yielding zero. These properties uniquely characterize the determinant up to normalization.
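The Leibniz formula translates almost verbatim into code, though the n! terms make it practical only for tiny matrices; a sketch using only the Python standard library (leibniz_det and sign are illustrative names):

```python
import itertools
import math

def sign(perm):
    """Parity of a permutation given as a tuple of 0-based indices."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    """Determinant via the Leibniz formula; O(n * n!) work, exact for ints."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

A = [[1, 2, 3],
     [0, 1, 4],
     [5, 6, 0]]
print(leibniz_det(A))  # 1, matching the term-by-term expansion above
```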

Extension to n × n matrices

The axiomatic characterization of the determinant provides an abstract foundation for extending the concept from small matrices to arbitrary n \times n matrices over a field, such as the real numbers \mathbb{R} or complex numbers \mathbb{C}. Specifically, the determinant is defined as the unique function \det: V^n \to F, where V is an n-dimensional vector space over the field F, that satisfies three key properties: multilinearity in the arguments (i.e., linear in each column when the others are fixed), alternation (i.e., \det = 0 if any two columns are identical), and normalization (\det(I) = 1, where I is the identity matrix). This axiomatic approach ensures the determinant is well-defined for any square matrix with entries in F, capturing the signed volume of the parallelepiped spanned by the column vectors without relying on explicit summation formulas. The uniqueness theorem states that any function satisfying these axioms coincides with the standard determinant, providing a rigorous justification for its extension to higher dimensions.

For instance, consider the 2 \times 2 matrix \begin{pmatrix} a & b \\ c & d \end{pmatrix}. Its determinant ad - bc is multilinear, as scaling one column by a scalar \lambda \in F scales the value by \lambda while keeping the other fixed; it is alternating, vanishing if the two columns are identical (i.e., a = b and c = d); and it equals 1 for the identity matrix. This verification aligns with the axioms, confirming the extension's consistency for n = 2. The Leibniz formula realizes these axioms explicitly, but the axiomatic view emphasizes their universal applicability across fields.

Core Properties

Characterization and consequences

The determinant of an n \times n matrix A can be characterized as the unique function \det: M_n(\mathbb{R}) \to \mathbb{R} that satisfies three axioms: multilinearity in the rows, the alternating property (i.e., \det(A) = 0 if any two rows are identical), and normalization \det(I_n) = 1, where I_n is the identity matrix. These axioms fully determine the determinant, distinguishing it from other multilinear forms on matrices.

A key consequence is that \det(A) = 0 if and only if A is singular, meaning its columns (or rows) are linearly dependent. Conversely, if \det(A) \neq 0, then A is invertible. This criterion provides a direct test for invertibility without computing the inverse explicitly. From multilinearity, the determinant exhibits homogeneity: scaling the entire matrix by a scalar k yields \det(kA) = k^n \det(A), since each of the n rows is scaled by k. For example, consider the diagonal matrix \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}. By multilinearity and the alternating property (which forces off-diagonal contributions to vanish), the determinant is the product of the diagonal entries: \det = 1 \cdot 2 \cdot 3 = 6. These axioms also motivate the multiplicativity \det(AB) = \det(A) \det(B), which follows from interpreting the determinant as a change-of-basis scaling factor and will be derived in detail later.
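A brief numerical confirmation of these consequences, assuming NumPy:

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])
n, k = 3, 2.0

# Diagonal matrix: determinant is the product of the diagonal entries.
assert np.isclose(np.linalg.det(A), 6.0)

# Homogeneity: scaling all n rows by k scales the determinant by k**n.
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))  # 8 * 6 = 48

# Singularity test: repeated rows force linear dependence and zero determinant.
S = np.array([[1.0, 2.0], [2.0, 4.0]])
assert np.isclose(np.linalg.det(S), 0.0)
```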

Transpose and multiplicativity

One fundamental property of the determinant is that it remains unchanged under transposition of the matrix. For an n \times n matrix A, \det(A^T) = \det(A). This follows directly from the Leibniz formula, which expresses the determinant as a sum over all permutations \sigma in the symmetric group S_n: \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}. For the transpose A^T, the entries are a^T_{i,j} = a_{j,i}, so \det(A^T) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i),i}. Relabeling the summation index by the inverse permutation \tau = \sigma^{-1} (noting that \operatorname{sgn}(\tau) = \operatorname{sgn}(\sigma^{-1}) = \operatorname{sgn}(\sigma)), the product becomes \prod_{i=1}^n a_{i,\tau(i)}, which is precisely the Leibniz expansion for \det(A). Thus, the sums coincide, proving the equality.

Another core property is the multiplicativity of the determinant: for any n \times n matrices A and B, \det(AB) = \det(A) \det(B). This arises from the multilinearity of the determinant, which states that \det is linear in each column (or row) when the others are fixed. Specifically, if the columns of AB are expressed as AB = [A \mathbf{b}_1, \dots, A \mathbf{b}_n] where \mathbf{b}_j are the columns of B, multilinearity allows expanding \det(AB) as a sum of terms each involving \det(A) scaled by entries from B, effectively factoring out \det(A) and yielding the product form \det(A) \det(B). This property holds over any commutative ring and is foundational for matrix algebra.

The multiplicativity endows the determinant with group-theoretic significance. The map \det: \mathrm{GL}(n, \mathbb{R}) \to \mathbb{R}^\times, where \mathrm{GL}(n, \mathbb{R}) is the general linear group of invertible n \times n real matrices under multiplication and \mathbb{R}^\times is the multiplicative group of nonzero reals, is a homomorphism. It sends the identity matrix to 1 and preserves the group operation via \det(AB) = \det(A) \det(B); the kernel is the special linear group \mathrm{SL}(n, \mathbb{R}) of matrices with determinant 1. This homomorphism is surjective, as scalar matrices achieve any nonzero real value.

To illustrate multiplicativity, consider 2 \times 2 matrices A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} and B = \begin{pmatrix} e & f \\ g & h \end{pmatrix}. Then \det(A) = ad - bc and \det(B) = eh - fg. The product AB = \begin{pmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{pmatrix} has determinant (ae + bg)(cf + dh) - (af + bh)(ce + dg) = (ad - bc)(eh - fg), verifying \det(AB) = \det(A) \det(B).
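Both properties are easy to spot-check numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Transpose invariance: det(A^T) = det(A).
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# Multiplicativity: det(AB) = det(A) det(B), i.e. det is a group
# homomorphism GL(n, R) -> R^x with kernel SL(n, R).
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```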

Laplace expansion and adjugate

The Laplace expansion, also known as cofactor expansion, provides a recursive method for computing the determinant of an n \times n matrix by expressing it as a weighted sum of the determinants of smaller (n-1) \times (n-1) submatrices. This technique, developed by Pierre-Simon Laplace in the late 18th century, allows the determinant to be calculated by selecting any fixed row or column and summing the products of the matrix entries in that row (or column) with their corresponding signed minors.

For an n \times n matrix A = (a_{ij}), the minor M_{ij} is defined as the determinant of the submatrix obtained by deleting the i-th row and j-th column from A. The cofactor C_{ij} is then given by C_{ij} = (-1)^{i+j} M_{ij}, which incorporates the sign alternation necessary to preserve the antisymmetric properties of the determinant. The Laplace expansion along the i-th row states that \det(A) = \sum_{j=1}^n a_{ij} C_{ij} = \sum_{j=1}^n (-1)^{i+j} a_{ij} \det(M_{ij}), and a similar formula holds for expansion along any fixed column j. This expansion is valid for any choice of row or column, making it a flexible tool for computation, though its recursive nature leads to O(n!) complexity for large n, rendering it inefficient compared to modern methods.

The cofactors also play a central role in defining the adjugate matrix (or classical adjoint), denoted \operatorname{adj}(A), which is the transpose of the cofactor matrix: \operatorname{adj}(A) = (C_{ji})_{i,j=1}^n. A fundamental property is that A \cdot \operatorname{adj}(A) = \operatorname{adj}(A) \cdot A = \det(A) I_n, where I_n is the n \times n identity matrix; this follows from expanding the entries of the product using the Laplace formula along rows and columns. Consequently, if A is invertible (i.e., \det(A) \neq 0), the inverse is given explicitly by A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A). This formula provides a direct algebraic expression for the inverse in terms of determinants, highlighting the deep connection between determinants and matrix invertibility, though it is primarily theoretical for dimensions beyond small n due to computational cost.

To illustrate, consider the 3 \times 3 matrix A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}. Expanding \det(A) along the first row yields \det(A) = a C_{11} + b C_{12} + c C_{13}, where C_{11} = (-1)^{1+1} \det\begin{pmatrix} e & f \\ h & i \end{pmatrix} = ei - fh, C_{12} = (-1)^{1+2} \det\begin{pmatrix} d & f \\ g & i \end{pmatrix} = -(di - fg), and C_{13} = (-1)^{1+3} \det\begin{pmatrix} d & e \\ g & h \end{pmatrix} = dh - eg. Substituting these gives the standard expansion \det(A) = a(ei - fh) - b(di - fg) + c(dh - eg). The adjugate \operatorname{adj}(A) would then be the transpose of the matrix of these cofactors, enabling computation of A^{-1} via the formula above if \det(A) \neq 0.
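The recursive expansion and the adjugate formula can be implemented directly for small matrices; a sketch using only the Python standard library, with exact rational arithmetic for the inverse (det_laplace and adjugate are illustrative names):

```python
from fractions import Fraction

def det_laplace(A):
    """Recursive cofactor expansion along the first row (O(n!) work)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0, col j
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

def adjugate(A):
    """Transpose of the cofactor matrix, so that A @ adj(A) = det(A) I."""
    n = len(A)
    cof = [[(-1) ** (i + j) * det_laplace(
                [r[:j] + r[j + 1:] for k, r in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # transpose

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
d = det_laplace(A)                                    # 8
inv = [[Fraction(x, d) for x in row] for row in adjugate(A)]  # adj(A)/det(A)
print(d, inv[0])   # 8 [Fraction(5, 8), Fraction(-1, 4), Fraction(1, 8)]
```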

Block matrices and special theorems

Block matrices, which are partitioned into submatrices or blocks, allow for specialized formulas to compute their determinants, particularly when the blocks satisfy certain invertibility conditions. Consider a block matrix of the form M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, where A is a square block of size p \times p, B is p \times q, C is q \times p, and D is q \times q. When A is invertible, the determinant of M is given by \det(M) = \det(A) \cdot \det(D - C A^{-1} B). This formula is derived using block Gaussian elimination and properties of the Schur complement, and holds over fields where the relevant inverses exist.

A notable special theorem for block matrices is the Weinstein–Aronszajn identity (also known as Sylvester's determinant theorem), which relates the determinants of matrices involving products of rectangular blocks. For matrices A of size m \times n and B of size n \times m, the identity states that \det(I_m + A B) = \det(I_n + B A), where I_k denotes the k \times k identity matrix. First stated by James Joseph Sylvester in 1857 without proof, this result has applications in linear algebra and matrix theory, and multiple proofs exist, for example via block matrix factorizations.

Simple cases illustrate these ideas. For a block diagonal matrix M = \begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix}, where the off-diagonal blocks are zero, the determinant simplifies to \det(M) = \det(A) \cdot \det(D). Similarly, for a block triangular matrix M = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} or M = \begin{pmatrix} A & 0 \\ C & D \end{pmatrix}, the determinant is also \det(M) = \det(A) \cdot \det(D), as the off-diagonal blocks do not affect the product of the diagonal determinants. More generally, the determinant of any triangular matrix (block or scalar) equals the product of the determinants of its diagonal blocks, or of its diagonal entries in the scalar case.

These formulas connect directly to the Schur complement, defined for the block matrix M above as S = D - C A^{-1} B when A is invertible. The block determinant formula expresses \det(M) in terms of \det(A) and \det(S), making Schur complements essential in numerical linear algebra for solving systems, analyzing positive definiteness, and performing matrix factorizations like Cholesky or LU decompositions on block structures.
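Both the Schur-complement formula and the Weinstein–Aronszajn identity lend themselves to quick numerical verification; a sketch assuming NumPy, with randomly generated blocks:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 2
A = rng.standard_normal((p, p)) + 3 * np.eye(p)  # keep A comfortably invertible
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q))

M = np.block([[A, B], [C, D]])
schur = D - C @ np.linalg.inv(A) @ B             # Schur complement of A in M
assert np.isclose(np.linalg.det(M),
                  np.linalg.det(A) * np.linalg.det(schur))

# Weinstein-Aronszajn / Sylvester: det(I_m + XY) = det(I_n + YX).
X = rng.standard_normal((3, 5))
Y = rng.standard_normal((5, 3))
assert np.isclose(np.linalg.det(np.eye(3) + X @ Y),
                  np.linalg.det(np.eye(5) + Y @ X))
```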

Connections to Linear Algebra

Eigenvalues and characteristic polynomial

The characteristic polynomial of an n \times n matrix A over the complex numbers is defined as p_A(\lambda) = \det(\lambda I_n - A), where I_n is the n \times n identity matrix. This is a monic polynomial of degree n whose roots are precisely the eigenvalues of A, counting algebraic multiplicities. An alternative expression for the characteristic polynomial arises from the theory of exterior powers: p_A(\lambda) = \sum_{k=0}^n (-1)^k \operatorname{tr}(\wedge^k A) \lambda^{n-k}, where \wedge^k A denotes the induced action of A on the k-th exterior power of the underlying vector space, and \operatorname{tr} is the trace. The coefficients in this expansion are the elementary symmetric functions of the eigenvalues of A.

By Vieta's formulas applied to the characteristic polynomial, if \lambda_1, \dots, \lambda_n are the eigenvalues of A (with multiplicity), then their sum equals the trace of A, and their product equals \det(A). In particular, \det(A) = \prod_{i=1}^n \lambda_i. This establishes a direct computational link between the determinant and the eigenvalues, showing that the determinant measures the (signed) volume scaling factor associated with the product of the stretching factors along the principal directions defined by the eigenvectors.

For a concrete illustration, consider a 2 \times 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. The characteristic polynomial is p_A(\lambda) = \det\begin{pmatrix} \lambda - a & -b \\ -c & \lambda - d \end{pmatrix} = \lambda^2 - (a+d)\lambda + (ad - bc). The roots \lambda_1 and \lambda_2 satisfy \lambda_1 + \lambda_2 = a + d = \operatorname{tr}(A) and \lambda_1 \lambda_2 = ad - bc = \det(A), verifying that the determinant equals the product of the eigenvalues.
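A short check that the trace and determinant match the sum and product of the eigenvalues, assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals = np.linalg.eigvals(A)
# Product of eigenvalues = det(A); sum of eigenvalues = trace(A).
assert np.isclose(np.prod(eigvals), np.linalg.det(A))   # 5.0
assert np.isclose(np.sum(eigvals), np.trace(A))         # 5.0

# Characteristic polynomial lambda^2 - tr(A) lambda + det(A).
print(np.poly(A))   # [ 1. -5.  5.]
```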

Trace and derivatives

The derivative of the determinant of a matrix-valued function A(t) that is invertible at t is given by Jacobi's formula: \frac{d}{dt} \det(A(t)) = \det(A(t)) \cdot \operatorname{tr}(A^{-1}(t) A'(t)), where A'(t) denotes the entrywise derivative of A with respect to t, and \operatorname{tr} is the trace. This expression, often called the logarithmic derivative of the determinant, arises because the formula can be rewritten as \frac{d}{dt} \log \det(A(t)) = \operatorname{tr}(A^{-1}(t) A'(t)), highlighting the connection between the determinant's growth rate and the trace of the relative change in A(t).

A concrete illustration occurs when considering the determinant of the identity matrix perturbed by a scalar multiple of a fixed matrix A, namely \det(I + tA). Differentiating this at t = 0 yields \operatorname{tr}(A), as the formula simplifies to the trace under the initial condition A(0) = I. This example underscores the first-order sensitivity of the determinant to changes, linking it directly to the trace as a linear functional on matrices.

In the context of Lie groups, the determinant of the matrix exponential provides another bridge to the trace: for any square matrix B, \det(\exp(B)) = \exp(\operatorname{tr}(B)). This identity follows from applying Jacobi's formula along the curve A(t) = \exp(tB), and it plays a key role in the exponential map from the Lie algebra (matrices with the trace as a linear invariant) to the Lie group (matrices with the determinant as a multiplicative character).

For higher-order derivatives of \det(A(t)), expressions can be derived using Faà di Bruno's formula for the chain rule on composite functions, treating the determinant as a composition involving the exponential of the log-determinant. Specifically, the nth derivative involves sums over partitions of n, incorporating traces of products of the derivatives A^{(k)}(t) weighted by combinatorial coefficients and powers of \det(A(t)). These formulas reveal the determinant's nonlinear response to perturbations but grow combinatorially complex for large n.
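Jacobi's formula and the exponential-trace identity can be verified numerically; a sketch assuming NumPy and SciPy (for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))

# det(exp(B)) = exp(tr(B)).
assert np.isclose(np.linalg.det(expm(B)), np.exp(np.trace(B)))

# Jacobi's formula at t = 0 for A(t) = I + t*B:
# d/dt det(I + tB) |_{t=0} = tr(B); compare with a finite difference.
h = 1e-6
numeric = (np.linalg.det(np.eye(4) + h * B) - 1.0) / h
assert np.isclose(numeric, np.trace(B), atol=1e-4)
```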

Upper and lower bounds

Hadamard's inequality provides an upper bound on the absolute value of the determinant of an n \times n real matrix A in terms of the Euclidean norms of its columns: |\det(A)| \leq \prod_{i=1}^n \|\mathbf{a}_{\cdot i}\|_2, where \mathbf{a}_{\cdot i} denotes the i-th column vector of A. This bound, first proved by Jacques Hadamard, is achieved with equality if and only if the columns of A are pairwise orthogonal.

Another useful upper bound expresses the determinant in terms of the \ell_1-norms of the rows: |\det(A)| \leq \prod_{i=1}^n \sum_{j=1}^n |a_{ij}|. This inequality arises because the absolute value of the determinant is at most the permanent of the entrywise absolute-value matrix |A|, defined as the sum of the absolute values of the terms in the Leibniz formula for the determinant, and this permanent itself satisfies \operatorname{per}(|A|) \leq \prod_{i=1}^n \sum_{j=1}^n |a_{ij}|. Such permanental bounds highlight the connection between the determinant and the permanent: |\det(A)| \leq \operatorname{per}(|A|) holds by the triangle inequality, since dropping the alternating signs in the determinant's expansion can only increase the magnitude of the sum.

For example, consider the matrix A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. Its determinant is \det(A) = -2, so |\det(A)| = 2. The norms of the columns are both \sqrt{2}, and their product is 2, attaining equality in Hadamard's inequality since the columns are orthogonal (their dot product is zero). For orthogonal matrices Q, where Q^\top Q = I, the property \det(Q^\top Q) = \det(I) = 1 implies [\det(Q)]^2 = 1, so |\det(Q)| = 1. In the context of positive definite matrices, which have all positive eigenvalues, the determinant equals the product of these eigenvalues and is thus positive.
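Hadamard's bound is simple to test; a sketch assuming NumPy (hadamard_bound is an illustrative helper):

```python
import numpy as np

def hadamard_bound(A):
    """Product of the Euclidean column norms of A."""
    return np.prod(np.linalg.norm(A, axis=0))

# Orthogonal columns attain equality.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
print(abs(np.linalg.det(A)), hadamard_bound(A))   # 2.0  2.0000...

# A random matrix typically falls strictly below the bound.
rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
assert abs(np.linalg.det(B)) <= hadamard_bound(B) + 1e-12
```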

Applications

Cramer's rule for systems of equations

Cramer's rule provides an explicit formula for solving a system of linear equations using determinants. Named after the Swiss mathematician Gabriel Cramer, who first published the general form for an arbitrary number of unknowns in 1750, the rule expresses the solution components directly in terms of ratios of determinants.

Consider an n \times n system of linear equations A \mathbf{x} = \mathbf{b}, where A is the coefficient matrix, \mathbf{x} is the vector of unknowns, and \mathbf{b} is the constant vector. If \det(A) \neq 0, the system has a unique solution given by x_i = \frac{\det(A_i)}{\det(A)}, for i = 1, 2, \dots, n, where A_i is the matrix obtained by replacing the i-th column of A with \mathbf{b}. This requires the matrix A to be square and invertible, ensuring the denominator is nonzero.

For illustration, solve the 2 \times 2 system \begin{cases} 2x + y = 5, \\ x + y = 3. \end{cases} Here, A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \mathbf{b} = \begin{pmatrix} 5 \\ 3 \end{pmatrix}, so \det(A) = 2 \cdot 1 - 1 \cdot 1 = 1. Then, A_x = \begin{pmatrix} 5 & 1 \\ 3 & 1 \end{pmatrix}, \quad \det(A_x) = 5 \cdot 1 - 1 \cdot 3 = 2, yielding x = 2/1 = 2. Similarly, A_y = \begin{pmatrix} 2 & 5 \\ 1 & 3 \end{pmatrix}, \quad \det(A_y) = 2 \cdot 3 - 5 \cdot 1 = 1, so y = 1/1 = 1. This confirms the solution (x, y) = (2, 1).

Naively implemented, Cramer's rule requires computing n+1 determinants, each of order n; with cofactor expansion this leads to a complexity of O(n \cdot n!), which renders it impractical for large n despite its theoretical elegance.
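A direct, illustrative implementation of Cramer's rule on the worked example, assuming NumPy (cramer_solve is a hypothetical helper, not a library routine):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule; assumes det(A) != 0.
    Purely illustrative: fine for small systems, wasteful for large ones."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace column i with the constant vector
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
b = np.array([5.0, 3.0])
print(cramer_solve(A, b))         # [2. 1.], matching the worked example
```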

Linear independence and basis orientation

In linear algebra, a set of n vectors \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n in \mathbb{R}^n forms a basis if and only if they are linearly independent, which can be tested by forming the matrix A with these vectors as columns and computing its determinant: the vectors are linearly independent if \det(A) \neq 0, and linearly dependent otherwise. This criterion arises because a zero determinant implies A is singular, meaning the columns satisfy a nontrivial relation A \mathbf{x} = \mathbf{0} for some \mathbf{x} \neq \mathbf{0}. Geometrically, a nonzero determinant corresponds to the vectors spanning a parallelepiped of positive volume, confirming their independence.

The sign of the determinant further reveals the orientation of the basis relative to the standard basis. If \det(A) > 0, the basis preserves the standard orientation; if \det(A) < 0, it reverses it. This property distinguishes oriented bases in vector spaces, essential for concepts like handedness in \mathbb{R}^3.

For example, consider the vectors \mathbf{v}_1 = (1, 0, 0), \mathbf{v}_2 = (0, 1, 0), and \mathbf{v}_3 = (1, 1, 1) in \mathbb{R}^3. The matrix is A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}. Its determinant is \det(A) = 1 \neq 0, so the vectors are linearly independent and form a basis; the positive sign indicates the same orientation as the standard basis.

In inner product spaces, the Gram determinant provides an alternative test for linear independence without directly forming the coordinate matrix. The Gram matrix G has entries G_{ij} = \langle \mathbf{v}_i, \mathbf{v}_j \rangle, and the vectors are linearly independent if and only if \det(G) > 0 (for positive definite inner products, ensuring G is positive definite). This determinant equals the square of the volume of the parallelepiped spanned by the vectors, reinforcing independence when nonzero.
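Both tests, the coordinate determinant and the Gram determinant, applied to the example above; a sketch assuming NumPy:

```python
import numpy as np

v1, v2, v3 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([1.0, 1, 1])
A = np.column_stack([v1, v2, v3])
print(np.linalg.det(A))    # 1.0 != 0: independent, same orientation as standard

# Gram-determinant test: det(G) > 0 iff the vectors are independent,
# and det(G) equals the squared volume of the spanned parallelepiped.
G = A.T @ A                # G_ij = <v_i, v_j>
print(np.linalg.det(G))    # 1.0 = (volume)^2 = det(A)^2
```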

Jacobian determinant in multivariable calculus

In multivariable calculus, the Jacobian matrix of a differentiable map f: \mathbb{R}^n \to \mathbb{R}^n, denoted J_f, is the n \times n matrix whose entries are the partial derivatives (J_f)_{ij} = \frac{\partial f_i}{\partial x_j}. The Jacobian determinant, \det(J_f), quantifies the local scaling factor of volumes under the transformation induced by f.

A primary application of the Jacobian determinant arises in the change of variables formula for multiple integrals. For a continuously differentiable, injective map f: S \to R with nonvanishing Jacobian determinant, the integral transforms as \int_R g(x) \, dx = \int_S g(f(u)) |\det(J_f(u))| \, du, where the absolute value accounts for scaling regardless of orientation reversal. This formula generalizes the substitution rule from single-variable calculus, enabling evaluation of integrals in more convenient coordinates.

The inverse function theorem relies on the determinant to establish local invertibility. If f is continuously differentiable and \det(J_f(a)) \neq 0 at a point a \in \mathbb{R}^n, then there exist neighborhoods U around a and V = f(U) around f(a) such that f restricts to a diffeomorphism from U to V, with continuously differentiable inverse. The condition \det(J_f(a)) \neq 0 ensures the derivative at a is invertible, guaranteeing the local bijectivity of f.

A classic example is the transformation to polar coordinates in the plane, where f(r, \theta) = (r \cos \theta, r \sin \theta) for r > 0, \theta \in [0, 2\pi). The Jacobian matrix is J_f = \begin{pmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta \end{pmatrix}, with \det(J_f) = r. Thus, the area element changes as dx \, dy = r \, dr \, d\theta, simplifying integrals over circular regions, such as \iint_R dx \, dy = \int_0^{2\pi} \int_0^a r \, dr \, d\theta = \pi a^2.

The sign of the Jacobian determinant also determines orientation preservation. If \det(J_f(p)) > 0 at a point p, the map f locally preserves orientation near p, mapping right-handed bases to right-handed bases; a negative determinant reverses orientation. This property is crucial for consistent volume interpretations in integrals and geometric applications.
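The polar-coordinate Jacobian can be recovered numerically by finite differences; a sketch assuming NumPy (jacobian_det is an illustrative helper):

```python
import numpy as np

def polar_map(u):
    r, theta = u
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def jacobian_det(f, u, h=1e-6):
    """Jacobian determinant of f: R^n -> R^n at u via central differences."""
    n = len(u)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(u + e) - f(u - e)) / (2 * h)
    return np.linalg.det(J)

u = np.array([2.0, 0.7])            # (r, theta)
print(jacobian_det(polar_map, u))   # approximately 2.0 = r, as derived above
```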

Algebraic Foundations

Determinants of endomorphisms

In the context of linear algebra over a field, the determinant of an endomorphism T: V \to V, where V is a finite-dimensional vector space, is defined by selecting any basis for V and computing the determinant of the matrix representation of T with respect to that basis; this value is independent of the choice of basis. This basis independence arises because the determinant measures the scaling factor of the induced action on the top exterior power of V, a one-dimensional space on which the action of T is defined without reference to coordinates.

Key properties of the determinant for endomorphisms include multiplicativity and the value on the identity map. Specifically, for any endomorphisms T, S: V \to V, \det(T \circ S) = \det(T) \det(S), reflecting the compatibility of determinants with composition. Additionally, \det(\mathrm{id}_V) = 1, as the identity matrix in any basis has determinant 1.

Under a change of basis represented by an invertible matrix P, the matrix of T transforms to P^{-1} [T] P, but the determinant remains unchanged: \det(P^{-1} [T] P) = \det([T]). This invariance underscores the determinant as an intrinsic property of the endomorphism itself, rather than its coordinate representation.

A representative example is the rotation of \mathbb{R}^2 by an angle \theta, with matrix \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}, which has determinant \cos^2 \theta + \sin^2 \theta = 1. This positive determinant indicates that the rotation preserves orientation, distinguishing it from reflections.
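A quick numerical illustration of basis invariance and the rotation example, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((3, 3))    # matrix of T in one basis
P = rng.standard_normal((3, 3))    # change-of-basis matrix (invertible a.s.)

# Similar matrices share the determinant: det(P^{-1} M P) = det(M).
M2 = np.linalg.inv(P) @ M @ P
assert np.isclose(np.linalg.det(M2), np.linalg.det(M))

# Rotation by theta: determinant 1, so orientation and area are preserved.
theta = 0.9
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(np.linalg.det(R), 1.0)
```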

Matrices over commutative rings

The determinant of an n \times n matrix A = (a_{ij}) with entries in a commutative ring R with identity is defined using the Leibniz formula: \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}, where S_n is the symmetric group on n letters and \operatorname{sgn}(\sigma) is the sign of the permutation \sigma. This expression yields an element of R, and the formula is well-defined due to the commutativity of R, which ensures that the products are unambiguous. This definition coincides with the axiomatic characterization of the determinant as the unique alternating multilinear function on R^n normalized by \det(I) = 1, extended from fields to commutative rings.

The standard properties of the determinant over fields, such as multilinearity in the rows, alternation under row swaps, and multiplicativity \det(AB) = \det(A) \det(B) for any square matrices A, B \in M_n(R), hold over commutative rings with identity. Multiplicativity follows from the Leibniz formula or the axiomatic properties and does not require R to be an integral domain. For example, over the integers \mathbb{Z}, the matrix \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} has determinant \det = 2 \cdot 3 - 0 \cdot 0 = 6 \in \mathbb{Z}.

A square matrix A \in M_n(R) is invertible over R if and only if \det(A) is a unit in R. In particular, if \det(A) = 0, then A is singular (non-invertible), as 0 is never a unit. The adjugate \operatorname{adj}(A), defined entrywise via (n-1) \times (n-1) minors with signs, satisfies the relation A \cdot \operatorname{adj}(A) = \operatorname{adj}(A) \cdot A = \det(A) I_n, which holds over commutative rings. However, the inverse is given by A^{-1} = \det(A)^{-1} \operatorname{adj}(A) only when \det(A) is invertible in R.

Beyond fields, additional subtleties arise: a nonzero determinant may fail to be a unit (e.g., \det = 6 in \mathbb{Z}), rendering the matrix non-invertible over the ring, while in rings with zero-divisors, zero-divisor values of \det(A) complicate linear dependence and singularity interpretations beyond the field case. The adjugate always exists but cannot be used to invert via scalar multiplication if \det(A) is not a unit, limiting applications like Cramer's rule to cases where the relevant determinants are units.
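Exact integer arithmetic makes the adjugate identity easy to verify over \mathbb{Z}; a sketch in plain Python (int_det and int_adjugate are illustrative names):

```python
def int_det(A):
    """Exact integer determinant by cofactor expansion (fine for small n)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               int_det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def int_adjugate(A):
    """Adjugate: entry (i, j) is the cofactor C_ji (transposed indices)."""
    n = len(A)
    return [[(-1) ** (i + j) * int_det(
                [r[:i] + r[i + 1:] for k, r in enumerate(A) if k != j])
             for j in range(n)] for i in range(n)]

A = [[2, 0], [0, 3]]          # matrix over the integers Z
d = int_det(A)                # 6: nonzero in Z, but 6 is not a unit,
                              # so A is not invertible over Z itself
adj = int_adjugate(A)
prod = [[sum(A[i][k] * adj[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(d, prod)                # 6 [[6, 0], [0, 6]] = det(A) * I_2
```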

Exterior algebra construction

The determinant of a linear map T: V \to V on an n-dimensional vector space V over a field F can be constructed using the exterior algebra of V. Specifically, the nth exterior power \wedge^n V is a one-dimensional vector space over F, and T induces a linear map \wedge^n T: \wedge^n V \to \wedge^n V. Since \dim(\wedge^n V) = 1, this induced map is multiplication by a scalar \det(T) \in F, which defines the determinant of T.

To see this explicitly, let \{e_1, \dots, e_n\} be a basis for V. Then \{e_1 \wedge \dots \wedge e_n\} forms a basis for \wedge^n V. The action of \wedge^n T on this basis element is given by \wedge^n T(e_1 \wedge \dots \wedge e_n) = T(e_1) \wedge \dots \wedge T(e_n) = \det(T) \, (e_1 \wedge \dots \wedge e_n), where \det(T) is the unique scalar satisfying this equation, determined by expressing T(e_i) in the basis and using the multilinearity and alternating properties of the wedge product.

This construction inherits key properties of the determinant from the functorial nature of the exterior power. For instance, the multiplicativity \det(T \circ S) = \det(T) \det(S) follows directly from the functoriality \wedge^n (T \circ S) = (\wedge^n T) \circ (\wedge^n S), and \det(\mathrm{id}_V) = 1 arises because \wedge^n (\mathrm{id}_V) is the identity on \wedge^n V. Invertibility of T implies \det(T) \neq 0, as \wedge^n T would otherwise have a nontrivial kernel in the one-dimensional space.

As an illustrative example, consider V = \mathbb{R}^2 with the standard basis \{e_1, e_2\}. Here, \wedge^2 \mathbb{R}^2 \cong \mathbb{R}, spanned by e_1 \wedge e_2, and the determinant of T measures the signed area scaling factor: if T is represented by the matrix \begin{pmatrix} a & b \\ c & d \end{pmatrix}, then T(e_1) \wedge T(e_2) = (ad - bc) (e_1 \wedge e_2), so \det(T) = ad - bc. This geometric interpretation underscores the determinant's role in preserving or scaling oriented volumes.

Advanced Topics

Berezin integral in supersymmetry

The Berezin integral provides a formalism for integrating over anticommuting Grassmann variables, which are essential in supersymmetry to describe fermionic degrees of freedom. For a single Grassmann variable \theta satisfying \theta^2 = 0, the integral is defined by the rules \int d\theta \cdot 1 = 0 and \int d\theta \cdot \theta = 1, where the integral acts as a linear functional extracting the coefficient of the highest-degree monomial in the expansion of the integrand. This definition extends naturally to multiple Grassmann variables \theta^1, \dots, \theta^n, where \int d\theta^1 \dots d\theta^n f(\theta) yields the coefficient of the monomial \theta^1 \dots \theta^n in the Taylor expansion of f, ensuring anticommutativity under variable exchange.

In supersymmetry, the Berezin integral is employed in path integrals over superspace, which combines bosonic (commuting) and fermionic (anticommuting) coordinates. For Gaussian forms involving fermionic variables, the integral evaluates to a power of the determinant of the quadratic form matrix. Specifically, for a fermionic Gaussian integral over paired Grassmann variables \eta and \theta, \int d\eta \, d\theta \, \exp(\theta^\top A \eta) = \det(A), where A is the matrix in the quadratic exponent; in the purely fermionic case without sources, it simplifies to \det(A)^{1/2} up to normalization. This contrasts with the bosonic Gaussian integral \int dx \, \exp(-\frac{1}{2} x^\top A x) = (2\pi)^{n/2} \det(A)^{-1/2}, highlighting how supersymmetric theories balance bosonic and fermionic contributions, often leading to cancellations in partition functions.

A key role of the determinant in these integrals arises in supersymmetric quantum field theories, where fermionic fluctuations generate effective actions involving \det(A)^{1/2}. For skew-symmetric matrices A typical in Majorana fermion representations, this square root is expressed as the Pfaffian, \operatorname{Pf}(A), satisfying [\operatorname{Pf}(A)]^2 = \det(A). The Pfaffian itself can be represented as a Berezin integral: \operatorname{Pf}(A) = \int d\theta \, \exp(-\frac{1}{2} \theta^\top A \theta), providing a direct link between fermionic integration and matrix invariants in supersymmetric models.

As a simple example, consider a one-dimensional fermionic system with two real Grassmann variables \theta_1 and \theta_2, where the action is governed by a 2×2 skew-symmetric matrix A = \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} with a > 0. The Gaussian integral \int d\theta_1 \, d\theta_2 \, \exp(-\frac{1}{2} \theta^\top A \theta) = a = \det(A)^{1/2}, illustrating how the Berezin integral yields the square root of the determinant, which encodes the fermionic contribution in supersymmetric partition functions.
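The relation [\operatorname{Pf}(A)]^2 = \det(A) can be checked with a small recursive Pfaffian; a sketch assuming NumPy (pfaffian here uses expansion along the first row and is illustrative, not a library routine):

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional skew-symmetric matrix by recursive
    expansion along the first row (exponential time; illustration only)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

a = 2.5
A2 = np.array([[0.0, a], [-a, 0.0]])
assert np.isclose(pfaffian(A2), a)                     # matches the example

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4)); S = B - B.T           # random skew-symmetric
assert np.isclose(pfaffian(S) ** 2, np.linalg.det(S))  # Pf(A)^2 = det(A)
```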

Determinants for finite-dimensional algebras

In the context of finite-dimensional algebras over a field k, the determinant is defined via the regular representation of the algebra A, which views A as a left module over itself with action by left multiplication. This induces a faithful embedding A \hookrightarrow \operatorname{End}_k(A), where \operatorname{End}_k(A) is the algebra of k-linear endomorphisms of the vector space A of dimension n = \dim_k A. For an element a \in A, the left multiplication map L_a: A \to A, x \mapsto a x, is an element of \operatorname{End}_k(A) \cong M_n(k), and the determinant \det(A/k)(a) := \det(L_a) is the usual matrix determinant of L_a with respect to a basis of A. This yields a map \det(A/k): A \to k that is multiplicative, restricting to a group homomorphism from the group of invertible elements of A to the multiplicative group of k, vanishing on non-invertible elements and compatible with base change.

For semisimple finite-dimensional algebras, the Artin–Wedderburn theorem provides a decomposition that elucidates the trace and determinant. The theorem states that such an algebra A over an algebraically closed field is isomorphic to a finite product \prod_{i=1}^r M_{n_i}(k), where each M_{n_i}(k) is the full matrix algebra of degree n_i. The regular representation then decomposes accordingly, with the trace \operatorname{tr}(a) := \operatorname{tr}(L_a) being the sum of the traces on each block, and the determinant factoring as the product \det(A/k)(a) = \prod_{i=1}^r \det(M_{n_i}(k))(a_i)^{m_i}, where a = (a_1, \dots, a_r) in the product decomposition and m_i accounts for the multiplicity in the module structure. This structure preserves multiplicativity and allows computation of invariants like the discriminant of A, defined as the determinant of the trace form A \times A \to k, (a,b) \mapsto \operatorname{tr}(a b).

A prominent example arises in the group algebra A = \mathbb{C}[G] for a finite group G, where \dim A = |G| and the regular representation is the left regular action on \mathbb{C}[G]. For an element \gamma = \sum_{g \in G} c_g g \in \mathbb{C}[G], the determinant \det(L_\gamma) equals \prod_{\rho} \left[ \det(\rho(\gamma)) \right]^{\dim \rho}, where the product runs over all irreducible representations \rho of G (up to isomorphism) and \dim \rho is the dimension of \rho. In particular, for a group element g \in G, this simplifies to \det(L_g) = \prod_{\rho} \left[ \det(\rho(g)) \right]^{\dim \rho}, where each \det(\rho(g)) is a root of unity encoding the action on the top exterior power of \rho. This formula connects the determinant to the character table of G, as the group determinant (the polynomial \det((X_{gh^{-1}})_{g,h \in G})) factors into linear terms over the irreducible characters for abelian G, and more generally reflects the representation-theoretic decomposition.

These determinants in finite-dimensional algebras, particularly for group algebras, have connections to number theory; for instance, the group determinant was historically studied by Dedekind in connection with resolvents and discriminants in algebraic number fields, influencing early analytic methods akin to those used in class number formulas.
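For an abelian group the formula is easy to test concretely: in \mathbb{C}[\mathbb{Z}/3] the left-multiplication operator is a circulant matrix, and its determinant factors over the three characters. A sketch assuming NumPy:

```python
import numpy as np

# Regular representation of C[Z/3]: left multiplication by
# gamma = c0*e + c1*g + c2*g^2 in the basis {e, g, g^2} is circulant.
c = np.array([2.0, 1.0, 0.5])
L = np.array([[c[0], c[2], c[1]],
              [c[1], c[0], c[2]],
              [c[2], c[1], c[0]]])

# Z/3 is abelian: the irreducibles are the characters chi_k(g) = w^k,
# so det(L_gamma) = prod_k (c0 + c1 w^k + c2 w^{2k}).
w = np.exp(2j * np.pi / 3)
factors = [c[0] + c[1] * w**k + c[2] * w**(2 * k) for k in range(3)]
assert np.isclose(np.linalg.det(L), np.prod(factors).real)   # both 6.125
```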

Generalizations

Infinite-dimensional matrices

In infinite-dimensional Hilbert spaces, determinants are generalized through the concept of the Fredholm determinant, which applies to operators of the form I + K, where I is the identity operator and K is a trace-class operator. A trace-class operator K on a separable Hilbert space H satisfies \|K\|_1 = \operatorname{tr}(|K|) < \infty, where |K| = \sqrt{K^* K} and the trace is the sum of the singular values of K. This framework extends the finite-dimensional determinant to handle perturbations of the identity by compact operators with summable singular values.

The Fredholm determinant is defined as \det(I + K) = \prod_{n=1}^\infty (1 + \lambda_n(K)), where \{\lambda_n(K)\}_{n=1}^\infty are the eigenvalues of K counted with multiplicity and ordered by decreasing modulus, ensuring the infinite product converges absolutely due to the trace-class condition \sum_{n=1}^\infty |\lambda_n(K)| < \infty. Equivalently, it can be expressed as \det(I + K) = \exp(\operatorname{tr} \log(I + K)), where the logarithm is well-defined for I + K invertible and the trace exists because \log(I + K) is also trace-class. This exponential-trace form arises from the series expansion \log(I + K) = \sum_{n=1}^\infty (-1)^{n+1} \frac{K^n}{n}, which converges in trace norm for \|K\|_1 < 1 and extends analytically.

Key properties include multiplicativity: for trace-class operators A and B, \det(I + A + B + AB) = \det(I + A) \det(I + B), mirroring the finite-dimensional case. For fixed trace-class K, the map z \mapsto \det(I + zK) is an entire analytic function of z. Additionally, \det(I + K) \neq 0 if and only if I + K is invertible, providing an invertibility criterion in this setting. Continuity holds under trace-norm convergence: if K_n \to K in \| \cdot \|_1, then \det(I + K_n) \to \det(I + K).

A representative example is the integral operator K on L^2[0,1] with kernel K(x,y) = \min(x,y)(1 - \max(x,y)), the Green's function for the second-derivative operator with Dirichlet boundary conditions. This operator is diagonalized by the sine functions \sin(n\pi x), with eigenvalues \lambda_n(K) = 1/(\pi^2 n^2), which are summable since \sum 1/n^2 < \infty, confirming trace-class membership. The Fredholm determinant is then \det(I - z K) = \sin(\sqrt{z})/\sqrt{z}, an explicit closed form illustrating convergence of the product \prod_{n=1}^\infty (1 - z/(\pi^2 n^2)).

In quantum mechanics, Fredholm determinants play a crucial role in scattering theory, particularly for analyzing resonances and the inverse scattering problem for Schrödinger operators on the line. For instance, they encode transmission coefficients and phase shifts in one-dimensional potentials, connecting spectral properties to scattering data via determinants of trace-class perturbations of the free resolvent.
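The closed form for this kernel can be compared against a truncation of the eigenvalue product; a sketch assuming NumPy:

```python
import numpy as np

# Truncated eigenvalue product for the kernel min(x,y)(1 - max(x,y)),
# whose eigenvalues are 1/(pi^2 n^2): det(I - zK) = sin(sqrt(z))/sqrt(z).
z = 2.0
n = np.arange(1, 200001)
truncated = np.prod(1.0 - z / (np.pi**2 * n**2))
closed_form = np.sin(np.sqrt(z)) / np.sqrt(z)
print(truncated, closed_form)   # both approximately 0.6985
```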

Determinants in operator algebras

In operator algebras, particularly within the framework of von Neumann algebras, determinants are generalized to accommodate infinite-dimensional settings and non-commutative structures, providing tools to measure the "size" or invertibility properties of operators in a trace-preserving manner. These regularized determinants extend classical notions from finite matrices to infinite-dimensional operators, where the usual determinant would diverge due to the absence of a finite-dimensional trace. The primary construction is the Fuglede–Kadison determinant, originally developed for finite von Neumann algebras, which leverages the trace to define a multiplicative functional on the group of invertible operators.

In a type II_1 factor, a finite von Neumann algebra equipped with a faithful normal tracial state \tau, the Fuglede–Kadison determinant of an invertible operator T is defined as \det_\tau(T) = \exp\left(\tau(\log |T|)\right), where |T| = \sqrt{T^* T} is the absolute value of T, and \log |T| is the spectral logarithm, well-defined for invertible T by the functional calculus. This determinant is multiplicative, \det_\tau(ST) = \det_\tau(S) \det_\tau(T) for invertible S and T, and reduces to (a normalized power of) the classical determinant in finite-dimensional cases. For projections p and q in such algebras, the trace aligns with Murray–von Neumann equivalence: p \sim q if and only if \tau(p) = \tau(q), with \tau(p) serving as the dimension function that quantifies their equivalence class.

A concrete example arises in the abelian von Neumann algebra L^\infty[0,1] acting on L^2[0,1] with the Lebesgue trace \tau(f) = \int_0^1 f(x) \, dx. For a multiplication operator T_f by an invertible function f \in L^\infty[0,1], the Fuglede–Kadison determinant simplifies to \det_\tau(f) = \exp\left( \int_0^1 \log |f(x)| \, dx \right), which is the exponential of the integral of the logarithm, i.e., the geometric mean of |f|.

These determinants play a crucial role in index theory and algebraic K-theory of operator algebras. In K-theory, the Fuglede–Kadison determinant computes part of the K_1-group of finite von Neumann algebras, relating to Whitehead torsion and the structure of unitary groups. Connections to index theory emerge through L^2-torsion invariants in topology, where the determinant provides analytic tools for computing indices of elliptic operators on manifolds with group actions, bridging operator-theoretic and geometric perspectives.
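The multiplication-operator formula amounts to a geometric mean, which is straightforward to approximate; a sketch assuming NumPy (the closed-form value (2 + \sqrt{3})/2 follows from a standard logarithmic integral):

```python
import numpy as np

# Fuglede-Kadison determinant of a multiplication operator on L^2[0,1]:
# det_tau(T_f) = exp( integral_0^1 log|f(x)| dx ), the geometric mean of |f|.
x = np.linspace(0.0, 1.0, 200001)
f = 2.0 + np.sin(2 * np.pi * x)        # invertible in L^inf: f >= 1 > 0
det_fk = np.exp(np.mean(np.log(np.abs(f))))
print(det_fk)                          # about 1.866 = (2 + sqrt(3)) / 2
```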

Non-commutative generalizations

In non-commutative algebra, the determinant requires generalization to handle matrices over division rings (also known as skew fields), where elements do not necessarily commute. The primary such generalization is the Dieudonné determinant, introduced by Jean Dieudonné in 1943, which extends the classical determinant to these settings. For a division ring K and n \times n matrices over K, the Dieudonné determinant \Delta: \mathrm{GL}_n(K) \to K^\times / [K^\times, K^\times] is defined as a surjective group homomorphism to the abelianization of the multiplicative group of K, coinciding with the usual determinant when K is commutative. This determinant is multiplicative, satisfying \Delta(AB) = \Delta(A) \Delta(B) for A, B \in \mathrm{GL}_n(K), but its image lies in the quotient by the commutator subgroup, so it is not always a scalar in the center of K. For singular matrices, \Delta(A) = 0. The kernel is the special linear group \mathrm{SL}_n(K) for n \geq 2, reflecting the structure of the general linear group over division rings. In contrast to commutative cases, the absolute value |\Delta| often provides a norm-like function, mapping to positive elements in the center.

A concrete example arises with the quaternions \mathbb{H} over the reals \mathbb{R}, a non-commutative division ring. Here, the Dieudonné determinant \Delta: \mathrm{GL}_n(\mathbb{H}) \to \mathbb{R}_{>0} factors through the isomorphism \mathbb{H}^\times / [\mathbb{H}^\times, \mathbb{H}^\times] \cong \mathbb{R}_{>0}, yielding a positive real number via the reduced norm \mathrm{Nr}_{\mathbb{H}/\mathbb{R}}, which for a quaternion q = a + bi + cj + dk is \mathrm{Nr}(q) = a^2 + b^2 + c^2 + d^2. For matrices, this reduced-norm composition ensures multiplicativity and positivity, with \Delta(I) = 1 and \Delta zero for non-invertible matrices; for instance, the Study determinant, a related variant, equals the fourth power of the Dieudonné determinant in this context.

Non-commutativity also manifests in polynomial identities satisfied by matrix algebras, notably the Amitsur–Levitzki theorem (1950), which asserts that the n \times n matrix algebra over a commutative ring satisfies the standard polynomial identity of degree 2n, the minimal such degree. This identity, \sum_{\sigma \in S_{2n}} \mathrm{sgn}(\sigma) x_{\sigma(1)} \cdots x_{\sigma(2n)} = 0 when evaluated on matrices, underscores the non-commutative structure and provides a foundation for bounding identities in generalizations of determinants, influencing computations and structural theorems in these algebras.

Computation

Gaussian elimination

Gaussian elimination is a standard algorithmic approach to compute the determinant of an n×n matrix A by performing row operations to transform it into an upper triangular matrix U, after which the determinant is the product of the diagonal entries of U, adjusted by the sign from any row interchanges. The elementary row operations affect the determinant by a known factor: adding a multiple of one row to another leaves it unchanged, scaling a row by a nonzero scalar k multiplies it by k, and interchanging two rows multiplies it by -1. Thus, if s denotes the number of row interchanges (and no rows are rescaled), then \det(A) = (-1)^s \prod_{i=1}^n u_{ii}.

To ensure the process does not encounter zero pivots, which would halt elimination, partial pivoting is typically incorporated. For each position k, the rows from k to n are examined, and the row with the entry of largest absolute value in column k is swapped into the k-th row to serve as the pivot, promoting both numerical stability and progress. Each such swap increments the interchange count s, thereby affecting the sign of the determinant.

As an illustrative example, consider the 3×3 matrix A = \begin{pmatrix} 0 & 2 & 0 \\ 3 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}. The (1,1) entry is zero, so partial pivoting identifies the entry of largest absolute value in the first column as 3 in row 2 and swaps rows 1 and 2 (s=1), yielding \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}. This is already upper triangular, with diagonal entries 3, 2, and 1. Therefore, \det(A) = (-1)^1 \cdot 3 \cdot 2 \cdot 1 = -6.

The computational efficiency of this method arises from its cubic scaling: performing Gaussian elimination on an n×n matrix requires approximately \frac{2}{3}n^3 operations, yielding an overall complexity of O(n^3), comparable to that of matrix inversion via the same technique.
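A compact implementation of this procedure, assuming NumPy (det_by_elimination is an illustrative name):

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3)."""
    U = A.astype(float).copy()
    n, sign = U.shape[0], 1.0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))    # pivot row: largest |entry|
        if U[p, k] == 0.0:
            return 0.0                         # singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]              # row interchange flips the sign
            sign = -sign
        for i in range(k + 1, n):              # eliminate below the pivot;
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]   # det is unchanged
    return sign * np.prod(np.diag(U))

A = np.array([[0.0, 2.0, 0.0],
              [3.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
print(det_by_elimination(A))    # -6.0, matching the worked example
```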

Decomposition methods

Decomposition methods provide efficient ways to compute the determinant of a square matrix by factoring it into triangular or other structured forms where the determinant simplifies to a product of diagonal elements or related quantities. These techniques are particularly useful in numerical practice for avoiding direct expansion, often integrating with elimination processes as a preprocessing step.

The LU decomposition, also known as lower-upper factorization, expresses a square matrix A as the product of a lower triangular matrix L with unit diagonal entries and an upper triangular matrix U, such that A = LU. The determinant of A is then \det(A) = \det(L) \det(U). Since L has 1s on its diagonal, \det(L) = 1, so \det(A) = \prod_{i=1}^n u_{ii}, the product of the diagonal elements of U. This factorization can be computed via Gaussian elimination with partial pivoting, though pivoting introduces a permutation matrix P such that PA = LU, and the determinant adjusts by the sign of the permutation: \det(A) = \det(P) \prod_{i=1}^n u_{ii}. The computational cost is approximately \frac{2}{3}n^3 floating-point operations for an n \times n matrix, making it suitable for dense matrices.

For illustration, consider the 2×2 matrix A = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}. The factorization yields L = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} and U = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}, so \det(A) = 2 \times 1 = 2. Direct expansion confirms \det(A) = 2 \cdot 3 - 1 \cdot 4 = 2, verifying the result.

The QR decomposition factors A into an orthogonal matrix Q and an upper triangular matrix R, such that A = QR. Here, \det(A) = \det(Q) \det(R). Since Q is orthogonal, \det(Q) = \pm 1, and \det(R) = \prod_{i=1}^n r_{ii} (assuming R has no zero diagonal entries for nonsingular A), so \det(A) = \pm \prod_{i=1}^n r_{ii}. The sign depends on the number of reflections in the Householder transformations typically used for the factorization. This method is stable for ill-conditioned matrices and costs about 2n^3 operations, often preferred when orthogonality aids in further numerical tasks.

For symmetric positive definite matrices, the Cholesky decomposition offers a specialized factorization A = LL^T, where L is lower triangular with positive diagonal entries. The determinant is \det(A) = \det(L) \det(L^T) = [\det(L)]^2 = \left( \prod_{i=1}^n l_{ii} \right)^2. This exploits the symmetry and positive definiteness to ensure real, positive diagonals and requires roughly half the operations of LU decomposition, about \frac{1}{3}n^3. It is widely used in optimization and statistical applications involving covariance matrices.
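All three factorizations can be exercised on the worked example; a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import lu, qr, cholesky

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])

# LU with pivoting: A = P L U, so det(A) = det(P) * prod(diag(U)).
P, L, U = lu(A)
print(np.linalg.det(P) * np.prod(np.diag(U)))     # 2.0

# QR: det(A) = det(Q) * prod(diag(R)), with det(Q) = +/-1.
Q, R = qr(A)
print(np.linalg.det(Q) * np.prod(np.diag(R)))     # 2.0

# Cholesky for a symmetric positive definite matrix: det = prod(diag(L))^2.
S = A @ A.T                                       # SPD by construction
Lc = cholesky(S, lower=True)
print(np.prod(np.diag(Lc))**2, np.linalg.det(S))  # both 4.0 (= det(A)^2)
```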

Specialized algorithms

Specialized algorithms exploit the structure of particular matrix classes to compute determinants more efficiently than general-purpose methods, achieving reduced time complexity for matrices like tridiagonal, Toeplitz, or Vandermonde forms. These approaches leverage recurrences or closed-form expressions inherent to the matrix's banded or patterned entries, often attaining linear or quadratic scaling in the matrix dimension n. For tridiagonal matrices, where non-zero entries are confined to the main diagonal and the adjacent sub- and super-diagonals, the determinant satisfies a linear recurrence relation that enables computation in O(n) time. Let A_n denote an n \times n tridiagonal matrix with diagonal entries a_1, \dots, a_n, subdiagonal entries b_1, \dots, b_{n-1}, and superdiagonal entries c_1, \dots, c_{n-1}. Define d_k = \det(A_k) for k = 0, 1, \dots, n, with d_0 = 1 and d_1 = a_1. Then, the recurrence is d_k = a_k d_{k-1} - b_{k-1} c_{k-1} d_{k-2}, \quad k = 2, \dots, n, and \det(A_n) = d_n. This method avoids full matrix factorization and is numerically stable for well-conditioned cases, as implemented in hybrid algorithms combining the recurrence with error checks. Toeplitz matrices, characterized by constant values along each diagonal, admit an O(n^2) algorithm via the Levinson-Durbin recursion, originally developed for solving Yule-Walker equations in autoregressive modeling. For a symmetric positive definite Toeplitz matrix T_n with first row [r_0, r_1, \dots, r_{n-1}] where r_0 > 0 and |r_k| < r_0, the algorithm computes the factorization T_n = L D L^T, where L is unit lower triangular and D is diagonal. The determinant is then \det(T_n) = \prod_{k=1}^n d_{kk}, with the diagonal entries d_{kk} obtained recursively alongside the reflection coefficients. This exploits the Toeplitz structure through order-recursion updates, reducing operations from O(n^3) to O(n^2). A prominent example is the Vandermonde matrix V with entries v_{ij} = x_i^{j-1} for distinct scalars x_1, \dots, x_n. Its determinant has a closed-form product expression: \det(V) = \prod_{1 \leq i < j \leq n} (x_j - x_i), computable directly in O(n^2) time by iterating over pairs, bypassing elimination entirely thanks to this explicit factorization. The formula underscores the matrix's role in uniqueness proofs for polynomial interpolants. In parallel computing environments, specialized algorithms adapt decomposition techniques for distributed architectures. Block LU factorization partitions the matrix into subblocks processed concurrently across processors, with each step involving local factorizations and broadcasts of Schur complements; the determinant is the product of the diagonal entries of the resulting upper triangular factor. This distributes the O(n^3) work over p processors, roughly O(n^3 / p) per processor for dense matrices, as in scalable implementations for distributed-memory systems. Distributed Gaussian elimination extends this by row-wise partitioning and pipelined eliminations over a process grid, maintaining the O(n^3 / p) overall scaling while computing the determinant via pivoted triangularization. Such methods are integral to libraries like ScaLAPACK for large-scale linear algebra.
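
Both closed forms translate directly into short code. The following Python sketch (the function names det_tridiagonal and det_vandermonde are illustrative, not from any particular library) implements the three-term recurrence and the pairwise product above:

def det_tridiagonal(a, b, c):
    """O(n) determinant of a tridiagonal matrix with main diagonal a,
    subdiagonal b, and superdiagonal c, via d_k = a_k d_{k-1} - b_{k-1} c_{k-1} d_{k-2}."""
    d_prev, d = 1.0, a[0]                  # d_0 = 1, d_1 = a_1
    for k in range(1, len(a)):
        d_prev, d = d, a[k] * d - b[k - 1] * c[k - 1] * d_prev
    return d

def det_vandermonde(x):
    """O(n^2) closed form: product of (x_j - x_i) over all pairs i < j."""
    det = 1.0
    for j in range(len(x)):
        for i in range(j):
            det *= x[j] - x[i]
    return det

# Tridiagonal [[2,1,0],[1,2,1],[0,1,2]]: d_2 = 3, d_3 = 4.
print(det_tridiagonal([2.0, 2.0, 2.0], [1.0, 1.0], [1.0, 1.0]))  # 4.0
# Vandermonde nodes 1, 2, 3: (2-1)(3-1)(3-2) = 2.
print(det_vandermonde([1.0, 2.0, 3.0]))                          # 2.0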

Historical Development

Early origins

Precursors to determinants appear in ancient China. The Nine Chapters on the Mathematical Art (c. 200–100 BC) describes methods for solving systems of linear equations using array manipulations akin to Gaussian elimination. The concept of the determinant emerged in the 16th century through efforts to solve cubic equations, where the Italian mathematician Gerolamo Cardano implicitly employed determinant-like ratios in his 1545 treatise Ars Magna. Cardano's regula de modo provided a method for handling proportions in systems of linear equations arising from cubic solutions, laying foundational groundwork for later explicit formulations. In the late 17th century, the Japanese mathematician Seki Takakazu (1642–1708), also known as Seki Kowa, independently developed an elimination technique for simultaneous equations that incorporated the determinant as a unique scalar value. His 1683 manuscript Kai-fukudai no hō described properties of 2×2 determinants and their role in solving quadratic systems, predating parallel European work by a decade. Seki's approach synthesized earlier Chinese methods of array manipulation and emphasized expansion rules akin to modern Laplace expansions. Around the same period, the German mathematician Gottfried Wilhelm Leibniz independently formulated the determinant concept between 1678 and 1693, driven by the need to resolve systems of linear equations. In a 1693 letter to Guillaume de l'Hôpital, Leibniz introduced the term "determinans" for the 2×2 case, explaining its computation as the difference of products of coefficients and highlighting its utility in elimination theory. His work extended to 3×3 arrays, motivated by geometric volumes like those of parallelepipeds. The 18th century saw further consolidation of these ideas in Europe. The Swiss mathematician Gabriel Cramer articulated a general rule in his 1750 book Introduction à l'analyse des lignes courbes algébriques, stating that solutions to n linear equations could be found as ratios of n×n determinants, though he provided no full proof. The Italian-French mathematician Joseph-Louis Lagrange advanced the theory in 1773 through studies in arithmetic and elimination, proving properties such as the effect of row operations on determinants and their invariance under certain transformations.

Key contributors and milestones

In 1772, Pierre-Simon Laplace introduced the expansion of a determinant along a row or column using signed minors, known today as the Laplace expansion, which provided a recursive method for computing determinants of square matrices. This milestone formalized an approach that built on earlier rudimentary calculations, such as those for 3×3 matrices, and became a cornerstone for later proofs. Augustin-Louis Cauchy advanced the theory significantly in 1812 by introducing the term "determinant" in its modern sense, proving the multiplicative property det(AB) = det(A) det(B), and developing the theory of minors and adjoints. Carl Gustav Jacob Jacobi published an algorithmic definition of the determinant using sums over permutations in 1841, making the concept widely known, and contributed to functional determinants, later called Jacobians. James Joseph Sylvester extended applications in the mid-19th century by linking determinants to resultants, introducing the Sylvester matrix in 1840 to determine common roots of polynomials through its determinant. Jacques Hadamard established the Hadamard inequality in 1893, providing an upper bound on the absolute value of a determinant in terms of the Euclidean norms of its rows, with implications for volume bounds and the maximal determinant problem. In the 20th century, the axiomatic perspective on determinants as multilinear alternating forms became standard, emphasizing their geometric interpretation as oriented volumes. John von Neumann, during the 1930s, developed the framework of rings of operators on Hilbert spaces, laying groundwork for generalizations of determinants to infinite-dimensional settings in operator algebras. Advancements in numerical aspects emerged with J. H. Wilkinson's work in the 1960s, which analyzed rounding errors and stability in determinant computations using Gaussian elimination with partial pivoting, ensuring reliable results in finite-precision arithmetic. Felix Berezin introduced superdeterminants in 1966 as a generalization for supermatrices in the context of his work on second quantization, accommodating even and odd dimensions in supersymmetric theories.
