
Computational complexity of matrix multiplication

The computational complexity of matrix multiplication is a central problem in algebraic complexity theory, concerned with determining the minimal number of arithmetic operations (additions and multiplications) required to compute the product of two matrices over a field, such as the real or complex numbers. For two n × n matrices, the standard row-column method performs Θ(n³) scalar multiplications and additions, but since 1969 a series of algorithms has progressively reduced the asymptotic exponent ω, where the complexity is O(n^ω), with the current best upper bound being ω < 2.371339 as of 2024. This exponent measures the intrinsic difficulty of the problem and has profound implications for algorithms in linear algebra, machine learning, scientific computing, and cryptography, as matrix multiplication is a bottleneck in operations like solving linear systems and neural network training.
The quest for faster matrix multiplication began with Volker Strassen's 1969 discovery of a divide-and-conquer algorithm that multiplies two 2 × 2 matrices using only 7 scalar multiplications instead of 8, yielding an overall complexity of O(n^{log_2 7}) ≈ O(n^{2.807}) for n × n matrices by recursive application. This breakthrough shattered the perceived optimality of the cubic-time method and inspired subsequent improvements, including Victor Pan's 1978 algorithm achieving O(n^{2.795}), Dario Bini et al.'s 1979 approximate approach at O(n^{2.78}), and Arnold Schönhage's 1981 method reaching O(n^{2.548}). By 1990, Don Coppersmith and Shmuel Winograd, applying the "laser method" for tensor analysis introduced by Strassen, had pushed ω < 2.376, a technique that has underpinned most advances since. Further refinements followed, such as the 2012 work by Virginia Vassilevska Williams yielding ω < 2.3729, and the 2022 refinement by Ran Duan, Hongxun Wu, and Renfei Zhou to ω < 2.371866 via asymmetric hashing.
Recent progress has accelerated, driven by sophisticated algebraic geometry and combinatorial techniques. In 2023, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou improved the bound to ω ≤ 2.371552 using a new laser variant that analyzes higher-order tensor powers more efficiently. This was quickly surpassed in 2024 by Josh Alman, Ran Duan, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou, who achieved ω < 2.371339 by exploiting asymmetries in rectangular matrix multiplication and eliminating hidden inefficiencies in prior decompositions. These algorithms, while theoretically groundbreaking, remain impractical for moderate n due to high constant factors and intricate implementations; in practice, optimized O(n^3) variants like those in BLAS libraries dominate.
Despite these upper bounds approaching 2, the lower bound on ω remains stubbornly at 2, established trivially since matrix multiplication requires at least Ω(n^2) operations to read the inputs and write the output, with no non-trivial improvements proven in over 50 years. Conjectures suggest ω = 2, implying a nearly quadratic-time algorithm exists, but barriers such as the lack of progress on the 3 × 3 case (where the minimal number of multiplications is unknown, bounded between 19 and 23) persist. Related problems, such as rectangular matrix multiplication (for m × n and n × p matrices) and its connections to the all-pairs shortest paths problem, further illuminate the challenges, with exponents like the dual exponent α (the largest value such that an n × n^α matrix can be multiplied by an n^α × n matrix in n^{2+o(1)} time) also seeing recent advances to α ≥ 0.321334.
The field remains active, with machine learning-aided discoveries and geometric methods promising further insights into this enduring open question.

Basic Algorithms

Standard Matrix Multiplication

The standard matrix multiplication algorithm, commonly known as the schoolbook method, computes the product C = AB of two n \times n matrices A and B by determining each entry c_{ij} of the resulting matrix C as the dot product of the i-th row of A and the j-th column of B, given by c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}. This approach directly implements the definition of matrix multiplication, where each of the n^2 entries in C requires n pairwise scalar multiplications followed by n-1 additions to sum the products. The time complexity of this algorithm is O(n^3) arithmetic operations, consisting of exactly n^3 scalar multiplications and n^3 - n^2 scalar additions, assuming constant-time operations on the matrix elements. This cubic dependence arises because the three nested loops—over rows of A, columns of B, and the summation index—each iterate n times. In terms of space complexity, the algorithm requires O(n^2) space to store the input matrices A and B as well as the output matrix C, with only O(1) additional space for temporary variables during computation. To illustrate, consider multiplying two 2 \times 2 matrices: A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}. The entry c_{11} is computed as a_{11} b_{11} + a_{12} b_{21}, requiring 2 multiplications and 1 addition. Similarly, c_{12} = a_{11} b_{12} + a_{12} b_{22}, c_{21} = a_{21} b_{11} + a_{22} b_{21}, and c_{22} = a_{21} b_{12} + a_{22} b_{22}, for a total of 8 multiplications and 4 additions, consistent with the general formula for n=2. Pseudocode for the general case is as follows:
for i from 1 to n:
    for j from 1 to n:
        c_ij = 0
        for k from 1 to n:
            c_ij = c_ij + a_ik * b_kj
This method forms the foundational approach taught in introductory linear algebra courses and traces its origins to the mid-19th century, when Arthur Cayley formalized matrix multiplication in 1855–1858 as a means to represent the composition of linear transformations.
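The pseudocode translates directly into executable form. The following Python sketch (the function name and the list-of-lists representation are illustrative choices, not prescribed by any source) implements the same triple loop and reproduces the 2 × 2 example above:
def schoolbook_multiply(A, B):
    # Standard O(n^3) schoolbook multiplication of two n x n matrices,
    # given as lists of lists: n^3 scalar multiplications in total.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # rows of A
        for j in range(n):      # columns of B
            for k in range(n):  # summation index
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(schoolbook_multiply(A, B))  # [[19, 22], [43, 50]]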

Strassen's Algorithm

Strassen's algorithm, developed by Volker Strassen in 1969, marked the first sub-cubic time algorithm for matrix multiplication, reducing the computational complexity from the standard O(n^3) to O(n^{\log_2 7}) \approx O(n^{2.807}) scalar multiplications for n \times n matrices over a ring. This breakthrough demonstrated that Gaussian elimination, which relies on O(n^3) operations, is not optimal for solving systems of linear equations, as faster matrix multiplication directly accelerates such problems. The algorithm uses a divide-and-conquer paradigm, recursively partitioning each n \times n matrix A and B (with n a power of 2) into four (n/2) \times (n/2) blocks: A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} and similarly for B. Rather than performing the eight block multiplications required by the conventional approach, it computes only seven intermediate products M_1 through M_7, using additional block additions and subtractions to reconstruct the blocks C_{ij} of the product C = AB. For the base case of 2 \times 2 matrices with entries a_{ij} and b_{ij}, the products are: \begin{align*} M_1 &= (a_{11} + a_{22})(b_{11} + b_{22}), \\ M_2 &= (a_{21} + a_{22}) b_{11}, \\ M_3 &= a_{11} (b_{12} - b_{22}), \\ M_4 &= a_{22} (b_{21} - b_{11}), \\ M_5 &= (a_{11} + a_{12}) b_{22}, \\ M_6 &= (a_{21} - a_{11}) (b_{11} + b_{12}), \\ M_7 &= (a_{12} - a_{22}) (b_{21} + b_{22}). \end{align*} The output blocks are then formed as c_{11} = M_1 + M_4 - M_5 + M_7, c_{12} = M_3 + M_5, c_{21} = M_2 + M_4, and c_{22} = M_1 - M_2 + M_3 + M_6. This scheme requires 18 additions/subtractions per recursion level, compared to the 4 additions (alongside 8 multiplications) of the standard block method.
The time complexity follows the recurrence T(n) = 7 T(n/2) + \Theta(n^2), where the \Theta(n^2) term accounts for the additions at each level; solving this via the master theorem yields T(n) = \Theta(n^{\log_2 7}). In practice, the recursive overhead and higher constant factors from the additions make the algorithm slower than the standard O(n^3) method for small n, with benefits emerging only for matrices larger than a few hundred dimensions, such as n \gtrapprox 500 in optimized implementations. Strassen-like methods extend this idea to larger block sizes, such as 3 \times 3 partitions, which compute the product using 23 multiplications instead of the naive 27, potentially reducing constants for certain practical scenarios despite not improving the asymptotic exponent.
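A compact recursive implementation makes the scheme concrete. The following Python sketch (using numpy for block arithmetic; the power-of-two assumption and the function name are illustrative) applies the seven-product formulas at every level and falls back to scalar multiplication at the base case:
import numpy as np

def strassen(A, B):
    # Strassen's algorithm for n x n matrices with n a power of 2.
    n = A.shape[0]
    if n == 1:
        return A * B  # base case: a single scalar multiplication
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
assert np.array_equal(strassen(A, B), A @ B)
In a production setting the recursion would stop at a tuned cutoff (for example a few hundred rows) and switch to an optimized classical kernel, since the extra additions dominate for small blocks.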

The Matrix Multiplication Exponent

Definition and Historical Development

The matrix multiplication exponent, denoted \omega, is defined as the infimum of the real numbers \alpha such that the product of two n \times n matrices over an arbitrary field (such as the real or complex numbers) can be computed using O(n^\alpha) arithmetic operations in the limit as n \to \infty. This measure captures the asymptotic arithmetic complexity of the problem, focusing on the exponent rather than leading constants or lower-order terms. Trivially, \omega \geq 2, since the output is an n \times n matrix with n^2 entries, each of which requires at least a constant number of operations in the worst case, with a more rigorous argument coming from the fact that the rank of the matrix multiplication tensor is at least n^2. The classical O(n^3) algorithm for matrix multiplication, based on the straightforward summation over indices, immediately establishes the upper bound \omega \leq 3.
The historical development of bounds on \omega began in earnest with Volker Strassen's seminal 1969 paper, which introduced the first sub-cubic algorithm by showing that two 2 \times 2 matrices could be multiplied using only 7 scalar multiplications instead of 8, leading to the recursive upper bound \omega \leq \log_2 7 \approx 2.807. This breakthrough shifted the focus to algebraic techniques, viewing matrix multiplication as the evaluation of a trilinear form or, equivalently, the decomposition of the matrix multiplication tensor \langle n, n, n \rangle. In the 1970s, researchers like Winograd, Pan, and Schönhage built on this by seeking low-rank tensor decompositions for larger block sizes, gradually improving the upper bounds through manual and semi-automated searches for efficient bilinear algorithms. These algebraic methods established the tensor rank as a key tool for bounding \omega, since the minimal number of multiplications needed relates directly to the rank of the tensor, and asymptotic extensions allow recursive constructions for arbitrary n.
A major advance came in 1986 with the introduction of the "laser method" by Volker Strassen, which systematically analyzes powers of structured tensors to eliminate redundant computations. Don Coppersmith and Shmuel Winograd applied and refined this method in their 1987 work, yielding \omega \leq 2.376. Their follow-up 1990 work refined this using arithmetic progressions to construct improved rectangular matrix multiplications, implying the tighter square bound \omega \leq 2.3755. Subsequent improvements were sporadic until the 2010s, when refined laser techniques and automated searches produced incremental gains, such as \omega \leq 2.3737 by Andrew Stothers in 2010 and \omega \leq 2.3728639 by François Le Gall in 2014. Further refinements include \omega < 2.37286 by Josh Alman and Virginia Vassilevska Williams in 2020. The 2023 work by Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou achieved \omega \leq 2.371552. The most recent milestone is the 2024 paper by Alman, Duan, Williams, Xu, Xu, and Zhou, which exploits greater asymmetry in tensor analyses to achieve \omega < 2.371339.
Year    Authors                   Upper Bound on \omega
1969    Strassen                  2.807
1981    Schönhage                 2.522
1987    Coppersmith & Winograd    2.376
1990    Coppersmith & Winograd    2.3755
2010    Stothers                  2.3737
2014    Le Gall                   2.3728639
2020    Alman & Williams          2.37286
2023    Williams et al.           2.371552
2024    Alman et al.              2.371339
As of 2025, no major breakthroughs have surpassed the 2024 bound, and \omega is widely conjectured to equal 2—matching the lower bound—though this remains unproven. The pursuit of tighter bounds on \omega has profound implications beyond direct matrix operations, as faster multiplication accelerates algorithms in graph theory (e.g., all-pairs shortest paths in O(n^\omega) time), linear programming, and machine learning tasks like transformer models that rely on repeated matrix products.

Upper Bounds and Algorithmic Improvements

Following Strassen's breakthrough, subsequent improvements to the upper bound on the matrix multiplication exponent ω have relied on a combination of algebraic, combinatorial, and geometric techniques to construct faster algorithms. Algebraic approaches, such as bilinear algorithms obtained from tensor decompositions, seek low-rank representations of the matrix multiplication tensor to enable recursive algorithms with reduced complexity. Combinatorial methods, exemplified by automated searches over Coppersmith-Winograd-style constructions, systematically search for structured decompositions that minimize the number of multiplications needed. Geometric perspectives, including analyses of border rank and asymptotic tensor rank, provide tools to bound the efficiency of these decompositions by considering symmetries and overlaps in high-dimensional spaces.
In the 1980s, variants of the Coppersmith-Winograd algorithm advanced the state-of-the-art using algebraic constructions based on arithmetic progressions and tensor powers, achieving ω < 2.376. This marked a significant refinement over Strassen's ω < 2.807, by exploiting higher-degree extensions of the bilinear map for matrix multiplication to reduce the exponent through recursive application. Building on this foundation, Virginia Vassilevska Williams introduced an automated combinatorial framework in 2012 that enumerates potential tensor decompositions, yielding ω < 2.3729. Further progress came from refinements to the laser method, originally proposed by Strassen for detecting low-rank subtensors. In 2020, Josh Alman and Virginia Vassilevska Williams refined the method's analysis of powers of the Coppersmith-Winograd tensor, achieving ω < 2.37286. In 2022, Ran Duan, Hongxun Wu, and Renfei Zhou introduced asymmetric hashing and improved marginal distribution analyses to mitigate "combination loss"—the inefficiency arising from unintended overlaps in tensor slices—further lowering the bound to ω < 2.371866. The laser method operates by identifying witness tensors that certify low-rank decompositions; it labels overlapping blocks as "garbage" for elimination while preserving valuable computations, with refinements enhancing the recovery of signal from noise in large tensor powers. In 2022, the AlphaTensor project by Fawzi et al. used reinforcement learning to discover novel small-scale tensor decompositions for matrix multiplication, improving algorithms for fixed small dimensions but not directly advancing the asymptotic exponent ω.
Recent advancements have focused on eliminating hidden inefficiencies in the laser method's hashing and combination steps. In 2023, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou developed a variant incorporating alpha-to-omega connections, improving asymmetry in tensor constructions to achieve ω ≤ 2.371552. In 2024, Alman, Duan, Williams, Yinzhan Xu, Zixuan Xu, and Zhou further refined the approach with increased asymmetry in hashing, yielding the current best upper bound of ω < 2.371339 through more precise control over decomposition ranks in asymmetric settings. These recent breakthroughs eliminated subtle losses in prior methods by optimizing witness tensor selections and reducing overhead in recursive constructions. Despite these theoretical gains, practical implementations of these algorithms suffer from large hidden constants and intricate recursion depths, preventing real-world speedups over classical methods like Strassen's for matrices up to thousands in dimension. For instance, the constant factors in laser-based algorithms grow exponentially with recursion levels, making them slower than O(n^3) implementations on modern hardware for typical sizes encountered in machine learning or scientific computing.
Open challenges persist in closing the gap to ω = 2, which would imply that two n × n matrices can be multiplied in time nearly linear in the n² size of the input, with profound implications for algebraic complexity classes. While some researchers consider ω = 2 plausible, perhaps first over finite fields—where modular arithmetic might enable tighter decompositions—no such bound has been realized, and bridging the theoretical-practical divide remains a key barrier.

Theoretical Frameworks

Algebraic Reformulations

Matrix multiplication can be reformulated algebraically as a trilinear form on three matrices A, B, C \in \mathbb{C}^{n \times n}, defined by \langle A, B, C \rangle = \trace(A B C^T) = \sum_{i,j,k} A_{ij} B_{jk} C_{ik}, which equals the Frobenius inner product \langle A B, C \rangle_F of the product matrix A B and C. This perspective views the operation as a linear functional on the tensor product \mathbb{C}^{n^2} \otimes \mathbb{C}^{n^2} \otimes \mathbb{C}^{n^2}, encapsulated by the matrix multiplication tensor M\langle n \rangle \in \mathbb{C}^{n^2} \otimes \mathbb{C}^{n^2} \otimes \mathbb{C}^{n^2}, where the tensor's structure encodes the bilinear pairings between input matrices to produce the output. Such a representation shifts the focus from direct algorithmic steps to the intrinsic algebraic complexity of decomposing this tensor. The computational complexity is intimately tied to the tensor rank of M\langle n \rangle, defined as the smallest integer r such that the tensor decomposes as a sum of r rank-1 tensors, each of the form u \otimes v \otimes w for vectors u, v, w \in \mathbb{C}^{n^2}. A decomposition of rank r corresponds to a bilinear algorithm computing the multiplication using r scalar multiplications, establishing an upper bound on the arithmetic complexity. The matrix multiplication exponent \omega is then given by \omega = \inf \{ \tau \mid \rank(M\langle n \rangle) = O(n^\tau) \}, where the infimum is over real \tau \geq 2, linking the asymptotic growth of the tensor rank directly to the exponent. Over algebraically closed fields like \mathbb{C}, this rank measures the minimal number of bilinear products in algorithms for the map. Distinguishing between exact rank and border rank is crucial for upper bounds, as the latter allows approximations via limits of rank-r tensors, providing a more flexible measure for asymptotic analysis. Border rank never exceeds the rank and, for tensors of order three, can be strictly smaller; it permits schemes whose intermediate computations only approach the exact tensor, yet these still yield valid bounds on \omega in the limit. For instance, Strassen's algorithm achieves an exact rank-7 decomposition for the 2 \times 2 case, M\langle 2 \rangle, by expressing the four entries of the product through seven products of linear combinations of the inputs, reducing the multiplications from eight to seven. This decomposition is a slice of the full tensor, where the 2 \times 2 multiplication embeds into larger instances, enabling recursive generalization to arbitrary n via block partitioning. A group-theoretic reformulation leverages representation theory of the general linear group GL(n, \mathbb{C}), under which M\langle n \rangle transforms as the tensor product of the standard representation and its duals. Algorithms correspond to decompositions into rank-r factors invariant under the subgroup \Gamma \leq GL(n)^6 acting diagonally on inputs and outputs, with the symmetry group \Gamma \cong S_3 \rtimes \mathbb{Z}_2^3 for the n \times n case facilitating equivariant analyses. Such decompositions exploit the irreducible representations of GL(n), labeled by Young diagrams, to bound the multiplicity of components in the tensor's expansion, thereby constraining possible ranks. These algebraic frameworks enable the application of tools from algebraic geometry, such as secant varieties \sigma_r(V)—the closures of spans of r points on the Segre variety V of rank-1 tensors—to study the dimension and membership of M\langle n \rangle in \sigma_r, providing both upper and structural bounds on tensor ranks.
For example, representation-theoretic decompositions of ideals in coordinate rings yield lower bounds on ranks by identifying vanishing multiplicities of invariants. This geometric lens has been instrumental in refining estimates for \omega, bridging pure algebra with computational bounds.
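To make the rank-7 decomposition of M\langle 2 \rangle concrete, the following numpy sketch (an illustrative verification, not drawn from the cited literature) constructs the 4 × 4 × 4 matrix multiplication tensor, lists the seven rank-1 terms u \otimes v \otimes w read off from Strassen's formulas, and checks that their sum reproduces the tensor exactly:
import itertools
import numpy as np

# M<2>: entry (a_idx, b_idx, c_idx) is 1 exactly when a_ij * b_jk contributes
# to c_ik; indices of 2 x 2 matrices are flattened as 2*i + j.
T = np.zeros((4, 4, 4), dtype=int)
for i, j, k in itertools.product(range(2), repeat=3):
    T[2 * i + j, 2 * j + k, 2 * i + k] = 1

# Row r encodes Strassen's M_{r+1}: U gives the combination of A entries,
# V the combination of B entries, and W the coefficient of M_{r+1} in each
# output entry (entry order a11, a12, a21, a22, and likewise for B and C).
U = np.array([[1,0,0,1],[0,0,1,1],[1,0,0,0],[0,0,0,1],[1,1,0,0],[-1,0,1,0],[0,1,0,-1]])
V = np.array([[1,0,0,1],[1,0,0,0],[0,1,0,-1],[-1,0,1,0],[0,0,0,1],[1,1,0,0],[0,0,1,1]])
W = np.array([[1,0,0,1],[0,0,1,-1],[0,1,0,1],[1,0,1,0],[-1,1,0,0],[0,0,0,1],[1,0,0,0]])

decomposition = sum(np.einsum('a,b,c->abc', U[r], V[r], W[r]) for r in range(7))
assert np.array_equal(decomposition, T)  # certifies rank(M<2>) <= 7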

Lower Bounds

The matrix multiplication exponent \omega satisfies \omega \geq 2, as the output matrix consists of n^2 entries that are linearly independent over the field in general position, each requiring at least one arithmetic operation, thereby necessitating \Omega(n^2) scalar multiplications and additions in total. In the unrestricted model of algebraic circuits, the strongest known lower bounds for the tensor rank (corresponding to the number of scalar multiplications) remain quadratic with leading coefficient 3, specifically at least 3n^2 - 2\sqrt{2} n^{3/2} - 3n for multiplying n \times n matrices over algebraically closed fields of characteristic zero.
In restricted models, nonlinear lower bounds exceeding the trivial \Omega(n^2) have been established. For arithmetic circuits with bounded coefficients (constants of absolute value at most a fixed integer) over the reals or complexes, Ran Raz proved that computing the n \times n matrix product demands \Omega(n^2 \log n) operations, using a geometric variant of rigidity and properties of bilinear circuits. Similarly, over finite fields, Amir Shpilka demonstrated lower bounds on the number of product gates in bilinear circuits, requiring \Omega(n^{2} \log n / \log \log n) such gates in some cases, while Raz and Shpilka extended this to superlinear edge counts, \Omega(n^{1+\epsilon}) for any \epsilon > 0, in constant-depth circuits with arbitrary gates. These results show costs above the trivial quadratic bound in depth-bounded or gate-restricted settings but do not extend to general circuits.
Communication complexity techniques provide additional insights into limitations, particularly in distributed and parallel models. For distributed-memory computations, the amount of data transferred between processors to perform n \times n matrix multiplication is at least \Omega(n^2 / \sqrt{M}) words, where M is the local memory size per processor; this bound, derived via Loomis-Whitney inequalities on computation graphs, holds regardless of the arithmetic exponent, so even a quadratic-time algorithm would face the same communication costs, making data movement rather than arithmetic the dominant barrier in such settings.
Geometric complexity theory (GCT) offers a representation-theoretic framework for proving lower bounds on \omega, by comparing invariants of the matrix multiplication tensor to those of simpler polynomials like the determinant. Introduced by Ketan Mulmuley and Milind Sohoni, GCT embeds algebraic complexity into orbit closures under group actions, enabling proofs via occurrence obstructions and symmetry. Landsberg, Michalek, and others applied GCT to establish explicit border rank lower bounds for the matrix multiplication tensor, such as \underline{R}(M_m) \geq \frac{3}{2} m^2 - 2 for m \times m multiplication, exceeding the trivial rank m^2 by a constant factor using plethysm coefficients and Young diagram multiplicities. However, these yield only mildly super-quadratic tensor ranks for fixed m and do not produce asymptotic bounds of the form \omega \geq 2 + \epsilon for any \epsilon > 0. Over finite fields, the exponent \omega generally aligns with characteristic-zero cases for large fields, but field-dependent phenomena arise in small characteristic, where certain Strassen-like algorithms achieve additional savings under modular constraints without divisions, though no general algebraic algorithm reaches \omega = 2. Proving \omega > 2 in the general algebraic model remains elusive due to fundamental barriers linking it to broader complexity separations.
Such a proof would require super-quadratic lower bounds for general algebraic circuits, a capability closely tied to longstanding goals such as separating the complexity classes VP (families of polynomials computable by polynomial-size arithmetic circuits) from VNP (their nondeterministic analogue, containing the permanent), the algebraic analogue of P versus NP; GCT targets such separations via representation theory but faces challenges in computing invariants for high-degree symmetries and handling approximation by borders. As of 2025, no super-quadratic lower bounds exist for unrestricted algebraic circuits computing matrix multiplication, with ongoing efforts in GCT and border rank yielding only incremental improvements over the quadratic threshold.

Variants and Extensions

Rectangular Matrix Multiplication

Rectangular matrix multiplication involves computing the product of an m \times p matrix and a p \times n matrix, resulting in an m \times n output matrix. The standard schoolbook algorithm performs this operation using O(m n p) arithmetic operations by summing over the shared dimension of size p for each of the m n output entries. Subcubic algorithms extend techniques from square matrix multiplication, such as Strassen's method, to rectangular cases where the dimensions differ significantly. These methods aim to reduce the exponent in the running time, often parameterized by the infimum \alpha(m, n, p) such that the multiplication requires O(\max(m, n, p)^{\alpha(m, n, p)}) operations, assuming the dimensions are scaled appropriately relative to the largest one. When m = n = p, this reduces to the square matrix multiplication exponent \omega, and improvements in rectangular bounds can imply better upper bounds for \omega through blocking strategies that divide square matrices into rectangular blocks. A seminal result is due to Don Coppersmith, who developed an algorithm for multiplying n \times n^k and n^k \times n matrices in O(n^2 (\log n)^2) scalar multiplications for specific small k, achieving subcubic complexity for unbalanced rectangular cases where the inner dimension p = n^\beta with \beta < 1. For instance, when \beta \approx 0.172, the time is O(n^{2 + o(1)}), improving over the schoolbook O(n^{2 + \beta}) and marking the first such subcubic rectangular algorithm. Such algorithms find applications in data analysis, particularly with rectangular matrices common in least squares problems, where forming the normal equations involves multiplying a tall thin design matrix by its transpose, benefiting from reduced complexity in unbalanced cases like m \gg n. Recent advancements in the 2020s leverage refined versions of the laser method to obtain faster algorithms for highly unbalanced rectangular multiplication, such as when m \ll n. For example, a 2024 refinement introduces greater asymmetry in tensor constructions, yielding improved exponents for rectangular shapes and enabling running times asymptotically better than prior methods for cases where one dimension is much smaller than the others.
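As a hedged illustration of the blocking idea mentioned above (the function name and panel width are arbitrary choices, and numpy's @ operator stands in for whatever rectangular routine is available), the following Python sketch expresses a square n × n product as a sum of n/p rectangular products of shape (n × p) times (p × n):
import numpy as np

def square_via_rectangular(A, B, p):
    # Compute the n x n product A @ B as a sum of n // p rectangular
    # products of an n x p panel of A with a p x n panel of B.
    n = A.shape[0]
    assert n % p == 0
    C = np.zeros((n, n))
    for k in range(0, n, p):
        # Each term is an (n x p) by (p x n) multiplication; a fast
        # rectangular algorithm could be substituted here.
        C += A[:, k:k + p] @ B[k:k + p, :]
    return C

A, B = np.random.rand(8, 8), np.random.rand(8, 8)
assert np.allclose(square_via_rectangular(A, B, 2), A @ B)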

Fast Matrix Multiplication in Other Contexts

Fast matrix multiplication algorithms, including Strassen's method and subsequent improvements like Coppersmith-Winograd variants, apply directly over finite fields, achieving the same asymptotic complexity exponent ω ≤ 2.372 as over the real or complex numbers, since these algorithms rely on tensor decompositions and algebraic identities valid in any field of characteristic zero or sufficiently large characteristic. In fields supporting fast Fourier transforms (FFT), such as those with primitive roots of unity, structured matrix multiplications can leverage FFT for convolution-based reductions, but the general dense case retains the algebraic exponent without achieving ω = 2. For general finite fields without such structure, the complexity remains governed by the algebraic upper bounds, though numerical stability concerns are absent. Specific optimizations for small finite fields, like GF(2), exploit bit-level operations where addition is XOR and multiplication is AND; here, hierarchical Strassen-like algorithms for dense matrices over GF(2) achieve practical speedups by minimizing gate operations, with implementations running in O(n^ω) time adapted to bit-packed representations (a toy bit-packing sketch appears at the end of this subsection).
For integer matrices, exact computation requires handling large intermediate values to avoid overflow, often addressed by performing multiplications modulo several primes and reconstructing the result using the Chinese remainder theorem, which preserves the O(n^ω) asymptotic complexity while bounding bit lengths. Strassen's algorithm and its variants are adapted for exact integer arithmetic by recursively applying the decomposition in modular rings, enabling faster computation for large integers compared to naive methods, though constant factors and modular overhead must be managed for practicality. These approaches are particularly useful in cryptographic applications or exact linear algebra, where floating-point approximations are unsuitable.
In the Boolean semiring, where addition is logical OR and multiplication is AND, matrix multiplication computes the transitive closure of a directed graph and is closely related to the all-pairs shortest paths (APSP) problem in the (min, +) semiring via combinatorial reductions. The complexity exponent for Boolean matrix multiplication, denoted ω_B, matches the algebraic ω ≤ 2.372 through embeddings into numeric fields, allowing the use of fast algebraic algorithms, though purely combinatorial methods remain essentially cubic, saving only polylogarithmic factors. Recent improvements in the algebraic exponent directly lower ω_B, impacting applications like graph connectivity and pattern matching.
In parallel and distributed models, such as the PRAM, fast matrix multiplication achieves parallel time complexity O(log n) using O(n^ω / log n) processors via recursive decomposition, yielding total work O(n^ω) matching the sequential bound. In distributed-memory settings, communication-optimal algorithms such as 2.5D matrix multiplication minimize bandwidth to O(n^ω / √M) per processor, where M is local memory, but communication lower bounds from geometric inequalities hold independently of the arithmetic exponent, so data movement rather than arithmetic dominates dense computation at scale. Quantum algorithms for matrix multiplication offer no asymptotic speedup beyond the classical O(n^ω) for computing the full dense product over fields, with quantum query complexity matching the classical Θ(n^2) in the algebraic setting.
However, Grover's search provides speedups for unstructured subproblems such as product verification or sparse instances, potentially accelerating hybrid routines, though dense output computation remains bottlenecked by classical limits. In 2025, advancements have focused on practical matrix multiplication, including enhancements to the Ozaki scheme that enable high-accuracy computation using low-precision arithmetic units on modern hardware, achieving up to 1.5× speedups over prior methods in optimized libraries supporting BLAS-like operations. These improvements emphasize reduced rounding errors and better exploitation of modern architectures for exact matrix multiplication in scientific computing.
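Returning to the GF(2) optimizations described earlier, the following Python sketch (a toy illustration with invented helper names, far simpler than the cited library implementations) packs each 0/1 row into a machine integer so that accumulating a row of the product reduces to whole-row XORs:
def pack_rows(M):
    # Pack each 0/1 row into one integer: bit j of row i is set iff M[i][j] == 1.
    return [sum(bit << j for j, bit in enumerate(row)) for row in M]

def gf2_multiply(A, B):
    # Multiply 0/1 matrices over GF(2): addition is XOR, multiplication is AND.
    # Row i of the product is the XOR of the rows of B selected by the 1-bits
    # in row i of A.
    n = len(B[0])
    B_packed = pack_rows(B)
    C_packed = []
    for row in A:
        acc = 0
        for k, bit in enumerate(row):
            if bit:                 # the AND with a 0/1 entry selects or skips a row
                acc ^= B_packed[k]  # XOR accumulates the GF(2) sum
        C_packed.append(acc)
    return [[(r >> j) & 1 for j in range(n)] for r in C_packed]

A = [[1, 0, 1], [0, 1, 1]]
B = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(gf2_multiply(A, B))  # [[0, 1, 1], [1, 1, 0]]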

Connections to Linear Algebra Operations

The computational complexity of matrix multiplication directly influences several core linear algebra operations, as these tasks can often be reformulated or accelerated using fast matrix multiplication algorithms, achieving the asymptotic bound of O(n^\omega) arithmetic operations, where \omega \leq 2.371339 as of 2024 is the current best-known upper bound on the matrix multiplication exponent. This equivalence arises because operations like inversion, Gaussian elimination, and determinant computation involve recursive block decompositions where updates rely on matrix products, allowing substitution of faster multiplication routines without increasing the overall exponent.
Matrix inversion is computationally equivalent to matrix multiplication in algebraic complexity models, enabling inversion of an n \times n nonsingular matrix in O(n^\omega) operations. A foundational algorithm by Bunch and Hopcroft employs recursive triangular factorization, where block updates via fast matrix multiplication compute the Schur complements and yield both the factorization and the inverse simultaneously. An alternative approach uses the Cayley-Hamilton theorem, which states that a square matrix satisfies its own characteristic polynomial, allowing the inverse to be expressed as a linear combination of matrix powers up to degree n-1; these powers are computed recursively with fast matrix multiplication, again in O(n^\omega) time up to logarithmic factors. The Bunch-Hopcroft method exemplifies these connections, as its block-wise elimination leverages Schur complements—defined as S = A_{22} - A_{21}A_{11}^{-1}A_{12}—which require inverting smaller blocks and performing multiplications that scale with \omega.
Gaussian elimination, the standard method for solving linear systems or computing factorizations, traditionally requires O(n^3) operations due to successive row reductions. However, fast variants accelerate this by partitioning the matrix into blocks and using fast matrix multiplication for efficient updates of trailing submatrices, particularly the Schur complements in each elimination step, reducing the complexity to O(n^\omega). This block-recursive structure mirrors that of fast multiplication algorithms, ensuring that improvements in \omega propagate directly to elimination without additional overhead.
Determinant computation, essential for assessing matrix invertibility and solving systems, also achieves O(n^\omega) complexity through methods that embed the problem into matrix products. Techniques such as dynamic programming over the minors of the matrix or interpolation of the characteristic polynomial reduce the task to a sequence of multiplications and additions, with the exponent determined by the multiplication bottleneck; once a triangular factorization is available, the determinant is simply the product of the diagonal entries of the factors.
These reductions have significant implications for solving linear systems Ax = b, where computing the inverse or an LU factorization in O(n^\omega) followed by forward and back substitution in O(n^2) yields an overall cost of O(n^\omega), far surpassing the classical O(n^3) for large n. Such efficiencies underpin numerical libraries and scientific applications reliant on linear algebra. The foundational links between matrix multiplication and these operations were established in the 1970s, building on Strassen's breakthrough, with pivotal contributions from researchers like Shmuel Winograd who analyzed the algebraic equivalences in computational models over arbitrary fields.
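The Schur-complement recursion described above can be sketched directly. The following Python example (a minimal illustration assuming the leading block and every Schur complement are invertible, as for symmetric positive definite inputs; multiply() is a placeholder for whatever fast multiplication routine is available) inverts a matrix using only block multiplications and recursive calls on half-size blocks:
import numpy as np

def multiply(X, Y):
    # Placeholder for a fast O(n^omega) multiplication routine.
    return X @ Y

def block_inverse(A):
    # Invert A recursively via the Schur complement S = A22 - A21 A11^{-1} A12,
    # assuming n is a power of 2 and that A11 and S are invertible.
    n = A.shape[0]
    if n == 1:
        return np.array([[1.0 / A[0, 0]]])
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    A11_inv = block_inverse(A11)
    S = A22 - multiply(A21, multiply(A11_inv, A12))   # Schur complement
    S_inv = block_inverse(S)
    T = multiply(A11_inv, A12)                        # A11^{-1} A12
    U = multiply(A21, A11_inv)                        # A21 A11^{-1}
    return np.block([
        [A11_inv + multiply(T, multiply(S_inv, U)), -multiply(T, S_inv)],
        [-multiply(S_inv, U), S_inv],
    ])

A = np.random.rand(8, 8)
A = A @ A.T + 8 * np.eye(8)   # well-conditioned symmetric positive definite
assert np.allclose(block_inverse(A) @ A, np.eye(8))
Because every step is either a half-size inversion or a constant number of block multiplications, the cost recurrence matches that of the multiplication routine, which is the source of the O(n^\omega) bound cited above.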

Arithmetic Circuit Complexity

Arithmetic circuit complexity addresses the minimal number of multiplication operations required to compute the product of two n \times n matrices over a field, treating additions as cost-free. An arithmetic circuit is modeled as a directed acyclic graph whose nodes are either input variables (entries of the input matrices A and B), constants from the field, or gates performing addition or multiplication, with the circuit size defined as the number of multiplication gates. The outputs correspond to the entries of the product matrix C = AB, and the goal is to minimize this size while correctly computing the bilinear form underlying matrix multiplication.
The problem of finding the circuit with the fewest multiplications for n \times n matrix multiplication has been studied extensively for small n, with asymptotic implications for larger sizes. For 2 \times 2 matrices, Strassen showed in 1969 that 7 multiplications suffice, and Winograd subsequently proved that 7 are also necessary, a foundational result for subsequent developments. Extending such ideas recursively, divide-and-conquer constructions analogous to fast integer multiplication can achieve O(n^2 \log n) multiplications for certain rectangular cases, such as n \times 2^n matrices, by decomposing the computation into fewer subproblems. However, for general square n \times n multiplication, the exact minimal number remains unknown beyond small n.
Lower bounds on the number of multiplications provide fundamental limits. A general lower bound of \Omega(n^2) follows from the fact that the output matrix has n^2 independent entries, each requiring at least one multiplication in the worst case. For n=3, Laderman demonstrated in 1976 an upper bound of 23 multiplications, with the best known lower bound being 19 multiplications, as established by Matthias Bläser, highlighting the gap for this case. These bounds underscore that while \Omega(n^2) is the only general lower bound, superquadratic growth is expected based on tensor rank considerations.
The matrix multiplication exponent \omega is precisely the infimum over real numbers c such that the minimal circuit size s(n) = O(n^c), meaning improvements in s(n) for small n can potentially lower \omega through recursive application, though the exact relation depends on the recursion structure and fan-out. In practice, such circuits find application in signal processing for multiplying by constant matrices, where the fixed structure allows precomputation of minimal-multiplication formulas; for instance, approximations of the discrete cosine transform (DCT) used in image compression can be realized with far fewer multiplications than the naive count by exploiting circuit optimizations tailored to the transform matrix. Despite progress for small n, including new decompositions found by computer search, the exact minimal number of multiplications for general n remains an open problem, with ongoing efforts focused on closing the gaps between the best known upper and lower bounds.
