
Matrix multiplication algorithm

Matrix multiplication algorithms are computational procedures for determining the product of two matrices A and B, where the entry in the i-th row and j-th column of the resulting matrix C is given by c_{ij} = \sum_{k=1}^n a_{ik} b_{kj}, with n denoting the dimension of the square matrices. The standard algorithm implements this formula using three nested loops, one for each index i, j, and k, yielding a cost of O(n^3) arithmetic operations for n \times n matrices. This operation forms a cornerstone of linear algebra and numerical computing, enabling solutions to systems of equations, eigenvalue computations, and geometric transformations across science and engineering. Its efficiency profoundly influences the performance of higher-level algorithms in machine learning, such as neural network training, and in graphics processing for rendering 3D scenes. Optimized libraries like BLAS and LAPACK incorporate tuned implementations of matrix multiplication to exploit hardware features, achieving near-peak performance on modern processors. Efforts to accelerate matrix multiplication have driven decades of research in algebraic complexity theory, aiming to surpass the cubic barrier through algebraic identities and recursive strategies. In 1969, Volker Strassen devised the first sub-cubic algorithm, which recursively partitions matrices into 2×2 blocks and performs only seven multiplications instead of eight, attaining a complexity of O(n^{\log_2 7}) \approx O(n^{2.807}). Building on this, the algorithm of Coppersmith and Winograd (1990) reduced the exponent to about 2.376, with further refinements by Vassilevska Williams (2012) pushing it to approximately 2.373 and subsequent work to about 2.371 as of 2024; however, these gains often involve impractically large constants and overheads at sizes of practical interest. Recent advances, including machine learning-based discovery of tensor decompositions, continue to explore ways to lower the exponent toward the theoretical minimum of 2 while balancing practicality.

Basic Sequential Algorithms

Naive Iterative Algorithm

The naive iterative algorithm computes the product C = AB of two n \times n matrices A and B by evaluating each entry C_{i,j} as the dot product of the i-th row of A and the j-th column of B, specifically C_{i,j} = \sum_{k=0}^{n-1} A_{i,k} \cdot B_{k,j}. This direct implementation follows the foundational definition of matrix multiplication, which represents the composition of linear transformations. The algorithm employs three nested loops to iterate over all indices i, j, and k. The pseudocode is as follows:
for i from 0 to n-1 do
    for j from 0 to n-1 do
        C[i][j] = 0
        for k from 0 to n-1 do
            C[i][j] = C[i][j] + A[i][k] * B[k][j]
        end for
    end for
end for
This structure ensures each of the n^2 entries in C is computed independently through n multiplications and n additions, leading to a time complexity of O(n^3) arithmetic operations. The space complexity is O(n^2), dominated by the storage required for the input and output matrices. To illustrate, consider the multiplication of two 2 \times 2 matrices A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} and B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}. The entries are C_{0,0} = 1 \cdot 5 + 2 \cdot 7 = 19, C_{0,1} = 1 \cdot 6 + 2 \cdot 8 = 22, C_{1,0} = 3 \cdot 5 + 4 \cdot 7 = 43, and C_{1,1} = 3 \cdot 6 + 4 \cdot 8 = 50, yielding C = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}, which matches the composition of the corresponding linear transformations. The concept of matrix multiplication underlying this algorithm dates back to the 19th century, with formalization by Arthur Cayley in 1858 to describe compositions of linear transformations, but the iterative implementation was standardized in computational contexts during the mid-20th century amid advances in numerical linear algebra for digital computers.
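To make the pseudocode concrete, the following is a minimal C sketch of the naive triple loop, using a flat row-major layout and the 2 × 2 example above as a correctness check; the function name matmul_naive and the indexing convention are illustrative choices rather than part of any standard interface:
#include <stdio.h>

/* Naive O(n^3) matrix multiplication: C = A * B for n x n matrices
 * stored in flat row-major order (element (i,j) at index i*n + j). */
static void matmul_naive(int n, const double *A, const double *B, double *C) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++) {
                sum += A[i * n + k] * B[k * n + j];
            }
            C[i * n + j] = sum;
        }
    }
}

int main(void) {
    /* The 2 x 2 example from the text: expected result {{19, 22}, {43, 50}}. */
    double A[] = {1, 2, 3, 4};
    double B[] = {5, 6, 7, 8};
    double C[4];
    matmul_naive(2, A, B, C);
    printf("%.0f %.0f\n%.0f %.0f\n", C[0], C[1], C[2], C[3]);
    return 0;
}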

Cache-Optimized Iterative Algorithms

The naive iterative algorithm for matrix multiplication exhibits poor spatial and temporal locality on modern processors with hierarchical caches. With the row-major storage convention of languages like C (Fortran, by contrast, stores arrays column-major, so the roles of A and B are swapped), the standard ijk loop ordering strides through columns of matrix B in the inner loop for each fixed i and j, causing non-contiguous accesses that evict frequently used data from the cache; this results in Θ(n³) cache misses for an n × n matrix multiplication, as each of the n² elements of the output C requires n scattered reads from B. Accesses to rows of A also see limited reuse, exacerbating compulsory and capacity misses across the L1 and L2 caches. To mitigate these issues, cache-optimized iterative algorithms employ blocking (or tiling), which partitions the input matrices A and B and the output C into smaller square blocks of size b × b, chosen such that the working set of three blocks fits within the target cache level (typically L1 or L2). The algorithm iterates over blocks of indices: outer loops over block rows of C (i-blocks) and block columns of C (j-blocks), with an inner loop over the k-blocks for the accumulation. Within each block triple, a standard three-nested loop computes the submatrix product, ensuring that the three b × b blocks reside in fast memory during the computation, thereby promoting reuse and reducing transfers to slower memory levels. This reduces data traffic from Θ(n³) words to roughly O(n³ / b): each of the (n/b)³ block combinations loads three b × b blocks, so every loaded element is reused across about b multiply-add operations, and choosing b proportional to √S, where S is the number of elements that fit in cache, yields O(n³ / √S) transfers. The following pseudocode illustrates the structure for square n × n matrices, assuming n is divisible by b for simplicity:
for (int ii = 0; ii < n; ii += b) {
    for (int jj = 0; jj < n; jj += b) {
        for (int kk = 0; kk < n; kk += b) {
            // Compute C[ii:ii+b, jj:jj+b] += A[ii:ii+b, kk:kk+b] * B[kk:kk+b, jj:jj+b]
            for (int i = ii; i < ii + b; i++) {
                for (int j = jj; j < jj + b; j++) {
                    double r = 0.0;  // Accumulator for partial row product
                    for (int k = kk; k < kk + b; k++) {
                        r += A[i][k] * B[k][j];
                    }
                    C[i][j] += r;
                }
            }
        }
    }
}
The optimal block size b is selected empirically or analytically as approximately √(S / e), where S is the cache size in bytes and e is the element size (e.g., 8 bytes for doubles); for a typical 32 KB L1 data cache this yields b ≈ 64, though guidelines recommend testing values between 32 and 128 for L1 and larger values (256–512) for L2 or L3 to balance loop overhead against reuse. On real systems, such blocking can reduce cache misses by factors of 10–100 depending on matrix size and hardware, leading to practical speedups of up to 10× over the unblocked naive algorithm for matrices larger than the cache. A key variant for further locality improvement is the ikj loop ordering, which reorders the loops to prioritize reuse of values from A. In this scheme, the outermost loop fixes i (the row of C and A), the middle loop fixes k (the column of A and row of B), and the innermost loop accumulates over j (the column of B and C); for each fixed i and k, the scalar A[i][k] is reused across all j while the inner loop walks contiguous elements of the current rows of B and C. Implemented within the blocked framework, ikj ordering maximizes locality on row-major storage and often yields an additional 2–4× speedup over ijk-ordered blocking, as demonstrated in high-performance BLAS implementations.
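As a concrete illustration of the ikj ordering discussed above, the following C sketch applies it to the unblocked case; the same reordering can be used inside each tile of the blocked code. The flat row-major layout and function name are illustrative assumptions:
/* ikj loop ordering: C += A * B for n x n row-major matrices.
 * C must be zero-initialized by the caller.
 * The scalar a = A[i][k] is reused across the whole inner loop,
 * while B and C are traversed contiguously along their rows. */
static void matmul_ikj(int n, const double *A, const double *B, double *C) {
    for (int i = 0; i < n; i++) {
        for (int k = 0; k < n; k++) {
            double a = A[i * n + k];
            for (int j = 0; j < n; j++) {
                C[i * n + j] += a * B[k * n + j];
            }
        }
    }
}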

Divide-and-Conquer Algorithms

Strassen's Algorithm

Strassen's algorithm, introduced by Volker Strassen in 1969, represents a seminal divide-and-conquer approach to matrix multiplication that achieves subcubic complexity, marking the first demonstration that the standard O(n³) bound is not optimal. Unlike the naive method, which requires eight multiplications for the product of 2×2 blocks, Strassen's method computes the same result using only seven multiplications through carefully chosen linear combinations of the input submatrices. This reduction enables recursive application to larger n×n matrices (where n is a power of 2), yielding significant asymptotic savings while introducing additional additions and subtractions. The algorithm partitions each n×n matrix A and B into four (n/2)×(n/2) submatrices: A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \quad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, and similarly for the output C. It then computes seven intermediate products recursively: \begin{align*} M_1 &= (A_{11} + A_{22})(B_{11} + B_{22}), \\ M_2 &= (A_{21} + A_{22}) B_{11}, \\ M_3 &= A_{11} (B_{12} - B_{22}), \\ M_4 &= A_{22} (B_{21} - B_{11}), \\ M_5 &= (A_{11} + A_{12}) B_{22}, \\ M_6 &= (A_{21} - A_{11}) (B_{11} + B_{12}), \\ M_7 &= (A_{12} - A_{22}) (B_{21} + B_{22}). \end{align*} These are combined via additions and subtractions to form the submatrices of C: \begin{align*} C_{11} &= M_1 + M_4 - M_5 + M_7, \\ C_{12} &= M_3 + M_5, \\ C_{21} &= M_2 + M_4, \\ C_{22} &= M_1 - M_2 + M_3 + M_6. \end{align*} The correctness of these formulas follows from expanding the expressions and verifying they match the standard block multiplication rules, with the seven multiplications capturing all necessary terms while the 18 additions/subtractions handle the coefficients. The recurrence for the running time, assuming matrix addition takes Θ(n²) time, is T(n) = 7 T(n/2) + Θ(n²). By the master theorem (or by unrolling the recursion tree), this yields T(n) = Θ(n^{\log_2 7}), where \log_2 7 ≈ 2.807, providing an asymptotic improvement over O(n³). In practice, the overhead from recursion and the extra additions means Strassen's algorithm outperforms the naive approach only for matrices larger than a certain threshold, typically around n = 64, depending on hardware and implementation. Despite its theoretical impact, Strassen's algorithm has limitations, including increased numerical instability from the additional additions and subtractions, which can amplify roundoff errors in floating-point arithmetic compared to the standard method. Additionally, the recursive overhead makes it inefficient for small n, where the base case (e.g., direct multiplication) dominates.
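As a concreteness check on the formulas above, the following C sketch evaluates the seven products at the scalar level (the 2 × 2 base case, where each submatrix is a single number) and compares the recombined result against the direct product; in the full algorithm the same arithmetic is applied to (n/2) × (n/2) blocks, with recursive calls replacing the scalar multiplications. The test values are arbitrary:
#include <stdio.h>

int main(void) {
    /* Arbitrary 2 x 2 inputs (scalars play the role of (n/2) x (n/2) blocks). */
    double a11 = 1, a12 = 2, a21 = 3, a22 = 4;
    double b11 = 5, b12 = 6, b21 = 7, b22 = 8;

    /* Strassen's seven products. */
    double m1 = (a11 + a22) * (b11 + b22);
    double m2 = (a21 + a22) * b11;
    double m3 = a11 * (b12 - b22);
    double m4 = a22 * (b21 - b11);
    double m5 = (a11 + a12) * b22;
    double m6 = (a21 - a11) * (b11 + b12);
    double m7 = (a12 - a22) * (b21 + b22);

    /* Recombination into the four output entries. */
    double c11 = m1 + m4 - m5 + m7;
    double c12 = m3 + m5;
    double c21 = m2 + m4;
    double c22 = m1 - m2 + m3 + m6;

    /* Direct 2 x 2 product for comparison: expected {{19, 22}, {43, 50}}. */
    printf("Strassen: %.0f %.0f %.0f %.0f\n", c11, c12, c21, c22);
    printf("Direct:   %.0f %.0f %.0f %.0f\n",
           a11 * b11 + a12 * b21, a11 * b12 + a12 * b22,
           a21 * b11 + a22 * b21, a21 * b12 + a22 * b22);
    return 0;
}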

Adaptations for Non-Square Matrices

Strassen's algorithm, while groundbreaking for square matrices, requires adaptations to handle the common case of multiplying an m × n matrix by an n × p matrix, where the dimensions may not be equal or powers of 2. A straightforward approach involves zero-padding the matrices to the smallest square size k ≥ max(m, n, p) that is a power of 2, allowing the recursive divide-and-conquer to proceed as in the square case. However, this padding incurs a computational cost of O(k^{\omega}), where \omega ≈ 2.371 is the best-known exponent as of 2024, potentially leading to overhead proportional to O(\max(m, n, p)^{\omega}) even when the dimensions suggest a lower naive cost of O(m n p). This inefficiency is particularly pronounced for highly unbalanced dimensions, such as when m ≪ n ≈ p. To mitigate full padding overhead, techniques like lazy or dynamic peeling recurse directly on rectangular submatrices without embedding them into larger squares, dividing each dimension independently and computing only the non-zero contributions. For balanced rectangular cases where m, n, and p are comparable and even, the recurrence for the time complexity T(m, n, p) follows T(m, n, p) = 7 T(m/2, n/2, p/2) + \Theta(m n + m p + n p), reflecting the seven recursive multiplications and the linear work for additions/subtractions across the three pairwise dimension products. Solving this yields T(m, n, p) = O((m n p)^{(\log_2 7)/3}), which coincides with the square case when m = n = p but adapts naturally to rectangles without artificial inflation from padding. When dimensions are unbalanced, such as m ≪ n, the recursion effectively reduces toward the classical O(m n^2) complexity by limiting recursion depth in the smaller dimension. Early adaptations focused on specific rectangular forms to minimize multiplications. The Hopcroft-Kerr algorithm (1971) provides an efficient method for multiplying p × 2 by 2 × n matrices using \lceil(3 p n + \max(n, p))/2\rceil multiplications, generalizing Strassen-like savings to these thin rectangular cases without commutativity assumptions and serving as a base case for broader recursions. Later generalizations, such as those by Bini et al. (1979), extended the theoretical framework by introducing approximate bilinear algorithms that lower the exponent \omega, enabling tighter bounds for rectangular products via reductions such as computing an m × n by n × p product in O(m p \, n^{\omega - 2}) time when n is the smallest dimension, and influencing subsequent rectangular optimizations. As an illustrative example, consider multiplying a 2 × 3 matrix A by a 3 × 4 matrix B to yield a 2 × 4 result C. Rather than padding both operands to 4 × 4, which under Strassen recursion would cost 49 scalar multiplications, split the inner dimension as 3 = 2 + 1: the 2 × 2 leading block of A times the 2 × 4 top block of B is computed as two 2 × 2 Strassen base cases (7 multiplications each), and the remaining 2 × 1 by 1 × 4 strip is handled naively (8 scalar multiplications). The total of 22 scalar multiplications is slightly below the naive 24 and far below the 49 implied by full padding, demonstrating selective recursion on viable subblocks.
These adaptations trade some simplicity for practicality: they preserve subcubic scaling on balanced rectangles (e.g., m ≈ n ≈ p/2), while extreme imbalances such as m = 1 revert to linear scans of cost O(n p), which is optimal but forgoes recursive benefits; nonetheless, they outperform naive methods for moderately rectangular inputs in practice.
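The cost trade-off described above can be illustrated with a rough C sketch that compares an estimated scalar-multiplication count for the naive method against Strassen applied to a zero-padded square of the next power-of-two size; the cost model ignores additions, peeling, and constant factors, so it is only a heuristic guide and not part of any published adaptation:
#include <math.h>
#include <stdio.h>

/* Smallest power of two >= x. */
static long next_pow2(long x) {
    long k = 1;
    while (k < x) k *= 2;
    return k;
}

int main(void) {
    long m = 2, n = 3, p = 4;              /* example dimensions from the text */
    long k = next_pow2(m > n ? (m > p ? m : p) : (n > p ? n : p));

    double naive_cost  = (double)m * n * p;          /* m*n*p scalar mults      */
    double padded_cost = pow((double)k, log2(7.0));  /* ~k^2.807 for Strassen   */

    printf("naive: %.0f mults, padded Strassen (k=%ld): %.0f mults -> use %s\n",
           naive_cost, k, padded_cost,
           naive_cost <= padded_cost ? "naive" : "padded Strassen");
    return 0;
}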

Advanced Sequential Algorithms

Subcubic Complexity Algorithms

Subcubic algorithms for matrix multiplication seek to improve upon Strassen's exponent of \omega = \log_2 7 \approx 2.807 by constructing more efficient recursive decompositions of the matrix multiplication tensor, achieving O(n^\omega) arithmetic complexity for multiplying n \times n matrices over a field, where \omega < 3. These methods build on algebraic techniques, such as bilinear maps and bounds on tensor rank, to reduce the number of scalar multiplications required in the recursive step. The matrix multiplication exponent \omega is defined as the infimum of all \beta \geq 2 such that there exists an algorithm multiplying two n \times n matrices in O(n^\beta) arithmetic operations. A trivial lower bound of \omega \geq 2 arises because any algorithm must read the 2n^2 input entries and produce n^2 output entries, requiring at least \Omega(n^2) operations. Theoretical advances often involve reductions to problems like Boolean matrix multiplication, where the goal is to compute the OR of logical products, or constructions over finite fields that bound tensor ranks. For instance, subcubic constructions frequently decompose the trilinear form underlying matrix multiplication into sums of fewer rank-1 terms, enabling recursion with a reduced exponent. The Coppersmith-Winograd algorithm, published in 1990, marked a significant milestone, achieving \omega \approx 2.376 through bilinear constructions based on arithmetic progressions, which allow a more efficient partitioning of matrix indices during recursion. This approach generalized earlier ideas by exploiting combinatorial properties of progression-free sets to lower the tensor's border rank. Subsequent refinements focused on analyzing higher powers of the Coppersmith-Winograd tensor to extract tighter bounds. In 2014, François Le Gall introduced a method for evaluating powers of trilinear forms (tensors) using convex optimization, applying it to the Coppersmith-Winograd tensor to obtain \omega < 2.3728639. Building on Strassen's laser method, this technique iteratively prunes suboptimal terms in tensor expansions via convex relaxations, yielding incremental improvements without altering the core construction, and Le Gall's framework has since become a foundation for further exponent reductions. The historical progression of \omega reflects steady theoretical gains: starting from Strassen's 1969 result of \approx 2.807, improvements by Pan (1978, \approx 2.796), Bini et al. (1979, \approx 2.780), and Coppersmith and Winograd (1981, \approx 2.496) paved the way for Coppersmith-Winograd's 2.376 (1990), followed by Le Gall's 2.3728639 (2014), Alman and Williams' 2.3728596 (2020), Duan, Wu, and Zhou's 2.371866 (2022), the bound below 2.371552 of Williams, Xu, Xu, and Zhou (2024), and a further refinement below 2.371339 by Alman, Duan, Williams, Xu, Xu, and Zhou (2024). Despite these advances, subcubic algorithms beyond Strassen remain practically irrelevant due to enormous hidden constants (often exceeding 10^{10}) arising from the intricate combinatorial constructions and deep recursion levels required. For matrix dimensions n < 10^6, optimized implementations of Strassen's algorithm or even the classical O(n^3) method outperform them, as the asymptotic benefits emerge only at impractically large scales, compounded by numerical-stability concerns in floating-point arithmetic.

AlphaTensor and Modern Discoveries

In 2022, DeepMind introduced AlphaTensor, an artificial intelligence system employing deep reinforcement learning to discover novel algorithms for matrix multiplication by decomposing the associated tensor into a sum of rank-1 tensors. The approach models the decomposition process as a single-player game, where the state represents the residual tensor to be factored and actions correspond to selecting rank-1 components that reduce this residual toward zero, guided by a neural network trained via self-play and Monte Carlo tree search (MCTS). This methodology enables AlphaTensor to explore vast search spaces efficiently, yielding provably correct algorithms that outperform prior methods for various matrix dimensions. A key achievement of AlphaTensor is its discovery of a 4×4 matrix multiplication algorithm requiring only 47 scalar multiplications over the finite field \mathbb{Z}_2, surpassing the 49 obtained by applying Strassen's algorithm recursively. For matrices over the reals, AlphaTensor improved the best known multiplication counts for dozens of small matrix sizes, and hardware-tailored variants of its discovered algorithms achieved 10–20% speedups over standard implementations for large multiplications on GPUs and TPUs. The system also handles rectangular shapes and finite-field arithmetic, consistently finding superior decompositions, such as 76 multiplications for a 4×5 by 5×5 product in standard arithmetic (versus 80 previously). These findings integrate with classical techniques, forming hybrid schemes that combine AI-discovered small-tensor decompositions with recursive strategies for broader applicability. Building on AlphaTensor, subsequent developments have leveraged other AI architectures for further refinements. In 2025, DeepMind's AlphaEvolve, a general-purpose coding agent powered by the Gemini large language models combined with evolutionary search, discovered a 4×4 complex-valued matrix multiplication algorithm using just 48 scalar multiplications, improving upon the longstanding bound of 49 inherited from Strassen's construction and demonstrating potential for hardware-tailored optimizations. AlphaEvolve's approach extends the automated search for low-rank tensor decompositions across domains. These advancements highlight AI's role in uncovering practical speedups for small matrices while paving the way for domain-specific variants, such as those optimized for finite fields in cryptographic applications.
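To make the tensor-decomposition framing concrete, the following C sketch verifies that a rank-7 bilinear decomposition (here Strassen's classical one, written as factor matrices U, V, W) reproduces the 2 × 2 matrix product; AlphaTensor searches for exactly this kind of factorization, with as few rank-1 terms as possible, for larger tensors. The variable names and layout are illustrative:
#include <stdio.h>
#include <stdlib.h>

/* Strassen's rank-7 decomposition of the 2x2 matrix multiplication tensor.
 * With A and B flattened as [x11, x12, x21, x22], each product is
 * m_r = (U[r] . a) * (V[r] . b) and the output is c[i] = sum_r W[i][r] * m_r. */
static const int U[7][4] = {
    {1, 0, 0, 1}, {0, 0, 1, 1}, {1, 0, 0, 0}, {0, 0, 0, 1},
    {1, 1, 0, 0}, {-1, 0, 1, 0}, {0, 1, 0, -1}
};
static const int V[7][4] = {
    {1, 0, 0, 1}, {1, 0, 0, 0}, {0, 1, 0, -1}, {-1, 0, 1, 0},
    {0, 0, 0, 1}, {1, 1, 0, 0}, {0, 0, 1, 1}
};
static const int W[4][7] = {
    {1, 0, 0, 1, -1, 0, 1},   /* C11 */
    {0, 0, 1, 0, 1, 0, 0},    /* C12 */
    {0, 1, 0, 1, 0, 0, 0},    /* C21 */
    {1, -1, 1, 0, 0, 1, 0}    /* C22 */
};

int main(void) {
    /* Random 2x2 inputs, flattened row-major. */
    double a[4], b[4];
    for (int i = 0; i < 4; i++) { a[i] = rand() % 10; b[i] = rand() % 10; }

    /* Evaluate the bilinear algorithm: seven multiplications. */
    double m[7];
    for (int r = 0; r < 7; r++) {
        double ua = 0.0, vb = 0.0;
        for (int i = 0; i < 4; i++) { ua += U[r][i] * a[i]; vb += V[r][i] * b[i]; }
        m[r] = ua * vb;
    }
    double c[4] = {0, 0, 0, 0};
    for (int i = 0; i < 4; i++)
        for (int r = 0; r < 7; r++)
            c[i] += W[i][r] * m[r];

    /* Reference: the standard eight-multiplication 2x2 product. */
    double ref[4] = {
        a[0] * b[0] + a[1] * b[2], a[0] * b[1] + a[1] * b[3],
        a[2] * b[0] + a[3] * b[2], a[2] * b[1] + a[3] * b[3]
    };
    for (int i = 0; i < 4; i++)
        printf("c[%d] = %g (reference %g)\n", i, c[i], ref[i]);
    return 0;
}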

Parallel and Distributed Algorithms

Shared-Memory Parallel Algorithms

Shared-memory parallel algorithms for matrix multiplication leverage multi-core processors where all cores access a unified address space, enabling thread-level parallelism without explicit message passing between threads. These approaches typically build on sequential blocked algorithms by distributing computational work across threads using directives such as those in OpenMP, a standard for shared-memory programming. The goal is to achieve near-linear speedup relative to the number of cores while minimizing overhead from thread synchronization and memory-access conflicts. A basic strategy involves parallelizing the outermost loop over the rows of the output C, where each thread computes a block of rows of C by iterating over the columns of A and B. This can be implemented with an OpenMP parallel-for pragma that forks threads at the start of the i-loop in the standard triple-loop formulation of C = AB, with each thread handling a contiguous block of rows. For square matrices of size n × n, this provides balanced work distribution when the number of threads divides n evenly, but load imbalance arises for non-square matrices or when dimensions are not multiples of the thread count, leading to idle threads and reduced efficiency. More advanced techniques apply parallelism to blocked variants of the algorithm, where the loops over blocks are parallelized to improve cache utilization and data locality. In a blocked formulation, the outer loops partition A, B, and C into blocks that fit in cache, and the accumulation over k can be parallelized across threads using a reduction to sum partial results into shared blocks of C. This approach, often nested, allows multiple levels of parallelism: for example, parallelizing the block-level i-loop and then the inner j-loop within each block. Pseudocode for such a parallel blocked multiplication using OpenMP-style directives might appear as follows:
for (i = 0; i < m; i += bs) {
    for (j = 0; j < n; j += bs) {
        // Schematic directive: parallelize the accumulation over k, with each
        // thread's partial block product summed (reduced) into the shared block of C
        #pragma omp parallel for reduction(+: C[i:bs][j:bs])
        for (k = 0; k < p; k += bs) {
            // Compute C[i:bs,j:bs] += A[i:bs,k:bs] * B[k:bs,j:bs]
            // Inner blocked loops here, potentially parallelized further
        }
    }
}
This structure enhances data locality by keeping blocks in shared caches longer, though the reduction introduces synchronization overhead. To address load imbalance from varying computation times per block, caused by differing matrix dimensions or cache effects, dynamic scheduling can be employed in OpenMP, where work chunks are assigned to threads at runtime rather than statically. This adapts to imbalances such as those observed in hypermatrix representations, where fixed partitioning leads to uneven loads on multi-core systems like 8-way SMPs with Intel Itanium processors. With appropriate chunk sizes (e.g., 2 for small matrices), dynamic scheduling achieves speedups of up to 3.7 on 4 cores and near 7.0 on 8 cores for large matrices. Scalability is strong up to p cores when the matrix size n greatly exceeds p, yielding parallel efficiency around 80% on modern multi-core CPUs, as threads fully utilize compute resources without excessive idle time. High-performance libraries implement these principles to deliver near-peak floating-point throughput (FLOPS) on multi-core systems. GotoBLAS, a widely adopted BLAS implementation, uses a hierarchically blocked algorithm with threading on the outer loops to minimize synchronization, achieving high performance by matching block sizes to the cache hierarchy. Its successor, BLIS, extends this by exposing finer-grained loops around a customizable micro-kernel, enabling flexible multithreading across five loops; for example, parallelizing multiple loop levels lets it approach the 1056 GFLOPS peak of a 60-core Intel Xeon Phi and the 204.8 GFLOPS peak of a PowerPC A2 (Blue Gene/Q) node, with near-perfect scaling to 64 threads. These libraries routinely attain 80-90% of theoretical peak performance for large matrices on multi-core platforms. Despite these advances, limitations persist for very large matrices, where contention on shared last-level caches degrades performance as multiple threads compete for cache capacity and memory bandwidth, increasing effective memory latency without any network involvement. This effect is pronounced on architectures with small per-core caches, such as many-core systems, where data reuse drops and overhead rises.
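The row-parallel strategy described at the start of this subsection can be written as a short, runnable OpenMP program; the sketch below parallelizes the i-loop of an ikj-ordered multiplication and is an illustrative baseline rather than a tuned library kernel (block sizes, scheduling, and nesting would all be adjusted in practice):
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

/* Row-parallel matrix multiplication: C = A * B, n x n, flat row-major.
 * Each thread owns a contiguous set of rows of C, so no reduction or
 * locking is needed; the ikj inner ordering keeps B and C accesses
 * contiguous for cache friendliness. */
static void matmul_omp(int n, const double *A, const double *B, double *C) {
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) C[i * n + j] = 0.0;
        for (int k = 0; k < n; k++) {
            double a = A[i * n + k];
            for (int j = 0; j < n; j++) {
                C[i * n + j] += a * B[k * n + j];
            }
        }
    }
}

int main(void) {
    int n = 512;
    double *A = malloc((size_t)n * n * sizeof *A);
    double *B = malloc((size_t)n * n * sizeof *B);
    double *C = malloc((size_t)n * n * sizeof *C);
    for (int i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 2.0; }

    double t0 = omp_get_wtime();
    matmul_omp(n, A, B, C);
    double t1 = omp_get_wtime();

    /* With all-ones times all-twos, every entry of C should equal 2n. */
    printf("C[0] = %.1f (expected %d), time = %.3f s on %d threads\n",
           C[0], 2 * n, t1 - t0, omp_get_max_threads());
    free(A); free(B); free(C);
    return 0;
}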

Distributed and Communication-Avoiding Algorithms

Distributed-memory systems require matrix multiplication algorithms that partition data across multiple processors while minimizing inter-processor communication volume, as communication costs often dominate overall runtime due to latency and bandwidth constraints. Cannon's algorithm, introduced in 1969, is one of the earliest distributed algorithms for matrix multiplication on a two-dimensional processor grid. It assumes p = q² processors arranged in a q × q torus, with the input matrices A and B initially distributed in block form such that each processor holds an (n/q) × (n/q) submatrix. The algorithm starts with a skewing phase that aligns the blocks of A and B for multiplication, then performs q rounds in which each processor multiplies its current pair of submatrices, accumulates into its block of C, and cyclically shifts its A block along the processor row and its B block along the processor column. Each processor communicates O(n² / √p) words in total, achieving optimal bandwidth for the 2D model but incurring higher latency from the sequential nature of the shifts. The SUMMA (Scalable Universal Matrix Multiplication Algorithm), developed in the mid-1990s, improves upon Cannon's approach by replacing shifts with broadcasts, gaining generality and simplicity while preserving bandwidth optimality. Matrices A and B are distributed in a block-cyclic manner across p = q² processors, ensuring load balance. The algorithm proceeds in outer steps over panels of the inner dimension (√p steps when the panel width is n/√p); in each step, a panel of columns of A is broadcast along each processor row and the corresponding panel of rows of B is broadcast along each processor column, after which every processor performs a local GEMM (general matrix multiply) on the received panels and accumulates into its block of C. The total communication volume per processor is O(n² / √p), matching the lower bound. These 2D algorithms achieve communication optimality, as established by a lower bound of Ω(n² / √p) words per processor for any distributed matrix multiplication using O(n² / p) local memory, proven via the Loomis-Whitney inequality applied to the geometric constraints of the computation. For larger processor counts or hierarchical systems, 3D variants extend the distribution to a logical q × q × q grid of p = q³ processors, conceptually slicing the computation into q layers along the third dimension. Local 2D multiplications occur within each layer using a SUMMA-like approach, followed by reductions across layers to accumulate results. This formulation reduces communication to O(n² / p^{2/3}) words per processor, matching the corresponding lower bound for the replicated-memory regime and proving advantageous in deep memory hierarchies where additional replication can further optimize bandwidth. Modern communication-avoiding techniques build on these foundations by incorporating randomized sampling to approximate intermediate computations or to reduce constant factors in communication volume, as demonstrated in implementations achieving practical speedups on large-scale systems (Drake et al., 2013). More recent libraries such as COSMA (as of 2023) provide communication-optimal distributed matrix multiplication for arbitrary process grids, including GPU support, enhancing scalability on large HPC systems.
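A simplified SUMMA-style exchange can be sketched with MPI as follows; this is a minimal illustration assuming a perfect-square number of ranks and one block per rank, not a production implementation such as those found in ScaLAPACK or COSMA:
/* Simplified SUMMA sketch: p = q*q MPI ranks on a q x q grid, with each rank
 * owning one nb x nb block of A, B, and C (nb = n/q).  At step k, the owner of
 * A(i,k) broadcasts along its process row and the owner of B(k,j) along its
 * process column; every rank then accumulates a local block product.
 * Compile with an MPI C compiler (e.g. mpicc) and run with a square rank count. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

static void local_gemm(int nb, const double *A, const double *B, double *C) {
    for (int i = 0; i < nb; i++)
        for (int k = 0; k < nb; k++)
            for (int j = 0; j < nb; j++)
                C[i * nb + j] += A[i * nb + k] * B[k * nb + j];
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int q = (int)(sqrt((double)p) + 0.5);   /* assume p is a perfect square */
    int myrow = rank / q, mycol = rank % q;
    int nb = 256;                           /* local block size (n = q * nb) */

    /* Row and column communicators for the broadcasts. */
    MPI_Comm row_comm, col_comm;
    MPI_Comm_split(MPI_COMM_WORLD, myrow, mycol, &row_comm);
    MPI_Comm_split(MPI_COMM_WORLD, mycol, myrow, &col_comm);

    double *A = calloc((size_t)nb * nb, sizeof *A);   /* local block of A */
    double *B = calloc((size_t)nb * nb, sizeof *B);   /* local block of B */
    double *C = calloc((size_t)nb * nb, sizeof *C);   /* local block of C */
    double *Abuf = malloc((size_t)nb * nb * sizeof *Abuf);
    double *Bbuf = malloc((size_t)nb * nb * sizeof *Bbuf);
    for (int i = 0; i < nb * nb; i++) { A[i] = 1.0; B[i] = 1.0; }

    for (int k = 0; k < q; k++) {
        /* Owner of A(myrow, k) broadcasts its block along the process row. */
        if (mycol == k) memcpy(Abuf, A, (size_t)nb * nb * sizeof *Abuf);
        MPI_Bcast(Abuf, nb * nb, MPI_DOUBLE, k, row_comm);

        /* Owner of B(k, mycol) broadcasts its block along the process column. */
        if (myrow == k) memcpy(Bbuf, B, (size_t)nb * nb * sizeof *Bbuf);
        MPI_Bcast(Bbuf, nb * nb, MPI_DOUBLE, k, col_comm);

        local_gemm(nb, Abuf, Bbuf, C);   /* C_blk += A(myrow,k) * B(k,mycol) */
    }

    /* Each entry of C now equals n = q * nb for the all-ones inputs. */
    free(A); free(B); free(C); free(Abuf); free(Bbuf);
    MPI_Comm_free(&row_comm); MPI_Comm_free(&col_comm);
    MPI_Finalize();
    return 0;
}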

Algorithms for Specific Topologies

Algorithms for matrix multiplication on specific network topologies focus on optimizing communication patterns to match the interconnect structure, such as meshes or hypercubes, where routing delays play a critical role in performance. These approaches adapt general distributed methods to fixed topologies, accounting for path lengths and broadcast efficiencies absent in more flexible models. In two-dimensional mesh topologies, processors are arranged in a \sqrt{p} \times \sqrt{p} grid, enabling block-decomposed multiplication with localized data movement. Fox's algorithm, introduced in 1987, distributes submatrices across the grid and performs iterative row broadcasts of A subblocks horizontally and cyclic shifts of B subblocks vertically, ensuring each processor computes its contribution through local multiplications and accumulation. The broadcasts propagate along grid paths, incurring delays proportional to the network diameter of O(\sqrt{p}), leading to an overall cost of O(n^3 / p + n^2 \sqrt{p}) dominated by communication for large p. This method balances load while minimizing redundant transfers, though it requires synchronization at each iteration. Hypercube topologies, with p = 2^d processors in d dimensions, support variants of Cannon's algorithm that leverage the structure's logarithmic diameter for efficient data shifts. In these adaptations, recursive broadcasts along hypercube dimensions facilitate the circular shifts of submatrices, distributing rows and columns across subcubes in O(d) steps per multiplication phase. This reduces the per-step communication distance from the mesh-like O(\sqrt{p}) to O(\log p), yielding a total cost of O(n^3 / p + n^2 d), where the additive term captures the aggregated shift costs over the iterations. Such implementations embed the logical grid into the hypercube, preserving nearest-neighbor semantics while exploiting shortcut links for up to thousands of processors. Toroidal meshes incorporate wraparound links in both dimensions, roughly halving the diameter relative to non-toroidal meshes and reducing broadcast latencies for repeated data motions. This topology proved advantageous in early large-scale SIMD systems like the CM-2, where algorithms combined local computations with cyclic shifts to achieve near-optimal throughput for dense matrix products on 16K to 64K processors. These algorithms emerged in the 1980s and 1990s amid the rise of massively parallel processors (MPPs), such as hypercube-based systems and mesh-connected SIMD arrays, to address bandwidth-limited interconnects in early supercomputers. Their principles retain relevance today in hardware with regular topologies, including block and thread grids on GPUs and systolic arrays in FPGAs, where mesh-like routing optimizes tiled multiplications. A key trade-off in topology-specific designs is elevated latency from diameter-dependent routing (e.g., O(\sqrt{p}) hops in meshes versus O(\log p) in hypercubes) compared to networks with hardware support for collective operations; nonetheless, they enable cost-effective scaling on commodity regular interconnects without custom hardware.

References

  1. [1]
    Matrix Algebra - Computer Science
    What is the computational complexity of matrix multiplication? For two n×n matrices, consider the definition that uses inner product:.
  2. [2]
    Explained: Matrices | MIT News
    Dec 6, 2013 · Many techniques for filtering or compressing digital audio signals, such as the Fourier transform, rely on matrix multiplication. Another reason ...
  3. [3]
    [PDF] 1 Motivation for Improving Matrix Multiplication - People @EECS
    Matrix multiplication is important enough that computer vendors often supply versions optimized for their machines, as part of a library It may also be.
  4. [4]
    Fast Matrix Multiplication: Theory of Computing: An Open Access ...
    Dec 24, 2013 · We give an overview of the history of fast algorithms for matrix multiplication. Along the way, we look at some other fundamental problems in algebraic ...
  5. [5]
    Gaussian elimination is not optimal
    Strassen, V. (1969).
  6. [6]
    Discovering faster matrix multiplication algorithms with ... - Nature
    Oct 5, 2022 · Here we report a deep reinforcement learning approach based on AlphaZero for discovering efficient and provably correct algorithms for the multiplication of ...
  7. [7]
    Jacques Binet (1786 - 1856) - Biography - MacTutor
    ... Jacques Philippe Marie Binet (the subject of this biography, 1786-1856) ... He discovered the rule for multiplying matrices in 1812 and it is almost ...
  8. [8]
    [PDF] A Brief History of Linear Algebra and Matrix Theory
    Matrix algebra was nurtured by the work of Arthur Cayley in 1855. Cayley studied compositions of linear transformations and was led to define matrix ...
  9. [9]
    [PDF] Strassen's algorithm Matrix multiplication
    Here is a pseudocode version of the algorithm. (I ... Strassen's algorithm:Matrix multiplication. The standard method of matrix multiplication of two n x n.
  10. [10]
    [PDF] Divide-and-conquer algorithms - People @EECS
    The preceding formula implies an O(n3) algorithm for matrix multiplication: there are n2 entries to be computed, and each takes O(n) time.
  11. [11]
    [PDF] The Cache Performance and Optimization of Blocked Algorithms
    To validate our model, we compare the predicted cache misses to actual cache misses for a blocked matrix multiplication across different cache sizes and ...
  12. [12]
    [PDF] Anatomy of High-Performance Matrix Multiplication
    Goto, K. 2005. www.tacc.utexas.edu/resources/software/. Goto, K. and Gunnels, J. A. Anatomy of high-performance matrix multiplication: Low-level details. in ...
  13. [13]
    Gaussian elimination is not optimal | Numerische Mathematik
    Strassen, V. Gaussian elimination is not optimal. Numerische Mathematik 13, 354–356 (1969).
  14. [14]
    [PDF] Using Strassen's Algorithm to Accelerate the Solution of Linear ...
    The fact that matrix multiplication can be performed with fewer than 2n³ arithmetic opera- tions has been known since 1969, when V. Strassen [1969] published an ...
  15. [15]
    [PDF] Making Strassen Matrix Multiplication Safe - NUS Computing
    Because Strassen's algorithm performs fewer than n3 multiplications, it cannot satisfy component- wise numerical stability.
  16. [16]
    [PDF] A Framework for Practical Parallel Fast Matrix Multiplication - OSTI
    Sep 5, 2014 · Our new fast algorithms also outperform Strassen's algorithm on the multiplication of non-square matrices. • We demonstrate that, in order to ...
  17. [17]
    On Minimizing the Number of Multiplications Necessary for Matrix ...
    In this paper we give a new algorithm for matrix multiplication which for n large uses n² + o(n²) multiplications to multiply n × p matrices by p × n ...
  18. [18]
    [PDF] An overview of the recent progress on matrix multiplication
    A trivial lower bound on ω is 2, and the best known upper bound until recently was ω < 2.376 achieved by Coppersmith and Winograd in 1987. There were two ...
  19. [19]
    [PDF] Fast matrix multiplication is stable - CS@Cornell
    Dec 6, 2006 · Therefore, the bit-complexity of computing the answer with error proportional to ǫ0 will be at most a polylog(n) factor larger than the bound O( ...
  20. [20]
    [PDF] Limits on the Universal Method for Matrix Multiplication
    Sep 3, 2021 · Progress on this problems is measured by giving bounds on ω, the exponent of matrix multiplication, ... lower bound better than 2 when q = 2.
  21. [21]
    [1401.7714] Powers of Tensors and Fast Matrix Multiplication - arXiv
    Jan 30, 2014 · This paper presents a method to analyze the powers of a given trilinear form (a special kind of algebraic constructions also called a tensor)
  22. [22]
    [PDF] Powers of Tensors and Fast Matrix Multiplication - Simons Institute
    Nov 12, 2014 · Le Gall (2014): ω < 2.3728639, from analysis of powers of the Coppersmith–Winograd tensor (Strassen 1986: ω < 2.48; Coppersmith–Winograd 1987: ω < 2.376).
  23. [23]
    [PDF] Is Fast Matrix Multiplication of Practical Use?
    In addition to a lower exponent, the method has a reasonable constant: Strassen showed that his method requires at most 4.7nlog27 arithmetic operations. with ...
  24. [24]
    How fast can we *really* multiply matrices? - MathOverflow
    Jul 6, 2012 · There are currently no practical implementations of any fast matrix multiplication algorithms besides Strassen's. The Coppersmith/Winograd ...
  25. [25]
    Discovering novel algorithms with AlphaTensor - Google DeepMind
    Oct 5, 2022 · We introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks ...
  26. [26]
    A Gemini-powered coding agent for designing advanced algorithms
    May 14, 2025 · AlphaEvolve is accelerating AI performance and research velocity. By finding smarter ways to divide a large matrix multiplication operation into ...
  27. [27]
    [PDF] OpenMP - shared memory parallelism
    Case study: sparse matrix multiplication. 49. The tricky ... Let us now concentrate on the loops for the off-diagonals and make it parallel using OpenMP.
  28. [28]
    [PDF] A Study on Load Imbalance in Parallel Hypermatrix Multiplication ...
    In this paper we present our work on the the parallelization of a matrix multiplication code based on the hypermatrix data structure. We have used OpenMP for ...
  29. [29]
    [PDF] Parallel Algorithms 1 Matrix Multiplication - UMD Computer Science
    This approach improves cache utilization and reduces cache misses by ensuring that data used in computations is more likely to be held in the cache. The lecture ...
  30. [30]
    [PDF] Lecture 10: Introduction to OpenMP (Part 1)
    OpenMP parallel Region Directive. #pragma omp parallel [clause list]. Typical ... // static scheduling of matrix multiplication loops. #pragma omp ...
  31. [31]
    Modern multicore and manycore architectures - ScienceDirect.com
    8 presents scalability results plotted as parallel speedup (performance on p ... 80% efficiency on the multicore CPUs and 90% on the coprocessor. Flux ...
  32. [32]
    [PDF] Anatomy of High-Performance Matrix Multiplication
    Implementing matrix multiplication so that near-optimal performance is attained requires a thorough understanding of how the operation must be layered at the ...
  33. [33]
    [PDF] Anatomy of High-Performance Many-Threaded Matrix Multiplication
    BLIS extends GotoBLAS by adding two loops within the inner kernel, enabling finer parallelism and multithreading for matrix multiplication.
  34. [34]
    [PDF] SUMMA: Scalable Universal Matrix Multiplication Algorithm
    In this paper, we give a straight forward, highly efficient, scalable implementation of common matrix multipli- cation operations. The algorithms are much ...
  35. [35]
    [PDF] Communication Lower Bounds for Distributed-Memory Matrix ...
    We present lower bounds on the amount of communication that matrix-multiplication algorithms must perform on a distributed-memory parallel computer.
  36. [36]
    [PDF] Parallel Matrix Multiplication: 2D and 3D - UT Computer Science
    Jun 11, 2012 · This paper extends SUMMA to 3D process grids for parallel matrix multiplication, using element-wise distributions and allgather-based ...
  37. [37]
    [PDF] Avoiding Communication in Dense Linear Algebra - Berkeley EECS
    Aug 16, 2013 · In many cases, the most commonly used algorithms are not communication optimal, and we can develop new algo- rithms that require less data ...
  38. [38]
    [PDF] Scalability of Parallel Algorithms for Matrix Multiplication
    We analyze the performance of various parallel formulations of the matrix multiplication algorithm for different matrix sizes and number of processors, and ...
  39. [39]
    Matrix algorithms on a hypercube I: Matrix multiplication
    February 1987, Pages 17 ... We discuss algorithms for matrix multiplication on a concurrent processor containing a two-dimensional mesh or richer topology.
  41. [41]
    [PDF] Understanding the Efficiency of GPU Algorithms for Matrix-Matrix ...
    We benchmarked our GPU algorithms and the CPU based matrix-matrix multiplication routine (sgemm) provided by ATLAS. Experiments were performed on a 3 GHz ...