
Matrix multiplication

Matrix multiplication is a binary operation in linear algebra that combines two matrices of compatible dimensions to yield a third matrix, where each entry in the resulting matrix is computed as the sum of the products of elements from a row of the first matrix and a column of the second. Specifically, for an m × n matrix A and an n × p matrix B, their product C = AB is an m × p matrix with entries c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} for i = 1 to m and j = 1 to p. This operation generalizes the dot product of vectors and corresponds to the composition of linear transformations represented by the matrices. Unlike scalar multiplication, matrix multiplication is not commutative—AB generally differs from BA—but it is associative, meaning (AB)C = A(BC), and distributive over matrix addition, so A(B + C) = AB + AC and (A + B)C = AC + BC. These properties make matrix multiplication a cornerstone of abstract algebra, forming a semigroup under the operation while enabling efficient computations in vector spaces.

The standard algorithm requires O(n^3) arithmetic operations for n × n matrices, though faster methods like Strassen's algorithm reduce this to O(n^{2.807}) by recursively dividing matrices into blocks and exploiting algebraic identities to minimize multiplications. The concept emerged in the mid-19th century, with Arthur Cayley formalizing matrix algebra, including multiplication, in his 1858 paper, building on earlier work by mathematicians like James Joseph Sylvester on linear transformations.

Since then, matrix multiplication has become indispensable across disciplines: in physics and engineering for modeling systems of differential equations; in computer graphics for transformations like rotations and scaling; and in economics for input-output analysis. In computer science, it underpins numerical simulations, optimization algorithms, and parallel computing frameworks. Particularly in machine learning, matrix multiplication drives core operations in neural networks, such as forward propagation where weight matrices are multiplied by input vectors or feature matrices, enabling scalable training of deep learning models on vast datasets. Advances in efficient implementations, including hardware-optimized libraries like BLAS, have accelerated these applications, making matrix multiplication a bottleneck and focal point for performance improvements in modern computing.

Notation and Definitions

Notation

In linear algebra, a matrix is typically denoted by an uppercase letter, such as A, and represented as A = (a_{ij}), where a_{ij} denotes the entry in the i-th row and j-th column. An m \times n matrix A thus consists of m rows and n columns, forming a rectangular array of scalars. The product of two matrices A and B, denoted C = AB, is defined when A is an m \times n matrix and B is an n \times p matrix, resulting in an m \times p matrix C = (c_{ij}). Each entry is given by the formula c_{ij} = \sum_{k=1}^n a_{ik} b_{kj}, which represents the dot product of the i-th row of A and the j-th column of B. Scalar multiplication of a matrix A by a scalar k produces kA = (k a_{ij}), where every entry is scaled by k. Vectors are treated as special cases of matrices: a column vector is an n \times 1 matrix, while a row vector is a 1 \times n matrix.

Matrix-Matrix Product

The product of two matrices A and B is defined only when the number of columns of A equals the number of rows of B, ensuring dimension compatibility. Let A be an m \times n matrix with entries a_{ik} and B an n \times p matrix with entries b_{kj}. Their product C = AB is then an m \times p matrix whose entries c_{ij} are given by the formula c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}. This entry-wise rule computes each c_{ij} as the dot product of the i-th row of A and the j-th column of B. The formula arises naturally from the representation of linear transformations. A matrix A defines a linear map from \mathbb{R}^n to \mathbb{R}^m, sending the standard basis vector e_j to the j-th column of A. Similarly, B maps from \mathbb{R}^p to \mathbb{R}^n. The composition A \circ B maps from \mathbb{R}^p to \mathbb{R}^m, and its matrix is AB, where the j-th column of AB is A applied to the j-th column of B. Thus, the (i,j)-th entry c_{ij} is the i-th component of this image, yielding the summation formula above. This perspective underscores why the inner dimensions must match for the maps to compose properly. To illustrate, consider the 2 \times 2 matrices A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}. The product C = AB has entries computed as follows: c_{11} = 1 \cdot 5 + 2 \cdot 7 = 19, c_{12} = 1 \cdot 6 + 2 \cdot 8 = 22, c_{21} = 3 \cdot 5 + 4 \cdot 7 = 43, and c_{22} = 3 \cdot 6 + 4 \cdot 8 = 50. Thus, C = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}. Each entry results from a row-column dot product, demonstrating the general rule for compatible square matrices. The explicit rule for matrix multiplication was first described in print by Arthur Cayley in his 1858 memoir on matrices, building on earlier work by Binet and Cauchy in 1812 related to determinants and linear maps.
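
The entry-wise rule above translates directly into code. The following sketch, in Python with NumPy (an illustrative choice, not part of the original text; the function name matmul_from_definition is made up), computes each c_{ij} as a row-column dot product and checks the result against NumPy's built-in product on the 2 × 2 example from this section.

```python
import numpy as np

def matmul_from_definition(A, B):
    """Compute C = AB from the definition c_ij = sum_k a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must match"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(matmul_from_definition(A, B))   # [[19. 22.] [43. 50.]]
print(A @ B)                          # NumPy's built-in product gives the same values
```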

Matrix-Vector Products

In linear algebra, the multiplication of an m \times n matrix A by an n \times 1 column vector \mathbf{x} yields an m \times 1 column vector A\mathbf{x}, which represents the linear combination of the columns of A weighted by the corresponding entries of \mathbf{x}. If A has columns \mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_n, then A\mathbf{x} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \dots + x_n \mathbf{a}_n. This operation aligns with the general rule for matrix products, where the inner dimensions must match for compatibility. The explicit formula for the entries of the product is (A\mathbf{x})_i = \sum_{j=1}^n a_{ij} x_j for each row index i = 1, 2, \dots, m. This summation computes each component of the resulting vector as a weighted sum, emphasizing the role of matrix-vector products in solving linear systems and representing transformations. Vectors are treated as special matrices in this context, with column vectors as n \times 1 matrices and row vectors as 1 \times n matrices. For the row vector case, the product of a 1 \times n row vector \mathbf{y} and an n \times p matrix A produces a 1 \times p row vector \mathbf{y}A, which is the linear combination of the rows of A weighted by the entries of \mathbf{y}. If A has rows \mathbf{a}^T_1, \mathbf{a}^T_2, \dots, \mathbf{a}^T_n, then \mathbf{y}A = y_1 \mathbf{a}^T_1 + y_2 \mathbf{a}^T_2 + \dots + y_n \mathbf{a}^T_n. This form is particularly useful in contexts like Markov chains and statistical modeling.
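
To make the column- and row-combination viewpoints concrete, here is a small NumPy example (the specific matrix and vectors are arbitrary illustrative choices): it forms A\mathbf{x} as a weighted sum of the columns of A and \mathbf{y}A as a weighted sum of its rows, and compares both with the @ operator.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # 2x3 matrix
x = np.array([1.0, 0.5, -1.0])       # treated as a 3x1 column vector

# Matrix-vector product as a weighted sum of the columns of A.
Ax_columns = sum(x[j] * A[:, j] for j in range(A.shape[1]))
print(Ax_columns)        # same as A @ x
print(A @ x)

y = np.array([2.0, -1.0])            # treated as a 1x2 row vector
# Row-vector-times-matrix product as a weighted sum of the rows of A.
yA_rows = sum(y[i] * A[i, :] for i in range(A.shape[0]))
print(yA_rows)           # same as y @ A
print(y @ A)
```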

Vector-Vector Product

The vector-vector product, commonly referred to as the dot product, arises as a special case of matrix multiplication when one vector is interpreted as a row vector (a 1×n matrix) and the other as a column vector (an n×1 matrix). The dimensions are compatible for multiplication, yielding a 1×1 matrix, or scalar, as the output. Formally, for vectors \mathbf{u} = (u_1, u_2, \dots, u_n) and \mathbf{v} = (v_1, v_2, \dots, v_n) in \mathbb{R}^n, the dot product is defined as \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i, which corresponds to the single entry in the resulting matrix product \mathbf{u}^T \mathbf{v}. This operation is bilinear, meaning it is linear in each argument separately, and produces a scalar that quantifies the alignment between the vectors. A key property of the dot product is its symmetry: \mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}, which follows directly from the commutative property of multiplication in the summation and holds because the output is a scalar. For example, consider \mathbf{u} = \begin{pmatrix} 1 & 2 \end{pmatrix} as a row vector and \mathbf{v} = \begin{pmatrix} 3 \\ 4 \end{pmatrix} as a column vector; their product is 1 \cdot 3 + 2 \cdot 4 = 11. In the context of vector spaces, the dot product defines the standard inner product on \mathbb{R}^n, enabling concepts such as orthogonality and norms within Euclidean spaces.
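
A brief NumPy sketch (illustrative only) showing that a row vector times a column vector yields a 1 × 1 matrix whose single entry is the ordinary dot product:

```python
import numpy as np

u = np.array([[1, 2]])           # 1x2 row vector
v = np.array([[3], [4]])         # 2x1 column vector

print(u @ v)                     # [[11]] -- a 1x1 matrix
print(np.dot([1, 2], [3, 4]))    # 11    -- the same value as a plain scalar
```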

Illustrations

Geometric Illustration

Matrix multiplication can be geometrically interpreted as applying a linear transformation to vectors in Euclidean space, where the columns of the matrix specify the images of the standard basis vectors under the transformation. In two dimensions, this visualization aids in understanding how matrices distort, rotate, or scale the plane while preserving the origin and linear structure. A classic example is the rotation matrix, which rotates vectors counterclockwise by an angle \theta around the origin: \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} For \theta = 90^\circ, the matrix simplifies to \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. Applied to the unit basis vectors, the standard \mathbf{e}_1 = (1, 0) maps to (0, 1), and \mathbf{e}_2 = (0, 1) maps to (-1, 0). Geometrically, this transformation rotates the entire plane by 90 degrees: before multiplication, the basis vectors align with the positive x- and y-axes; after, the x-axis vector points along the positive y-axis, and the y-axis vector points along the negative x-axis, forming a right angle at the origin with the plane rotated accordingly. Shearing and scaling provide further illustrations of non-rigid transformations. Consider the shear matrix \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, which shears the plane parallel to the x-axis. Applied to the basis vectors, \mathbf{e}_1 = (1, 0) remains (1, 0), while \mathbf{e}_2 = (0, 1) maps to (1, 1). Visually, before the transformation, the basis forms perpendicular axes; after, the x-axis stays fixed, but the y-axis tilts to the line from the origin to (1, 1), distorting a unit square into a parallelogram slanted to the right. For scaling, a diagonal matrix like \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} stretches the x-direction by a factor of 2 while leaving the y-direction unchanged: \mathbf{e}_1 becomes (2, 0), and \mathbf{e}_2 remains (0, 1), elongating the unit square into a rectangle twice as wide. These transformations, realized through matrix-vector multiplication, inherently preserve the origin since multiplying any matrix by the zero vector yields the zero vector, and they maintain linearity by preserving vector addition and scalar multiplication, ensuring straight lines map to straight lines and the origin remains fixed.
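
The transformations described above can also be explored numerically. The sketch below (Python with NumPy, used here purely for illustration) applies the rotation, shear, and scaling matrices to the corners of the unit square, stored as the columns of a 2 × 4 matrix, so each matrix-matrix product transforms all four corners at once.

```python
import numpy as np

theta = np.pi / 2                       # 90-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])          # shear parallel to the x-axis
scale = np.array([[2.0, 0.0],
                  [0.0, 1.0]])          # stretch the x-direction by 2

# Corners of the unit square as columns of a 2x4 matrix.
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])

for name, M in [("rotation", R), ("shear", shear), ("scale", scale)]:
    print(name)
    print(np.round(M @ square, 6))      # images of the four corners
```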

Numerical Examples

To illustrate matrix multiplication, consider the product of a 2 \times 3 matrix A and a 3 \times 2 matrix B, which yields a 2 \times 2 matrix C = AB. Let A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad B = \begin{bmatrix} 7 & 8 \\ 9 & 10 \\ 11 & 12 \end{bmatrix}. Each entry c_{ij} of C is the dot product of the i-th row of A and the j-th column of B. For c_{11}, c_{11} = 1 \cdot 7 + 2 \cdot 9 + 3 \cdot 11 = 7 + 18 + 33 = 58. Similarly, c_{12} = 1 \cdot 8 + 2 \cdot 10 + 3 \cdot 12 = 8 + 20 + 36 = 64, c_{21} = 4 \cdot 7 + 5 \cdot 9 + 6 \cdot 11 = 28 + 45 + 66 = 139, c_{22} = 4 \cdot 8 + 5 \cdot 10 + 6 \cdot 12 = 32 + 50 + 72 = 154. Thus, C = \begin{bmatrix} 58 & 64 \\ 139 & 154 \end{bmatrix}. A matrix product can also reduce to a scalar, recovering the standard dot product. Consider a 1 \times 3 row vector (treated as a matrix) multiplied by a 3 \times 1 column vector: \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = 1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 4 + 10 + 18 = 32, resulting in the 1 \times 1 matrix [32]. Matrix multiplication requires compatible dimensions: the number of columns in the first matrix must equal the number of rows in the second. For example, attempting to multiply a 2 \times 3 matrix by a 2 \times 4 matrix fails because the inner dimensions (3 and 2) do not match, making the operation undefined. At the other extreme, the 1 \times 1 case reduces to scalar multiplication: [a][b] = [ab], where the single entry is the product of the scalars.
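
The same computations can be reproduced with a few lines of NumPy (shown only as a cross-check of the worked example; the arrays mirror the matrices above), including the failure for incompatible inner dimensions.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])              # 2x3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])               # 3x2

print(A @ B)                           # [[ 58  64] [139 154]]

row = np.array([[1, 2, 3]])            # 1x3
col = np.array([[4], [5], [6]])        # 3x1
print(row @ col)                       # [[32]]

# Incompatible inner dimensions (3 vs 2) raise an error.
try:
    np.ones((2, 3)) @ np.ones((2, 4))
except ValueError as e:
    print("undefined:", e)
```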

Properties

Distributivity and Scalar Multiplication

Matrix multiplication distributes over matrix addition in both directions. Specifically, for matrices A, B, and C of compatible dimensions, A(B + C) = AB + AC and (A + B)C = AC + BC. This property mirrors the distributivity in scalar arithmetic and follows directly from the definition of matrix multiplication as a sum of entry-wise products. To verify the left distributivity A(B + C) = AB + AC, consider the (i,j)-th entry of each side, assuming A is m \times n and B, C are n \times r. The left side entry is [A(B + C)]_{ij} = \sum_{p=1}^n A_{ip} (B_{pj} + C_{pj}) = \sum_{p=1}^n (A_{ip} B_{pj} + A_{ip} C_{pj}), using the distributivity of scalar multiplication over addition. The right side entry is [AB + AC]_{ij} = [AB]_{ij} + [AC]_{ij} = \sum_{p=1}^n A_{ip} B_{pj} + \sum_{p=1}^n A_{ip} C_{pj}, which matches the left side. The right distributivity (A + B)C = AC + BC follows analogously by expanding entries and applying scalar distributivity. Matrix multiplication also interacts compatibly with scalar multiplication. For a scalar k and compatible matrices A and B, (kA)B = k(AB) = A(kB). This holds because scalar multiplication scales each entry of a matrix uniformly. For the entry-wise verification of (kA)B = k(AB), again take A to be m \times n and B to be n \times r. The left side entry is [(kA)B]_{ij} = \sum_{p=1}^n (k A_{ip}) B_{pj} = k \sum_{p=1}^n A_{ip} B_{pj} = k [AB]_{ij}, using scalar distributivity over summation. The equality k(AB) = A(kB) follows similarly by scaling the entries of B. For example, let A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}, \quad C = \begin{pmatrix} 9 & 10 \\ 11 & 12 \end{pmatrix}. Then B + C = \begin{pmatrix} 14 & 16 \\ 18 & 20 \end{pmatrix}, and AB = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}, \quad AC = \begin{pmatrix} 31 & 34 \\ 71 & 78 \end{pmatrix}, \quad AB + AC = \begin{pmatrix} 50 & 56 \\ 114 & 128 \end{pmatrix}. Direct computation yields A(B + C) = \begin{pmatrix} 50 & 56 \\ 114 & 128 \end{pmatrix}, confirming distributivity. These properties hold when the entries are from a field, such as the real or complex numbers, where addition and multiplication satisfy the necessary axioms.
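
A quick numerical spot-check of these identities, using NumPy and the example matrices above (the scalar k = 3 is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[9, 10], [11, 12]])
k = 3

print(np.array_equal(A @ (B + C), A @ B + A @ C))   # True: A(B + C) = AB + AC
print(np.array_equal((A + B) @ C, A @ C + B @ C))   # True: (A + B)C = AC + BC
print(np.array_equal((k * A) @ B, k * (A @ B)))     # True: (kA)B = k(AB)
print(np.array_equal(k * (A @ B), A @ (k * B)))     # True: k(AB) = A(kB)
```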

Non-commutativity

Unlike scalar multiplication, matrix multiplication is not commutative in general: for two compatible square matrices A and B, the product AB typically differs from BA. This non-commutativity is illustrated by the following 2×2 matrices: A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}. The product AB is AB = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, while BA yields BA = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}. Since AB \neq BA, these matrices provide a simple counterexample to commutativity. Exceptions occur when one matrix is the identity matrix or a scalar multiple of the identity, both of which commute with every compatible matrix, or when the two matrices are simultaneously diagonal with respect to the same basis. The order-dependence of matrix multiplication has significant implications in physics, particularly in quantum mechanics, where Werner Heisenberg's 1925 introduction of non-commuting matrix representations for observables formed the basis of matrix mechanics and highlighted the fundamental role of non-commutativity in quantum theory.
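
The counterexample can be confirmed in a couple of lines of NumPy (illustrative only):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])

print(A @ B)                          # [[2 1] [1 1]]
print(B @ A)                          # [[1 1] [1 2]]
print(np.array_equal(A @ B, B @ A))   # False: multiplication is not commutative
```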

Associativity

Matrix multiplication is associative, meaning that for matrices A, B, and C of compatible dimensions, (AB)C = A(BC). This property holds because the entry-wise definition of the product aligns summations in a way that is independent of grouping. To prove this, consider the (i,j)-th entry of (AB)C. It is given by [(AB)C]_{ij} = \sum_{k=1}^p (AB)_{ik} C_{kj} = \sum_{k=1}^p \left( \sum_{m=1}^n A_{im} B_{mk} \right) C_{kj}, where A is r \times n, B is n \times p, and C is p \times s. By the associativity and commutativity of summation over real (or complex) numbers, this equals \sum_{m=1}^n A_{im} \left( \sum_{k=1}^p B_{mk} C_{kj} \right) = [A(BC)]_{ij}, the (i,j)-th entry of A(BC). Thus, the matrices are equal. For an explicit numerical example, let A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}, \quad C = \begin{pmatrix} 9 & 10 \\ 11 & 12 \end{pmatrix}. First, compute AB = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}, then (AB)C = \begin{pmatrix} 413 & 454 \\ 937 & 1030 \end{pmatrix}. Alternatively, BC = \begin{pmatrix} 111 & 122 \\ 151 & 166 \end{pmatrix}, and A(BC) = \begin{pmatrix} 413 & 454 \\ 937 & 1030 \end{pmatrix}, confirming equality. In chains of matrix multiplications, such as A_1 A_2 \dots A_k, associativity allows different parenthesizations; the result is the same, but the computational cost in floating-point operations varies with the order. Multiplying an m \times n matrix by an n \times p matrix naively costs about mnp scalar multiplications, so when the factors in a chain have different shapes, choosing a good parenthesization (the matrix chain ordering problem, solvable by dynamic programming) can greatly reduce the total operation count. Associativity underpins similarity transformations, where for an invertible matrix P, the matrix P^{-1} A P represents the linear transformation corresponding to A with respect to a change of basis given by the columns of P. This preserves eigenvalues and other intrinsic properties of the transformation.
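
The following NumPy sketch verifies associativity on the example above and illustrates the cost remark with a hypothetical chain of rectangular matrices X (10 × 100), Y (100 × 5), and Z (5 × 50), counting roughly m·n·p scalar multiplications per product.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[9, 10], [11, 12]])

print(np.array_equal((A @ B) @ C, A @ (B @ C)))   # True: (AB)C = A(BC)

# Parenthesization does not change the result, but it can change the cost:
# multiplying an m x n matrix by an n x p matrix costs roughly m*n*p
# scalar multiplications.  For X (10x100), Y (100x5), Z (5x50):
def cost(m, n, p):
    return m * n * p

print(cost(10, 100, 5) + cost(10, 5, 50))    # (XY)Z:  5000 + 2500  = 7500
print(cost(100, 5, 50) + cost(10, 100, 50))  # X(YZ): 25000 + 50000 = 75000
```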

Transpose and Conjugate Properties

One fundamental property of the transpose operation in matrix multiplication is that the transpose of a product reverses the order of the factors: for compatible matrices A and B, (AB)^T = B^T A^T. This reversal highlights the non-commutativity of matrix multiplication, as the transpose effectively "flips" the roles of rows and columns in the product. To verify this property entrywise, consider the (i,j)-entry of (AB)^T. By definition, this is the (j,i)-entry of AB, given by \sum_k a_{jk} b_{ki}. The (i,j)-entry of B^T A^T is instead \sum_k (B^T)_{ik} (A^T)_{kj} = \sum_k b_{ki} a_{jk}, which matches the expression above after reindexing the summation. Thus, the entries coincide, confirming the property. For matrices over the complex numbers, the entrywise complex conjugate operation—denoted \overline{A}, with each entry a_{ij} replaced by its complex conjugate \overline{a_{ij}} (the notation A^* is also used for this, although in many texts A^* instead denotes the conjugate transpose)—preserves the order in products: \overline{AB} = \overline{A} \, \overline{B}. This follows directly from the bilinearity of matrix multiplication, as conjugation distributes over addition and scalar multiplication, and the conjugate of a scalar product is the product of the conjugates. In contrast, the Hermitian adjoint (or conjugate transpose), denoted A^\dagger = \overline{A}^T, reverses the order: for compatible complex matrices A and B, (AB)^\dagger = B^\dagger A^\dagger. This operation combines transposition and entrywise conjugation, inheriting the reversal from the transpose while accounting for complex entries. As an illustrative example, consider the 2×2 complex matrices A = \begin{pmatrix} 1+i & 2 \\ 3 & 4-i \end{pmatrix} and B = \begin{pmatrix} i & 1 \\ 2-i & 3 \end{pmatrix}. Their product is AB = \begin{pmatrix} 3-i & 7+i \\ 7-3i & 15-3i \end{pmatrix}. The Hermitian adjoint of AB is (AB)^\dagger = \begin{pmatrix} 3+i & 7+3i \\ 7-i & 15+3i \end{pmatrix}. Computing separately, A^\dagger = \begin{pmatrix} 1-i & 3 \\ 2 & 4+i \end{pmatrix} and B^\dagger = \begin{pmatrix} -i & 2+i \\ 1 & 3 \end{pmatrix}, then B^\dagger A^\dagger = \begin{pmatrix} 3+i & 7+3i \\ 7-i & 15+3i \end{pmatrix}, matching (AB)^\dagger and confirming the reversal property.
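
These reversal and conjugation rules can be checked numerically. The sketch below (NumPy, for illustration; 1j denotes the imaginary unit) uses the complex matrices from the example above.

```python
import numpy as np

A = np.array([[1 + 1j, 2], [3, 4 - 1j]])
B = np.array([[1j, 1], [2 - 1j, 3]])

# Transpose reverses the order of a product.
print(np.allclose((A @ B).T, B.T @ A.T))                         # True

# Entrywise conjugation preserves the order.
print(np.allclose(np.conj(A @ B), np.conj(A) @ np.conj(B)))      # True

# The conjugate transpose (Hermitian adjoint) again reverses the order.
print(np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T))    # True
print((A @ B).conj().T)   # matches the matrix computed in the text
```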

Applications

Linear Transformations

In linear algebra, an m \times n matrix A represents a linear transformation from the vector space \mathbb{R}^n to \mathbb{R}^m. The matrix-vector product Ax, where x \in \mathbb{R}^n, computes the coordinates of the image of x under this transformation with respect to the standard bases. Matrix multiplication corresponds to the composition of such linear transformations. Suppose B is a p \times q matrix representing a linear map from \mathbb{R}^q to \mathbb{R}^p, and A is an m \times p matrix representing a map from \mathbb{R}^p to \mathbb{R}^m. Then the product AB is an m \times q matrix whose action on a vector x \in \mathbb{R}^q satisfies (AB)x = A(Bx), which is the composition of the two maps, denoted A \circ B. This equivalence holds because linear transformations preserve vector addition and scalar multiplication, and matrix multiplication encodes these operations accordingly. A prominent geometric example is rotation in the plane. The linear transformation that rotates vectors in \mathbb{R}^2 counterclockwise by an angle \theta around the origin is represented by the rotation matrix \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}. Applying this matrix to the standard basis vectors yields the images (\cos \theta, \sin \theta) and (-\sin \theta, \cos \theta), confirming the rotation. The composition of two rotations, first by \phi and then by \theta, is represented by the matrix product of the corresponding rotation matrices, resulting in a single rotation by \theta + \phi. This multiplicative structure simplifies the analysis of successive geometric transformations, such as in computer graphics and physics simulations.
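
As a small illustration (Python with NumPy; the specific angles are arbitrary choices), composing a rotation by \phi with a rotation by \theta via matrix multiplication reproduces the single rotation by \theta + \phi.

```python
import numpy as np

def rotation(theta):
    """2x2 matrix for a counterclockwise rotation by theta radians."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

phi, theta = np.pi / 6, np.pi / 4
composed = rotation(theta) @ rotation(phi)     # rotate by phi, then by theta
direct = rotation(theta + phi)                 # single rotation by theta + phi

print(np.allclose(composed, direct))           # True
```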

Systems of Linear Equations

In linear algebra, a system of m linear equations in n unknowns can be compactly represented in matrix form as Ax = b, where A is the m \times n coefficient matrix whose entries are the coefficients of the variables, x is the n \times 1 column vector of unknown variables, and b is the m \times 1 column vector of constant terms on the right-hand side of the equations. This formulation leverages matrix-vector multiplication, where the i-th entry of Ax is the dot product of the i-th row of A with the vector x, equaling the i-th entry of b. Solving such systems often involves Gaussian elimination, a method that systematically simplifies the augmented matrix [A \mid b] through row operations to reach row echelon form. These row operations correspond to left-multiplication of the augmented matrix by elementary matrices, which are identity matrices modified by a single row operation, preserving the solution set while transforming A into an upper triangular matrix U such that Ux = c for some modified c. Back-substitution then yields the solution from this triangular system. For example, consider the 2 \times 2 system \begin{cases} 2x + 3y = 8 \\ 4x + 5y = 14 \end{cases} with coefficient matrix A = \begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix} and b = \begin{pmatrix} 8 \\ 14 \end{pmatrix}. The solution is x = 1, y = 2. To verify, compute Ax: A \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 \cdot 1 + 3 \cdot 2 \\ 4 \cdot 1 + 5 \cdot 2 \end{pmatrix} = \begin{pmatrix} 8 \\ 14 \end{pmatrix} = b.
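
A minimal sketch of the same system in NumPy (np.linalg.solve performs an LU-based elimination through LAPACK); the final line verifies the solution by the matrix-vector product Ax.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
b = np.array([8.0, 14.0])

x = np.linalg.solve(A, b)     # solve Ax = b
print(x)                      # [1. 2.]
print(A @ x)                  # [ 8. 14.] -- reproduces b, verifying the solution
```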

Dot Products and Forms

Matrix multiplication induces bilinear forms on vector spaces, generalizing the standard dot product. For vectors \mathbf{x}, \mathbf{y} \in \mathbb{R}^n and an n \times n matrix A, the expression \mathbf{x}^T A \mathbf{y} defines a bilinear form, which is linear in both arguments: \mathbf{x}^T A \mathbf{y} = \sum_{i=1}^n \sum_{j=1}^n x_i a_{ij} y_j. This form is symmetric if A = A^T, meaning \mathbf{x}^T A \mathbf{y} = \mathbf{y}^T A \mathbf{x} for all \mathbf{x}, \mathbf{y}. Over complex vector spaces, the analogous structure is a sesquilinear form, defined as \mathbf{x}^\dagger A \mathbf{y}, where \dagger denotes the conjugate transpose (also called the Hermitian adjoint). This form is linear in \mathbf{y} and conjugate-linear in \mathbf{x}: \mathbf{x}^\dagger A \mathbf{y} = \sum_{i=1}^n \sum_{j=1}^n \overline{x_i} a_{ij} y_j, with the overline indicating complex conjugation. If A is Hermitian (A = A^\dagger), the form satisfies \mathbf{x}^\dagger A \mathbf{y} = \overline{\mathbf{y}^\dagger A \mathbf{x}}. Such forms connect directly to inner products when A is Hermitian and positive definite, meaning \mathbf{x}^\dagger A \mathbf{x} > 0 for all nonzero \mathbf{x} \in \mathbb{C}^n. In this case, \langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^\dagger A \mathbf{y} defines a valid inner product on the space, inducing a norm and geometry via the associated positive definiteness. A special case is the quadratic form \mathbf{x}^T A \mathbf{x}, which arises when \mathbf{y} = \mathbf{x} in the real bilinear case (or analogously in the complex sesquilinear setting). In physics, quadratic forms model potential energy in mechanical systems; for instance, the Lagrangian for small oscillations expresses kinetic and potential energies as quadratic forms in coordinates and velocities, enabling analysis via normal modes.
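
For concreteness, a short NumPy example (the symmetric matrix and the vectors are arbitrary illustrative choices) evaluates the bilinear form x^T A y, checks its symmetry, and computes the associated quadratic form.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # symmetric (and positive definite)
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

bilinear_xy = x @ A @ y           # x^T A y
bilinear_yx = y @ A @ x           # y^T A x
print(bilinear_xy, bilinear_yx)   # equal because A is symmetric

quadratic = x @ A @ x             # quadratic form x^T A x
print(quadratic > 0)              # True on this example (positive definiteness)
```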

Economic Models

In economic modeling, matrix multiplication is fundamental to the Leontief input-output model, which quantifies intersectoral dependencies in production and resource allocation across an economy. The model uses an input coefficient matrix A, where each entry a_{ij} represents the amount of output from sector i required as input to produce one unit of output from sector j. The total output vector \mathbf{x} is given by the equation \mathbf{x} = A \mathbf{x} + \mathbf{d}, where \mathbf{d} is the vector of final demand for goods and services outside the production process. Rearranging yields (I - A) \mathbf{x} = \mathbf{d}, solved as \mathbf{x} = (I - A)^{-1} \mathbf{d}; here, the Leontief inverse matrix (I - A)^{-1} is computed via matrix operations, with its entries capturing the total (direct and indirect) output multipliers needed to satisfy the demand through successive rounds of production. This approach enables economists to compute the total production required in each sector to meet exogenous demand, accounting for intermediate inputs via matrix multiplication. For instance, the (i,j)-th entry of (I - A)^{-1} indicates the total output from sector i needed per unit of final demand from sector j, highlighting ripple effects in resource allocation. The model was developed by Wassily Leontief in his seminal 1936 paper, "Quantitative Input and Output Relations in the Economic Systems of the United States," which applied it to analyze the U.S. economy divided into 44 industries. Leontief received the Nobel Prize in Economic Sciences in 1973 for pioneering this input-output method and its contributions to understanding economic structures. To illustrate, consider a two-sector economy with agriculture (sector 1) and manufacturing (sector 2), and input coefficient matrix A = \begin{pmatrix} 0.2 & 0.3 \\ 0.1 & 0.4 \end{pmatrix}, where 0.2 units of agricultural output are needed per unit of agricultural production, 0.3 units per unit of manufacturing, and so on. Suppose the final demand is \mathbf{d} = \begin{pmatrix} 100 \\ 80 \end{pmatrix} (in appropriate units). Then, I - A = \begin{pmatrix} 0.8 & -0.3 \\ -0.1 & 0.6 \end{pmatrix}, with determinant 0.8 \times 0.6 - (-0.3) \times (-0.1) = 0.45. The inverse is (I - A)^{-1} = \frac{1}{0.45} \begin{pmatrix} 0.6 & 0.3 \\ 0.1 & 0.8 \end{pmatrix} = \begin{pmatrix} 1.\overline{3} & 0.6\overline{6} \\ 0.2\overline{2} & 1.7\overline{7} \end{pmatrix}. The required production vector is \mathbf{x} = (I - A)^{-1} \mathbf{d} = \begin{pmatrix} 1.\overline{3} \times 100 + 0.6\overline{6} \times 80 \\ 0.2\overline{2} \times 100 + 1.7\overline{7} \times 80 \end{pmatrix} \approx \begin{pmatrix} 186.7 \\ 164.4 \end{pmatrix}, indicating the total output needed from each sector to meet the demand after all intermediate uses. Despite its influence, the Leontief model has key limitations: it assumes fixed technical coefficients with no substitution between inputs, constant returns to scale regardless of production levels, and a static framework without time dynamics or capacity constraints. These assumptions simplify resource allocation analysis but restrict its applicability to more flexible or evolving economies.
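
The two-sector example can be reproduced directly with NumPy (an illustrative sketch only; the coefficients mirror the matrices above):

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])               # input coefficient matrix
d = np.array([100.0, 80.0])              # final demand

I = np.eye(2)
leontief_inverse = np.linalg.inv(I - A)  # (I - A)^{-1}
x = leontief_inverse @ d                 # total output needed in each sector

print(np.round(leontief_inverse, 4))     # [[1.3333 0.6667] [0.2222 1.7778]]
print(np.round(x, 1))                    # [186.7 164.4]
```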

Advanced Topics

Matrix Powers

In linear algebra, the k-th power of a square matrix A, denoted A^k where k is a non-negative integer, is defined as the result of multiplying A by itself k times. Specifically, A^k = A \cdot A \cdot \ldots \cdot A (k factors), with the base cases A^1 = A and A^0 = I, where I is the n \times n identity matrix if A is n \times n. This definition extends the familiar notion of scalar exponentiation to matrices, preserving many algebraic properties under matrix multiplication. Computing higher powers directly by repeated multiplication can be inefficient for large k, as it requires O(k) multiplications. However, the associativity of matrix multiplication allows for different parenthesizations of the product, enabling more efficient algorithms such as exponentiation by squaring, which reduces the number of multiplications to O(\log k). For instance, A^4 = (A^2) \cdot (A^2), where A^2 = A \cdot A, requires only two multiplications (one to form A^2 and one to square it) instead of the three needed by repeated multiplication. This leverages the associative property without altering the result. A notable application of matrix powers arises in generating sequences like the Fibonacci numbers. Consider the matrix F = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}. Raising it to the n-th power yields F^n = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}, where F_n is the n-th Fibonacci number (with F_1 = 1, F_2 = 1). This connection allows efficient computation of Fibonacci terms via matrix exponentiation, demonstrating the utility of powers in combinatorial problems. For diagonalizable matrices, powers can be simplified significantly. If A = P D P^{-1} where D is diagonal, then A^k = P D^k P^{-1}, and D^k is obtained by raising each diagonal entry of D to the k-th power, which is computationally straightforward. This method avoids repeated full matrix multiplications and is particularly useful for large exponents or iterative applications.
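
A sketch of exponentiation by squaring in Python with NumPy (mat_pow is a hypothetical helper name, included for illustration), applied to the Fibonacci matrix:

```python
import numpy as np

def mat_pow(A, k):
    """Exponentiation by squaring: computes A^k with O(log k) multiplications."""
    result = np.eye(A.shape[0], dtype=A.dtype)   # A^0 = I
    base = A.copy()
    while k > 0:
        if k % 2 == 1:
            result = result @ base
        base = base @ base
        k //= 2
    return result

F = np.array([[1, 1],
              [1, 0]])
print(mat_pow(F, 10))    # [[89 55] [55 34]]  i.e.  F_11, F_10; F_10, F_9
```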

Abstract Algebra Perspective

In abstract algebra, the set of all n \times n matrices over a ring R, denoted M_n(R), forms a semigroup under matrix multiplication, as the operation is closed and associative for all elements in the set. When R has a multiplicative identity, the identity matrix I_n belongs to M_n(R) and serves as a two-sided identity element, so the structure is in fact a monoid. When R is a field F, M_n(F) equipped with both matrix addition and multiplication constitutes a ring, known as the matrix ring over F. This ring is non-commutative for n \geq 2, since there exist matrices A, B \in M_n(F) such that AB \neq BA; for instance, the standard basis matrices E_{12} and E_{21} satisfy E_{12} E_{21} = E_{11} while E_{21} E_{12} = E_{22}. The ring operations satisfy distributivity, with the matrices forming an abelian group under addition whose identity is the zero matrix, and with multiplication retaining the associativity of the multiplicative semigroup. The units in the ring M_n(F)—that is, the elements with multiplicative inverses—are precisely the invertible matrices, which form the general linear group \mathrm{GL}(n, F) under matrix multiplication. This group consists of all n \times n matrices over F with non-zero determinant, and it is non-abelian for n \geq 2, reflecting the underlying non-commutativity of the ring. The algebraic framework for matrices as structured objects with addition and non-commutative multiplication was established by Arthur Cayley in his 1858 memoir, where he treated matrices as "single quantities" capable of being added, multiplied, and subjected to powers and other operations, laying the groundwork for modern matrix algebra and its extensions to continuous groups.

Computational Complexity

The standard algorithm for multiplying two n \times n matrices over a field requires \Theta(n^3) scalar multiplications and additions, arising from three nested loops that compute each entry of the product matrix as a sum of n terms. In 1969, Volker Strassen developed the first sub-cubic algorithm using a recursive divide-and-conquer approach on 2 \times 2 blocks, reducing the number of multiplications from 8 to 7 and yielding an asymptotic complexity of O(n^{\log_2 7}) \approx O(n^{2.807}). This method partitions larger matrices into quadrants, performs seven recursive multiplications on submatrices, and combines results with additions and subtractions, with the exponent \log_2 7 arising from the recurrence T(n) = 7T(n/2) + O(n^2). Building on Strassen's idea, Don Coppersmith and Shmuel Winograd introduced a family of algorithms in 1990 that further lowered the exponent to approximately 2.376 through sophisticated block decompositions and asymptotic analysis of rectangular matrix multiplications. Subsequent refinements, including laser methods that eliminate inefficiencies in prior constructions, have improved the best known upper bound to O(n^{2.371339}) as of 2024. These advances rely on algebraic techniques to find minimal tensor decompositions for matrix multiplication, but they introduce large constant factors and recursive overhead, limiting their practicality to extremely large n (often beyond 10^4), where optimized implementations of the naive or Strassen algorithms remain faster in real-world computing environments. No algorithm achieving O(n^2) complexity is known; such a running time would match the trivial \Omega(n^2) bound imposed by simply reading the input and writing the output, and whether the matrix multiplication exponent \omega equals 2 remains a major open problem. Fine-grained complexity results also relate matrix multiplication to problems such as 3SUM (deciding whether three elements of an array of n integers sum to zero, conjectured to require roughly quadratic time) and triangle detection in graphs, so sufficiently fast matrix multiplication algorithms would yield improved algorithms for those problems through known reductions.
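
For illustration only, here is a compact Python/NumPy sketch of Strassen's recursion for square matrices whose order is a power of two; the cutoff parameter (an arbitrary choice here) switches to the ordinary product on small blocks, reflecting the practical overhead noted above.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen multiplication for square matrices whose order is a power of two.
    A rough sketch for illustration; below `cutoff` it falls back to NumPy."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

X = np.random.rand(128, 128)
Y = np.random.rand(128, 128)
print(np.allclose(strassen(X, Y), X @ Y))   # True
```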

Generalizations

Block Matrices

Block matrices, also known as partitioned matrices, divide a larger matrix into smaller submatrices or blocks, facilitating structured computations especially for large-scale problems. Suppose matrix A is an m \times n matrix partitioned into blocks as A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, where A_{11} is p \times q, A_{12} is p \times (n-q), A_{21} is (m-p) \times q, and A_{22} is (m-p) \times (n-q). Similarly, matrix B is an n \times r matrix partitioned compatibly as B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, ensuring the column dimensions of corresponding blocks in A match the row dimensions in B. The product C = AB can then be computed as a block matrix C = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}, where each block C_{ij} is obtained by standard matrix multiplication of the relevant blocks from A and B. The block multiplication formula mirrors the scalar case, leveraging distributivity over addition. Specifically, \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{pmatrix}, provided the block dimensions are compatible for multiplication. This approach treats blocks as "entries" in a larger scalar-like multiplication, preserving the associative and distributive properties of matrix operations. For illustration, consider two 4×4 matrices partitioned into 2×2 blocks. Let A = \begin{pmatrix} 1 & 2 & 0 & 0 \\ 3 & 4 & 0 & 0 \\ 0 & 0 & 5 & 6 \\ 0 & 0 & 7 & 8 \end{pmatrix} = \begin{pmatrix} \begin{matrix} 1 & 2 \\ 3 & 4 \end{matrix} & \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \\ \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} & \begin{matrix} 5 & 6 \\ 7 & 8 \end{matrix} \end{pmatrix}, and B = \begin{pmatrix} 9 & 10 & 0 & 0 \\ 11 & 12 & 0 & 0 \\ 0 & 0 & 13 & 14 \\ 0 & 0 & 15 & 16 \end{pmatrix} = \begin{pmatrix} \begin{matrix} 9 & 10 \\ 11 & 12 \end{matrix} & \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \\ \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} & \begin{matrix} 13 & 14 \\ 15 & 16 \end{matrix} \end{pmatrix}. The top-left block of the product C_{11} = A_{11}B_{11} + A_{12}B_{21} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 9 & 10 \\ 11 & 12 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 31 & 34 \\ 71 & 78 \end{pmatrix}. Other blocks follow similarly, yielding the full product without computing the entire matrix element-wise. This block structure is particularly advantageous for large-scale computations, as it enables parallelization by distributing block multiplications across processors or cores, improving efficiency in numerical software. For instance, MATLAB's matrix multiplication routines, built on optimized BLAS and LAPACK libraries, employ block algorithms to enhance cache utilization and support multi-threaded execution for faster performance on modern hardware.
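
The blockwise computation can be checked against the full product in a few lines of NumPy (an illustrative sketch using the 4 × 4 matrices above):

```python
import numpy as np

A = np.array([[1, 2, 0, 0],
              [3, 4, 0, 0],
              [0, 0, 5, 6],
              [0, 0, 7, 8]])
B = np.array([[9, 10, 0, 0],
              [11, 12, 0, 0],
              [0, 0, 13, 14],
              [0, 0, 15, 16]])

# Partition into 2x2 blocks.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

C_blocks = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                     [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])

print(np.array_equal(C_blocks, A @ B))   # True: blockwise product matches A @ B
print(A11 @ B11 + A12 @ B21)             # [[31 34] [71 78]] -- the top-left block
```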

Tensor and Kronecker Products

The Kronecker product provides a fundamental generalization of matrix multiplication to higher-dimensional structures, effectively constructing a larger matrix from two smaller ones in a way that preserves linear algebraic operations on tensor product spaces. For an m \times n matrix A = (a_{ij}) and a p \times q matrix B, the Kronecker product A \otimes B is defined as the mp \times nq block matrix whose (i,j)-th block is the scalar a_{ij} times B, i.e., A \otimes B = \begin{pmatrix} a_{11} B & a_{12} B & \cdots & a_{1n} B \\ a_{21} B & a_{22} B & \cdots & a_{2n} B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} B & a_{m2} B & \cdots & a_{mn} B \end{pmatrix}. This operation, named after the German mathematician Leopold Kronecker though first described by Johann Georg Zehfuss in 1858, enables the representation of multilinear maps and transformations on product vector spaces. A distinctive feature of the Kronecker product is its compatibility with standard matrix multiplication through the mixed-product property: if A is m \times r, C is r \times n, B is p \times s, and D is s \times q, then (A \otimes B)(C \otimes D) = (AC) \otimes (BD). This property, which holds whenever the dimensions allow the respective products to be defined, facilitates efficient computation of products in structured matrices and underpins algorithms for solving systems involving tensor-structured data. In multilinear algebra, the Kronecker product is essential for vectorizing tensors and analyzing multi-way interactions, such as in the decomposition of higher-order arrays into matrix factors while maintaining the underlying multilinear structure. In quantum information theory, it constructs the operator space for composite quantum systems; for instance, the two-qubit Hilbert space, where Bell states like \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle) reside, is represented using Kronecker products of single-qubit bases to model entanglement. As an alternative generalization, the Hadamard product (or Schur product) performs element-wise multiplication on matrices of the same dimensions, yielding (A \circ B)_{ij} = a_{ij} b_{ij}. The Hadamard product is associative and, unlike standard matrix multiplication, commutative, but it lacks a straightforward mixed-product rule with ordinary multiplication—e.g., (A \circ B)C \neq (AC) \circ (BC) in general—making it less suitable for compositional extensions of matrix multiplication.
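
A short NumPy check of the mixed-product property on randomly generated matrices of compatible sizes (the shapes are arbitrary illustrative choices), together with a Hadamard product for contrast:

```python
import numpy as np

A = np.random.rand(2, 3)    # m x r
C = np.random.rand(3, 2)    # r x n
B = np.random.rand(2, 2)    # p x s
D = np.random.rand(2, 2)    # s x q

lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
print(np.allclose(lhs, rhs))        # True: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)

# Hadamard (element-wise) product for same-shape matrices.
X = np.array([[1, 2], [3, 4]])
Y = np.array([[5, 6], [7, 8]])
print(X * Y)                        # [[ 5 12] [21 32]]
```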
