
Gaussian elimination

Gaussian elimination is a systematic method for solving systems of linear equations, determining the rank of a matrix, computing matrix inverses, and finding matrix determinants by transforming an augmented matrix into row echelon form using elementary row operations—such as swapping rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another—followed by back-substitution to obtain the solution. The method reduces the original system to an equivalent upper triangular form, where the variables can be solved sequentially from the bottom up, ensuring numerical stability when partial pivoting is employed to select the largest available pivot and avoid division by small numbers. Although commonly attributed to Carl Friedrich Gauss, who formalized its use in the early 19th century for solving problems in astronomy, the technique traces its origins to ancient China in the text Nine Chapters on the Mathematical Art around 200 BCE, where a similar row reduction method was applied to linear systems. The algorithm gained prominence in the West through contributions from figures like Isaac Newton in the 17th century and was extended by Wilhelm Jordan in 1888 into the Gauss-Jordan elimination variant, which fully reduces the matrix to reduced row echelon form for direct solution without back-substitution. In modern computational contexts, Gaussian elimination underpins numerical linear algebra libraries and is fundamental to fields like engineering, physics, and computer science, though it exhibits cubic O(n³) complexity for an n×n system, prompting optimizations such as LU decomposition for repeated solves. Its reliability and simplicity make it a cornerstone of introductory linear algebra, with pivoting strategies enhancing accuracy in floating-point implementations.

Fundamentals

Elementary row operations

Elementary row operations are the fundamental manipulations applied to matrices during Gaussian elimination to simplify systems of linear equations without altering their solutions. There are three types of these operations: (1) interchanging any two rows of the matrix, (2) multiplying all entries in a single row by a nonzero scalar c, and (3) adding a scalar multiple c of the entries in one row to the corresponding entries in another row. These operations are defined for an m \times n matrix A = (a_{ik}) as follows: for interchanging rows i and j, swap a_{ik} with a_{jk} for all k = 1 to n; for multiplying row i by c \neq 0, replace a_{ik} \leftarrow c a_{ik} for k = 1 to n; for adding c times row j to row i, replace a_{ik} \leftarrow a_{ik} + c a_{jk} for k = 1 to n. When applied to the augmented matrix [A \mid b] of a linear system Ax = b, these operations generate an equivalent system that preserves the original solution set. Interchanging rows merely reorders the equations, which does not affect the solutions; multiplying a row by a nonzero scalar c scales the equation by c on both sides, maintaining equality for any solution; adding a multiple of one row to another performs a valid algebraic manipulation that keeps the solution set unchanged, as the new equations are linear combinations of the originals. Each elementary row operation is reversible through another elementary operation, ensuring that the transformations are invertible. Specifically, interchanging rows i and j is its own inverse; multiplying row i by c \neq 0 is inverted by multiplying by c^{-1}; and adding c times row j to row i is inverted by adding -c times row j to row i. These operations correspond to left-multiplication by elementary matrices, which are obtained by applying the same operation to the identity matrix I_m; for instance, the matrix E for adding c times row j to row i has 1s on the diagonal, c in the (i,j)-entry, and 0s elsewhere, and EA yields the modified matrix. Since each elementary matrix E is invertible with E^{-1} also elementary, a sequence of operations represented by U = E_k \cdots E_1 is invertible, confirming that row-equivalent matrices share the same solution set. The motivation for these operations lies in their ability to simplify matrices while preserving the row space, the span of the row vectors, which remains unchanged under row interchanges, scalings, and additions. This preservation ensures that the linear dependencies among rows are maintained, allowing Gaussian elimination to reduce the matrix to row echelon form without losing essential structural information.
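As a small illustration of these definitions, the sketch below (assuming Python with NumPy; the helper names are chosen here for illustration) applies the three operation types and checks that a type (3) operation agrees with left-multiplication by the corresponding elementary matrix E.

import numpy as np

def swap_rows(M, i, j):
    R = M.copy(); R[[i, j]] = R[[j, i]]; return R      # type (1): interchange rows i and j

def scale_row(M, i, c):
    R = M.copy(); R[i] = c * R[i]; return R            # type (2): multiply row i by c != 0

def add_multiple(M, i, j, c):
    R = M.copy(); R[i] = R[i] + c * R[j]; return R     # type (3): add c times row j to row i

A = np.array([[2.0, 1.0], [4.0, 3.0]])
c, i, j = -2.0, 1, 0
E = np.eye(2); E[i, j] = c          # identity matrix with c placed in the (i, j) entry
print(add_multiple(A, i, j, c))     # [[2. 1.] [0. 1.]]
print(E @ A)                        # same result via the elementary matrix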

Row echelon form

A matrix is in row echelon form (REF) if it satisfies three conditions: all nonzero rows are above any rows of all zeros; the leading entry (pivot) in each nonzero row is strictly to the right of the leading entry in the row immediately above it; and all entries below each leading entry are zeros. These forms are achieved through a sequence of elementary row operations. Consider the following 3×3 matrix, which is not in REF because the leading entry in the third row (5) appears in a column to the left of the leading entry in the second row (4): \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \\ 0 & 5 & 0 \end{pmatrix} In contrast, the matrix below is in REF, with pivots at positions (1,1) and (2,3) and a zero row at the bottom; note that the leading entries need not be 1 in REF: \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \\ 0 & 0 & 0 \end{pmatrix} The reduced row echelon form (RREF) builds on REF by adding two further requirements: each leading entry is 1 (a leading 1); and each leading 1 is the only nonzero entry in its column. A key property of RREF is its uniqueness: every matrix is row equivalent to exactly one matrix in RREF, which uniquely represents its row space. In REF (or RREF), the positions of the pivots identify the basic variables in systems derived from the matrix, with the number of pivots equal to the dimension of the row space. The number of nonzero rows in an REF equals the row rank of the matrix, which is the maximum number of linearly independent rows and remains invariant under elementary row operations. This equivalence holds because the nonzero rows in REF are linearly independent, providing a basis for the row space.
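These definitions can be checked in a computer algebra system; a minimal sketch assuming SymPy is available, where Matrix.rref returns the unique RREF together with the pivot column indices:

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [0, 0, 4],
            [0, 5, 0]])      # the first example above: not in REF
R, pivot_cols = A.rref()      # unique reduced row echelon form and pivot columns
print(R)                      # here the 3x3 identity matrix, since A is invertible
print(pivot_cols)             # (0, 1, 2): three pivots, so the row rank is 3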

The Algorithm

Forward elimination phase

The forward elimination phase of Gaussian elimination systematically transforms the augmented matrix of a linear system into an upper triangular form by eliminating the entries below the pivots using elementary row operations. This phase processes the columns from left to right, zeroing out the entries below each pivot, so that after completion the matrix is in row echelon form, which simplifies subsequent solving. The procedure begins by considering the augmented matrix [A | b], where A is the n \times n coefficient matrix and b is the right-hand side vector. For each pivot position k from 1 to n-1:
  1. Identify the pivot a_{kk}; if it is zero, perform partial pivoting by swapping row k with a row i > k that has a nonzero entry in column k to ensure a_{kk} \neq 0 and avoid division by zero.
  2. For each row i from k+1 to n, compute the multiplier m_{ik} = a_{ik} / a_{kk}.
  3. Subtract m_{ik} times row k from row i to zero out the entry a_{ik}, updating the remaining entries in row i and the corresponding right-hand side entry b_i accordingly.
This elimination step uses the formula for updating entries: for each column j from k to n, a_{ij} \leftarrow a_{ij} - m_{ik} \cdot a_{kj}, and similarly for the augmented column: b_i \leftarrow b_i - m_{ik} \cdot b_k. Partial pivoting, which selects the row with the largest absolute value in column k at or below the diagonal for swapping, enhances numerical stability by minimizing error growth due to finite precision arithmetic. The following pseudocode outlines the forward elimination phase for an n \times n system, assuming partial pivoting is implemented separately if needed:
for k = 1 to n-1
    if a_{kk} == 0
        swap row k with a row i > k for which a_{ik} ≠ 0
    end
    for i = k+1 to n
        m = a_{ik} / a_{kk}
        for j = k to n
            a_{ij} = a_{ij} - m * a_{kj}
        end
        b_i = b_i - m * b_k
    end
end
By progressively zeroing out the subdiagonal elements in each column, this phase reduces the original system to an equivalent upper triangular system U \mathbf{x} = \mathbf{c}, where U has nonzero diagonal entries and zeros below the diagonal. This triangular structure eliminates the need to solve for multiple variables simultaneously in lower rows, enabling efficient back-substitution to find the solution vector \mathbf{x}.
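For concreteness, the following is a minimal runnable sketch of the same phase, assuming Python with NumPy; unlike the pseudocode above it uses partial pivoting (largest entry in magnitude) rather than a simple zero check, and the function name is chosen here for illustration.

import numpy as np

def forward_eliminate(A, b):
    # Reduce the square system Ax = b to an equivalent upper triangular system (sketch).
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: move the largest |entry| of column k (rows k..n-1) into the pivot position.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if A[p, k] == 0:
            raise ValueError("matrix is singular to working precision")
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier m_{ik}
            A[i, k:] -= m * A[k, k:]     # zeros a_{ik} and updates the rest of row i
            b[i] -= m * b[k]
    return A, b                          # U and the updated right-hand side c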

Back-substitution phase

The back-substitution phase follows the forward elimination phase, where the augmented matrix has been transformed into an upper triangular form with nonzero pivots on the diagonal. This phase solves for the unknowns by starting from the bottom row and working upwards, substituting previously computed values into earlier equations to isolate each variable sequentially. In a system of n equations with n unknowns, assuming the matrix is square and invertible after elimination, the procedure begins with the last equation, which simplifies to x_n = b_n' / a_{nn}', where b_n' and a_{nn}' are the updated right-hand side and pivot entry, respectively. For each preceding row i from n-1 down to 1, the i-th variable is solved as x_i = \frac{b_i' - \sum_{j=i+1}^n a_{ij}' x_j}{a_{ii}'}, where the sum subtracts the contributions of the already-determined variables x_{j} for j > i. This backward pass yields the unique solution vector x in O(n^2) operations. For underdetermined systems, where the number of equations m < n or rank deficiency occurs, back-substitution identifies free variables corresponding to non-pivot columns in the row echelon form. These free variables can be assigned arbitrary values (often set to zero to obtain a particular solution), while pivot variables are expressed in terms of the free ones through the substitution process. The general solution is then the particular solution plus a linear combination of basis vectors spanning the null space, determined by solving the homogeneous system with free variables as parameters. Unlike full reduction to reduced row echelon form (RREF), which eliminates entries above pivots for direct solution reading, back-substitution on the row echelon form suffices for extracting solutions from triangular systems and is computationally less intensive for this purpose.
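A matching sketch of this backward pass, again assuming NumPy; the upper triangular system used for the check is the one derived in the worked example of the next section.

import numpy as np

def back_substitute(U, c):
    # Solve Ux = c for an upper triangular U with nonzero diagonal entries (sketch).
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                       # last equation first
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]   # x_i = (c_i - sum_{j>i} u_ij x_j) / u_ii
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 1.0,  1.0],
              [0.0, 0.0,  1.0]])
c = np.array([8.0, 2.0, -1.0])
print(back_substitute(U, c))   # [ 2.  3. -1.]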

Illustrative example

To illustrate the Gaussian elimination algorithm, consider the system of linear equations Ax = b, where A = \begin{bmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{bmatrix}, \quad b = \begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix}. The goal is to solve for x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}. Begin by forming the augmented matrix [A \mid b]: \begin{bmatrix} 2 & 1 & -1 & \mid & 8 \\ -3 & -1 & 2 & \mid & -11 \\ -2 & 1 & 2 & \mid & -3 \end{bmatrix}. In the forward elimination phase, apply elementary row operations to transform this into row echelon form (upper triangular). First, use the pivot in position (1,1), which is 2. Eliminate the entries below it in column 1. The multiplier for row 2 is -3/2, so add (3/2) times row 1 to row 2: \text{Row 2} \leftarrow \text{Row 2} + \frac{3}{2} \cdot \text{Row 1} \implies \begin{bmatrix} 0 & \frac{1}{2} & \frac{1}{2} & \mid & 1 \end{bmatrix}. The multiplier for row 3 is -2/2 = -1, so add row 1 to row 3: \text{Row 3} \leftarrow \text{Row 3} + \text{Row 1} \implies \begin{bmatrix} 0 & 2 & 1 & \mid & 5 \end{bmatrix}. The matrix is now \begin{bmatrix} 2 & 1 & -1 & \mid & 8 \\ 0 & \frac{1}{2} & \frac{1}{2} & \mid & 1 \\ 0 & 2 & 1 & \mid & 5 \end{bmatrix}. Next, use the pivot in position (2,2), which is 1/2. Eliminate the entry below it in column 2. The multiplier for row 3 is 2 / (1/2) = 4, so subtract 4 times row 2 from row 3: \text{Row 3} \leftarrow \text{Row 3} - 4 \cdot \text{Row 2} \implies \begin{bmatrix} 0 & 0 & -1 & \mid & 1 \end{bmatrix}. The matrix is now \begin{bmatrix} 2 & 1 & -1 & \mid & 8 \\ 0 & \frac{1}{2} & \frac{1}{2} & \mid & 1 \\ 0 & 0 & -1 & \mid & 1 \end{bmatrix}. For clarity in back-substitution, scale row 2 by 2 and row 3 by -1 (optional normalization steps): \text{Row 2} \leftarrow 2 \cdot \text{Row 2} \implies \begin{bmatrix} 0 & 1 & 1 & \mid & 2 \end{bmatrix}, \quad \text{Row 3} \leftarrow -1 \cdot \text{Row 3} \implies \begin{bmatrix} 0 & 0 & 1 & \mid & -1 \end{bmatrix}. The upper triangular augmented matrix is \begin{bmatrix} 2 & 1 & -1 & \mid & 8 \\ 0 & 1 & 1 & \mid & 2 \\ 0 & 0 & 1 & \mid & -1 \end{bmatrix}. In the back-substitution phase, solve from the bottom up. From row 3: x_3 = -1.
From row 2: x_2 + x_3 = 2 \implies x_2 - 1 = 2 \implies x_2 = 3.
From row 1: 2x_1 + x_2 - x_3 = 8 \implies 2x_1 + 3 - (-1) = 8 \implies 2x_1 + 4 = 8 \implies 2x_1 = 4 \implies x_1 = 2.
Thus, x = \begin{bmatrix} 2 \\ 3 \\ -1 \end{bmatrix}.
To verify, compute Ax: A x = \begin{bmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ -1 \end{bmatrix} = \begin{bmatrix} 4 + 3 + 1 \\ -6 - 3 - 2 \\ -4 + 3 - 2 \end{bmatrix} = \begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix} = b.
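As a quick cross-check of this worked example, a minimal sketch assuming NumPy is available; np.linalg.solve itself calls an LU-based (Gaussian elimination with partial pivoting) routine.

import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
x = np.linalg.solve(A, b)
print(x)                       # [ 2.  3. -1.]
print(np.allclose(A @ x, b))   # True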

Applications

Solving systems of linear equations

Gaussian elimination serves as a foundational method for solving systems of linear equations expressed as Ax = b, where A is an m \times n matrix, x is an n \times 1 vector of unknowns, and b is an m \times 1 vector. To apply the method, the system is represented by forming the augmented matrix [A \mid b], which combines the coefficient matrix A with the constant vector b as an additional column. The Gaussian elimination algorithm is then performed on this augmented matrix, utilizing elementary row operations to transform it into row echelon form (REF). The REF of the augmented matrix provides critical information for determining the nature of the solutions. The system is consistent if and only if there is no row of the form [0 \ 0 \ \cdots \ 0 \mid c] where c \neq 0, as such a row indicates an impossible equation 0 = c; if such a row appears, the system is inconsistent and has no solutions. For a consistent system, the solution is unique if A has full column rank, meaning there are n pivot positions in the REF, corresponding to no free variables. If the rank of A is less than n, the system has infinitely many solutions, with the number of free variables equal to n minus the rank; these free variables allow parametrization of the solution set. When solutions exist and are not unique, the general solution takes the form x = x_p + \sum c_i v_i, where x_p is a particular solution obtained by assigning zero to the free variables in the REF and back-substituting, the v_i form a basis for the null space of A (derived from the free columns in the REF), and the c_i are arbitrary scalars. For an underdetermined system where m < n and the rank is m, the solution can be parametrized by n - m free variables; for instance, in a system with two equations and three unknowns yielding one free variable t, the solution might express two variables in terms of t, such as x_1 = 2 + t, x_2 = 1 - t, x_3 = t, illustrating a line of solutions in the solution space. Compared to alternative methods like Cramer's rule, which explicitly uses determinants to solve square systems and is practical only for small n (e.g., n \leq 3) due to its rapidly growing computational cost, Gaussian elimination offers superior efficiency for larger systems through its O(n^3) time complexity, making it the preferred approach in practice.
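As an illustration of the underdetermined case just described, a sketch assuming SymPy; the two equations below are a hypothetical choice whose solution set matches the parametrization x_1 = 2 + t, x_2 = 1 - t, x_3 = t.

from sympy import Matrix, symbols, linsolve

x1, x2, x3 = symbols('x1 x2 x3')
A = Matrix([[1, 0, -1],
            [0, 1,  1]])      # x1 - x3 = 2 and x2 + x3 = 1
b = Matrix([2, 1])
# linsolve row-reduces [A | b] and reports the pivot variables in terms of the free variable x3.
print(linsolve((A, b), x1, x2, x3))   # {(x3 + 2, 1 - x3, x3)}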

Computing matrix inverses

Gaussian elimination can be adapted to compute the inverse of an invertible square matrix A \in \mathbb{R}^{n \times n} by forming the augmented matrix [A \mid I_n], where I_n is the n \times n identity matrix, and applying row operations to transform the left partition into the identity matrix; upon success, the right partition becomes A^{-1}. This process, often referred to as Gauss-Jordan elimination, extends the forward elimination and back-substitution phases of the standard algorithm to achieve reduced row echelon form (RREF) directly. The matrix A must be square and possess full rank, meaning it yields n nonzero pivots during elimination; if a zero pivot occurs after row swaps or if the process fails to produce the identity on the left, A is singular and has no inverse. The steps mirror the core Gaussian elimination algorithm but operate simultaneously on the n columns of the identity matrix as multiple right-hand sides. In the forward elimination phase, row operations are applied to eliminate entries below each pivot in the left partition, propagating the same operations to the right partition. This is followed by a backward elimination phase, where entries above each pivot are zeroed out, again updating the right partition accordingly, until the left side reaches RREF equivalent to I_n. From a formulaic perspective, the columns of A^{-1} are the unique solutions x_j to the systems A x_j = e_j for j = 1, \dots, n, where e_j denotes the j-th standard basis vector; solving these n systems concurrently via the augmented approach avoids redundant factorization. This method maintains the O(n^3) time complexity of standard Gaussian elimination for solving a single linear system, as the additional n columns introduce a constant factor increase in operations (approximately twice the flops for the elimination phases), but it efficiently computes the inverse by handling all right-hand sides in one pass rather than repeating the process n times separately. Space requirements are O(n^2) for the augmented matrix, comparable to storing A alone.
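A minimal sketch of this [A \mid I] procedure, assuming NumPy (illustrative only, not a production inverse routine):

import numpy as np

def invert(A):
    # Gauss-Jordan elimination on the augmented matrix [A | I]; the right half ends up as A^{-1}.
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        p = k + int(np.argmax(np.abs(M[k:, k])))   # partial pivoting
        if M[p, k] == 0:
            raise ValueError("matrix is singular")
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                            # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]             # clear column k above and below the pivot
    return M[:, n:]

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(invert(A))                                   # [[ 3. -1.] [-5.  2.]]
print(np.allclose(A @ invert(A), np.eye(2)))       # True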

Calculating determinants

Gaussian elimination provides a method to compute the determinant of a square matrix by transforming it into an upper triangular form while tracking the effects of row operations on the determinant value. The determinant remains unchanged when adding or subtracting a multiple of one row to another row, as this operation preserves the volume spanned by the row vectors. Scaling a row by a nonzero scalar c multiplies the determinant by c. Swapping two rows changes the sign of the determinant. To compute the determinant using Gaussian elimination, perform the forward elimination phase to reduce the matrix A to an upper triangular matrix U, applying row swaps for pivoting as needed and avoiding unnecessary scalings where possible. The determinant of A is then given by \det(A) = (-1)^k \prod_{i=1}^n u_{ii}, where k is the number of row swaps and u_{ii} are the diagonal entries of U, assuming no row scalings; if scalings by factors c_1, \dots, c_m occur, divide by the product \prod_j c_j. For singular matrices, a zero pivot encountered during elimination with no nonzero entry available below it for a swap indicates that the matrix is singular and \det(A) = 0. This approach is more efficient than cofactor expansion for matrices larger than 3×3, requiring O(n^3) operations compared to the exponential time of expansion, though floating-point implementations require partial pivoting to mitigate numerical instability from element growth.
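A sketch of this bookkeeping, assuming NumPy; it tracks row swaps and avoids row scalings, so the determinant is just the signed product of the pivots.

import numpy as np

def det_by_elimination(A):
    # Forward elimination with partial pivoting; det(A) = (-1)^swaps * product of U's diagonal entries.
    U = A.astype(float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(U[k:, k])))
        if U[p, k] == 0:
            return 0.0                     # no usable pivot in this column: A is singular
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign                   # each row swap flips the sign
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]   # row additions leave the determinant unchanged
    return sign * float(np.prod(np.diag(U)))

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
print(det_by_elimination(A), np.linalg.det(A))   # both approximately -1.0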

Determining rank and nullity

Gaussian elimination provides a systematic method to determine the rank of a matrix, defined as the number of nonzero rows in its row echelon form (REF), which equals the number of pivot positions after row reduction. To compute the rank, apply the Gaussian elimination algorithm to the matrix A (of size m \times n) to obtain its REF; the rank r is then the count of its pivots. The nullity of A, which is the dimension of its null space, follows directly from the rank-nullity theorem: nullity = n - r. This theorem states that for a linear transformation from \mathbb{R}^n to \mathbb{R}^m represented by A, the dimension of the domain equals the sum of the dimensions of the image (rank) and kernel (nullity). To find a basis for the column space of A, identify the pivot columns in the REF; the corresponding original columns of A form a basis, as row operations preserve linear dependence relations among columns. For the row space, the nonzero rows of the REF constitute a basis, since these rows are linearly independent and span the same space as the original rows. A basis for the null space is obtained by solving the homogeneous system A \mathbf{x} = \mathbf{0} using the REF, where the pivot variables are expressed in terms of the free (non-pivot) variables. For each free variable, set it to 1 while setting all other free variables to 0, then back-substitute to solve for the pivot variables; the resulting vectors form the basis. Consider a general m \times n matrix with rank r = 2, implying two pivot positions in the REF. The column space basis consists of the two original columns corresponding to these pivots. The null space basis has n - 2 vectors, each with 1 in one free variable position, 0 in the other free positions, and values in the pivot positions determined by back-substitution. The row space basis comprises the two nonzero rows from the REF.
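These quantities can be checked with a computer algebra system; a sketch assuming SymPy, using a hypothetical 3×4 matrix whose third row is the sum of the first two (so r = 2 and the nullity is 4 - 2 = 2).

from sympy import Matrix

A = Matrix([[1, 2, 1, 0],
            [2, 4, 0, 2],
            [3, 6, 1, 2]])     # row 3 = row 1 + row 2
R, pivots = A.rref()
print(A.rank(), pivots)         # 2 (0, 2): pivots in columns 1 and 3, so nullity = 4 - 2 = 2
print(A.columnspace())          # basis taken from the original pivot columns
print(A.nullspace())            # two basis vectors, one per free variable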

History

Origins with Carl Friedrich Gauss

Carl Friedrich Gauss developed the elimination method in the context of his astronomical research, particularly for computing planetary orbits using the method of least squares to handle observational errors. In 1809, he published Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientium, where he applied least squares to determine the orbit of the asteroid Ceres based on limited observations. This work addressed overdetermined systems of linear equations arising from astronomical data, minimizing the sum of squared residuals to find the best-fit parameters. Gauss provided the first explicit description of the systematic elimination procedure in this 1809 Latin publication, presenting it as a practical tool for solving equations derived from least-squares problems in astronomical observations. He described the process as "eliminationem vulgarem" (common elimination), applied to the coefficients of linear systems without symbolic manipulation of variables. This method transformed the augmented matrix through row operations to upper triangular form, enabling efficient solution of systems from multiple measurements. Although similar elimination techniques had been used in Europe earlier—for instance, by Isaac Newton in the 17th century for solving linear systems—Gauss's approach systematized the process independently of non-Western traditions. A key innovation in Gauss's method was the systematic forward elimination that avoided computing the full matrix inverse, instead focusing on progressive reduction to isolate variables sequentially, which was particularly suited to the positive-definite normal equations in orbital calculations. This reduced computational effort by approximately half compared to contemporaneous elimination techniques, as Gauss processed coefficients in a fixed alphabetic order (e.g., solving for x before y). The method was directly applied to planetary orbit determination, such as refining Ceres's elements from Heinrich Olbers's observations. In 1810, Gauss introduced an unnamed bracket notation, such as [xy] to denote sums like xy + x'y' + x''y'' + ..., which condensed the representation of quadratic forms and successive eliminations without expanding all terms. He later integrated the elimination more fully with his least-squares terminology in 1821–1823, referring to the resulting equations as "normal" equations and using the notation to facilitate handling the symmetric positive-definite matrices typical in least-squares astronomy. While similar elimination techniques for solving linear systems had been known in ancient China since around 200 BCE, as described in The Nine Chapters on the Mathematical Art using counting rods for rectangular arrays, and further refined in 17th-century Japan by Seki Takakazu with determinant concepts, there is no evidence of European awareness of these methods at the time. Gauss's formulation independently systematized the process for European mathematics, emphasizing its role in error minimization for scientific computation.

Developments in the 19th and 20th centuries

In the 19th century, Gaussian elimination became integrated into the developing field of linear algebra through the work of British mathematicians James Joseph Sylvester and Arthur Cayley, who applied elimination techniques to the computation of determinants and the study of linear transformations during the 1860s. Sylvester's contributions included early formulations of resultant theory for polynomial systems, while Cayley's work led to matrix representations, laying groundwork for modern algebraic structures. These advancements shifted the method from practical computation toward theoretical applications in invariant theory and bilinear forms. The term "Gaussian elimination" originated in the 19th century, likely coined by Friedrich Bessel in 1838; the complete row reduction variant—extending elimination to reduced row echelon form—was termed "Gauss-Jordan" after German geodesist Wilhelm Jordan, who described an improved, more stable version in his 1888 handbook Handbuch der Vermessungskunde. Jordan's adaptation emphasized pivoting to avoid numerical issues in applied computations like surveying. During the 20th century, Gaussian elimination gained deeper theoretical significance within the abstract framework of vector spaces, developed in the early 20th century and expanded in finite-dimensional linear algebra, where it computes matrix rank, nullity, and bases for subspaces. Numerical analysis advanced the method's reliability; in 1965, J. H. Wilkinson proved its backward stability under partial pivoting, establishing bounds on error growth for floating-point arithmetic. The algorithm appeared prominently in mid-century textbooks, including C. C. MacDuffee's Vectors and Matrices (1949 edition), which presented it as a core tool for matrix theory. In the 1940s, the method played a pivotal role in early computing, as John von Neumann and Herman Goldstine analyzed its implementation on electronic machines at the U.S. Army's Ballistics Research Laboratory, addressing scalability for large-scale linear systems in scientific simulations. Later, 1970s scholarship highlighted non-Western precursors, recognizing ancient Chinese fangcheng procedures from The Nine Chapters on the Mathematical Art (circa 200 BCE–200 CE) as early forms of systematic elimination, and 17th-century Japanese determinant methods by Seki Takakazu as independent developments akin to row reduction.

Computational Aspects

Time and space complexity

Gaussian elimination for solving a system of n linear equations in n unknowns, represented by an n \times n dense matrix, requires \Theta(n^3) arithmetic operations in the worst case, with the average case exhibiting similar behavior. The dominant cost arises from the forward elimination phase, which transforms the matrix into upper triangular form via elementary row operations and accounts for approximately \frac{2}{3}n^3 floating-point operations (flops), including roughly \frac{1}{3}n^3 multiplications and an equal number of additions. The subsequent back-substitution phase to solve for the unknowns requires only O(n^2) operations, which is negligible compared to the elimination cost for large n. In terms of space complexity, the algorithm requires O(n^2) storage to hold the input matrix and associated vectors, but it supports an in-place implementation that reuses the original matrix storage without additional asymptotic space beyond this, making it memory-efficient for dense cases. This O(n^2) space bound is inherent to representing the dense matrix and cannot be reduced further without approximations or specialized representations. For dense general matrices, the O(n^3) cost of classical Gaussian elimination matches that of other classical algebraic operations such as matrix multiplication and inversion, though asymptotically faster matrix multiplication algorithms could theoretically reduce it. However, for sparse matrices—where many entries are zero—the standard implementation performs worse than specialized sparse methods, which can achieve complexities as low as O(n \log n) in structured cases like banded or hierarchical sparsity patterns, by minimizing fill-in during elimination.
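The leading-order count can be sketched directly from the algorithm: at stage k there are n-k rows to update, each touching n-k+1 entries (columns k+1 through n plus the right-hand side) at a cost of one multiplication and one addition per entry, while the n-k divisions for the multipliers are lower order. Summing over the stages gives \sum_{k=1}^{n-1} 2(n-k)(n-k+1) \approx 2 \sum_{m=1}^{n-1} m^2 \approx \frac{2}{3} n^3 flops, matching the figure quoted above.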

Bareiss algorithm

The Bareiss algorithm is a variant of Gaussian elimination designed for exact computations over the integers, ensuring that all intermediate values remain integers without introducing fractions. In standard Gaussian elimination applied to integer matrices, divisions typically produce rational numbers, necessitating fraction handling or multiprecision arithmetic to maintain exactness, which can be computationally expensive. The Bareiss algorithm addresses this by employing a fraction-free update rule that leverages Sylvester's identity to guarantee exact divisions at each step, thereby preserving integrality and limiting intermediate coefficient growth. Developed by Erwin H. Bareiss in 1968, the algorithm was originally motivated by the need for efficient determinant computation of integer matrices, where traditional methods lead to rapid growth in the size of coefficients. Bareiss's approach, detailed in his seminal paper, uses multistep integer-preserving elimination to transform the matrix into an upper triangular form while keeping all entries as integers divisible by the appropriate previous pivots. This method builds on earlier ideas but provides a rigorous framework using Sylvester's identity to prove the exactness of the divisions. The procedure mirrors forward elimination in Gaussian elimination but modifies the elimination step to avoid fractions. For a matrix A = (a_{ij}) with integer entries, the algorithm proceeds in stages k = 1 to n-1. At stage k, assuming the submatrix up to row k-1 is already processed, the update for elements i > k, j > k is given by a_{ij}^{(k+1)} = \frac{a_{kk}^{(k)} \, a_{ij}^{(k)} - a_{ik}^{(k)} \, a_{kj}^{(k)}}{a_{k-1,k-1}^{(k-1)}} where the superscript denotes the matrix after stage k, and a_{00}^{(0)} = 1 initializes the division for the first step. Sylvester's identity ensures that the numerator is always divisible by the previous pivot a_{k-1,k-1}^{(k-1)}, resulting in an integer value with no remainder. The pivot at stage k becomes a_{kk}^{(k+1)}, and the process continues until the matrix is upper triangular. Because each diagonal entry produced this way equals a leading principal minor, the determinant is then the final diagonal entry, up to a sign depending on row swaps if pivoting is used. This update preserves integer entries throughout, with the bit length of entries growing at most quadratically in n for an n \times n matrix, matching the input size bounds without excessive inflation. Unlike standard elimination, it avoids fraction reduction overhead, making it suitable for exact arithmetic in computer algebra systems. The algorithm maintains O(n^3) arithmetic complexity, similar to Gaussian elimination, but achieves exactness over the integers \mathbb{Z}.
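A minimal sketch of the Bareiss update in Python, using exact integer arithmetic throughout; the function name and the simple row-swap handling are illustrative additions.

def bareiss_det(M):
    # Fraction-free determinant of an integer matrix via the Bareiss recurrence.
    A = [row[:] for row in M]       # all intermediate entries remain integers
    n = len(A)
    sign = 1
    prev = 1                        # plays the role of a_{k-1,k-1}^{(k-1)}; 1 at the first stage
    for k in range(n - 1):
        if A[k][k] == 0:            # bring up a nonzero pivot if one exists
            for p in range(k + 1, n):
                if A[p][k] != 0:
                    A[k], A[p] = A[p], A[k]
                    sign = -sign
                    break
            else:
                return 0            # the whole column is zero: determinant is 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Sylvester's identity guarantees this division is exact over the integers.
                A[i][j] = (A[k][k] * A[i][j] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]   # final diagonal entry is det(A), up to the swap sign

print(bareiss_det([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]))   # -1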

Numerical stability

In floating-point arithmetic, Gaussian elimination is susceptible to rounding errors that can be amplified during the forward elimination phase, particularly when small pivots are selected without row exchanges. Pivot growth occurs as elements in the active submatrix can expand exponentially relative to the original entries, leading to loss of accuracy if the growth factor becomes large. For instance, James H. Wilkinson exhibited a matrix for which, even with partial pivoting, the growth factor reaches 2^{n-1} for an n \times n matrix, enough to destroy accuracy for moderately sized problems when the machine precision is on the order of 10^{-16}. To mitigate these issues, partial pivoting selects the row with the largest absolute value in the pivot column as the pivot at each step, swapping rows accordingly to bound the growth factor \rho_n \leq 2^{n-1}, though in practice it is typically much smaller (often less than 10 for random matrices). This strategy ensures that the computed solution satisfies a backward error bound in which the perturbed matrix \tilde{A} = A + \Delta A has \|\Delta A\| \leq p(n) \, \rho_n \, \epsilon \, \|A\| for a low-degree polynomial p(n), with \epsilon the machine epsilon, limiting error propagation. Full pivoting extends this by also searching for the largest element in the entire active submatrix and swapping both rows and columns, providing stronger stability guarantees with a worst-case growth factor that grows only subexponentially in n, though it incurs higher computational overhead due to the expanded search at each step. Gaussian elimination with partial pivoting is backward stable for the vast majority of matrices encountered in practice, meaning the computed solution \tilde{x} is the exact solution of a slightly perturbed system (A + \Delta A) \tilde{x} = b + \Delta b with small relative perturbations; the forward error in \tilde{x} is then at most about \kappa(A) times the size of these perturbations, where \kappa(A) is the condition number. From the perspective of LU factorization, Gaussian elimination without pivoting computes a decomposition A = LU where L is unit lower triangular and U upper triangular, but this can fail or be unstable if small pivots lead to division by near-zero elements. Pivoting modifies this to PA = LU (partial pivoting) or PAQ = LU (full pivoting), where P and Q are permutation matrices, ensuring the decomposition remains numerically reliable by controlling element growth in L and U.
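The worst-case growth behavior can be observed directly on the classic example matrix (1 on the diagonal, -1 below it, 1 in the last column); a sketch assuming NumPy and SciPy, where scipy.linalg.lu performs LU factorization with partial pivoting.

import numpy as np
from scipy.linalg import lu

n = 16
A = np.tril(-np.ones((n, n)), -1) + np.eye(n)   # -1 strictly below the diagonal, 1 on it
A[:, -1] = 1.0                                   # last column of ones
P, L, U = lu(A)                                  # LU with partial pivoting
print(np.abs(U).max() / np.abs(A).max())         # growth factor 2^(n-1) = 32768 for this construction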

Generalizations

Over arbitrary fields

Gaussian elimination, originally formulated for systems over the real numbers, generalizes naturally to matrices with entries in any field F, which supports addition, multiplication, additive inverses, and multiplicative inverses for all nonzero elements. This extension relies on the field's structure to ensure that the elementary row operations—swapping rows, multiplying a row by a nonzero scalar from F, and adding a scalar multiple of one row to another—remain valid and invertible. Over such a field, the algorithm transforms a matrix into row echelon form or reduced row echelon form, solving linear systems Ax = b with A \in F^{m \times n} and b \in F^m, while determining properties like rank and nullity. The key adaptation in this setting is the division step during pivot normalization, which requires computing the multiplicative inverse of a nonzero pivot element in F. Since every nonzero element in a field has an inverse, this operation is always possible when a nonzero pivot is selected, avoiding the issues that arise in integral domains or rings lacking universal inverses. For instance, over the field of rational numbers \mathbb{Q}, Gaussian elimination performs exact symbolic computation by representing entries as fractions, where intermediate results are ratios of determinants of submatrices, ensuring polynomial-time growth in bit complexity when fractions are appropriately managed. This contrasts with floating-point arithmetic over \mathbb{R}, as it eliminates rounding errors but may increase computational overhead due to fraction reduction via the Euclidean algorithm. A prominent example occurs over finite fields, such as the Galois field \mathrm{GF}(2) = \{0, 1\} with modulo-2 arithmetic, where addition is XOR and multiplication is AND, eliminating the need for fractions since the only nonzero element, 1, is its own inverse. In \mathrm{GF}(2), Gaussian elimination simplifies to bitwise operations, making it efficient for applications like error-correcting codes in coding theory, where matrices represent parity-check equations. The process proceeds without scaling beyond the field elements, directly yielding solutions in binary form. However, limitations emerge outside strict field structures. Over rings like the integers \mathbb{Z}, Gaussian elimination fails because not all nonzero elements (e.g., 2) have multiplicative inverses, preventing reliable division and potentially leading to incomplete row reduction or loss of solutions. Similarly, in non-commutative division rings like the quaternions \mathbb{H}, the algorithm requires modifications to account for non-commutative multiplication, such as careful ordering of factors in the elimination steps, though full row reduction may not preserve all equivalence properties without additional pivoting strategies. Theoretically, Gaussian elimination over a field F preserves row equivalence: two matrices are row equivalent if one can be obtained from the other via a finite sequence of elementary row operations, each corresponding to left-multiplication by an invertible matrix in \mathrm{GL}_m(F), the general linear group of m \times m invertible matrices over F. This ensures that the reduced form uniquely determines the solution space, independent of the path taken in the elimination process.
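A sketch of elimination over GF(2), with each row stored as a Python integer bitmask so that adding one row to another is a single XOR; the function name and bit packing are illustrative choices.

def rref_gf2(rows, ncols):
    # Reduce a binary matrix over GF(2); returns the reduced rows and the rank.
    rows = rows[:]
    pivot_row = 0
    for col in range(ncols):
        mask = 1 << (ncols - 1 - col)           # bit position for this column
        for r in range(pivot_row, len(rows)):
            if rows[r] & mask:                  # found a pivot (necessarily equal to 1)
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                for rr in range(len(rows)):
                    if rr != pivot_row and rows[rr] & mask:
                        rows[rr] ^= rows[pivot_row]   # addition mod 2 is XOR
                pivot_row += 1
                break
    return rows, pivot_row

rows, rank = rref_gf2([0b1011, 0b0110, 0b1101], 4)   # third row is the GF(2) sum of the first two
print([format(r, '04b') for r in rows], rank)        # ['1011', '0110', '0000'] 2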

Block and parallel variants

Block variants of Gaussian elimination partition the matrix into smaller blocks and perform elimination operations block-by-block, enabling the use of high-level matrix-matrix operations that improve data locality and reduce memory access overhead on cache-based architectures. This approach, central to the LAPACK library, replaces scalar row operations with Level 3 BLAS routines for matrix-matrix multiplications and block updates, which keep data in cache longer and achieve higher flop rates than the classical unblocked algorithm at the same O(n^3) operation count. For instance, in a blocked LU factorization with partial pivoting (xGEF2B, following LAPACK naming conventions), the matrix A is divided into panels of width b, where the factorization of each panel uses classical Gaussian elimination, followed by block updates to the trailing submatrix. The choice of block size b is critical: larger blocks enhance parallelism and BLAS efficiency but may degrade accuracy for ill-conditioned matrices, while smaller blocks (e.g., b=16) better preserve bounds on the growth factor during pivoting. This blocked strategy scales well for dense matrices up to moderate sizes (n ≈ 10^4) on shared-memory systems, often achieving 80-90% of peak performance on vector processors or superscalar CPUs by minimizing data movement. Parallel variants extend blocked Gaussian elimination to distributed- and shared-memory systems, distributing row and column operations across processors to achieve near-linear speedup for large-scale problems. In distributed-memory environments, ScaLAPACK implements parallel LU factorization (PDGETRF) using a two-dimensional block-cyclic data distribution, which balances computational load and facilitates blocked algorithms while minimizing inter-processor communication. This distribution maps matrix blocks onto a process grid of size √p × √p for p processors, allowing each process to perform local BLAS-3 updates independently once pivot information has been exchanged. A key challenge in parallelization is pivot selection for numerical stability, as partial pivoting requires global reduction operations (e.g., finding the maximum in a column across processors), introducing overhead that can limit scalability beyond hundreds of processors. Variants address this through techniques like incremental pivoting, which tiles the panel factorization to expose more concurrency, or tournament pivoting in communication-avoiding LU (CALU), which uses a reduction tree to select pivots with O(√p) communication phases. These methods, implemented in tile-based libraries such as PLASMA for multicore systems, can achieve up to 5x speedup over traditional approaches on architectures like the Cray XT4 for n=10^5. Theoretically, parallel Gaussian elimination yields an O(n^3 / p) arithmetic speedup relative to the sequential baseline, but communication costs—such as O(n^2 log p / √p) words moved in ScaLAPACK—dominate for large p, capping efficiency at 50-70% on petascale machines. Optimal block sizes (b ≈ √M, where M is the local memory size) further mitigate this by aligning with memory bandwidth limits. In large-scale applications with matrices of order n ≈ 10^5 to 10^6, these variants in ScaLAPACK and related GPU-enabled libraries enable solving dense systems on accelerator clusters, though scalability plateaus due to latency-bound broadcasts. Trade-offs include increased implementation complexity and potential instability in low-communication variants without refinement steps, prioritizing reliability in scientific codes.
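The structure of a right-looking blocked factorization can be sketched as follows, assuming NumPy and SciPy; this illustrative version omits pivoting (so it is only safe for matrices such as diagonally dominant ones) and simply shows how the panel factorization, triangular solves, and trailing matrix-matrix update fit together.

import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu(A, bs=64):
    # Right-looking blocked LU without pivoting; L and U are stored in place (structural sketch).
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, bs):
        kb = min(bs, n - k)
        # Unblocked LU of the kb-by-kb diagonal block A11.
        for j in range(k, k + kb - 1):
            A[j+1:k+kb, j] /= A[j, j]
            A[j+1:k+kb, j+1:k+kb] -= np.outer(A[j+1:k+kb, j], A[j, j+1:k+kb])
        if k + kb < n:
            L11 = np.tril(A[k:k+kb, k:k+kb], -1) + np.eye(kb)
            U11 = np.triu(A[k:k+kb, k:k+kb])
            # Block row and block column: U12 = L11^{-1} A12 and L21 = A21 U11^{-1}.
            A[k:k+kb, k+kb:] = solve_triangular(L11, A[k:k+kb, k+kb:], lower=True, unit_diagonal=True)
            A[k+kb:, k:k+kb] = solve_triangular(U11.T, A[k+kb:, k:k+kb].T, lower=True).T
            # Trailing update A22 -= L21 @ U12: the dominant Level 3 BLAS operation.
            A[k+kb:, k+kb:] -= A[k+kb:, k:k+kb] @ A[k:k+kb, k+kb:]
    return A

A = np.random.rand(8, 8) + 8 * np.eye(8)       # diagonally dominant, so no pivoting is required
F = blocked_lu(A, bs=3)
L = np.tril(F, -1) + np.eye(8)
U = np.triu(F)
print(np.allclose(L @ U, A))                    # True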
