
Schur complement

In linear algebra, the Schur complement of a block matrix M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, where A is a nonsingular square submatrix, is defined as the matrix S = D - C A^{-1} B, which represents the effective contribution of the D block after eliminating the influence of A through Gaussian-like elimination on the partitioned form. This construction preserves key algebraic properties of the original matrix, such as invertibility and definiteness, and extends symmetrically to the case where D is nonsingular via the form A - B D^{-1} C. For singular blocks, generalized versions employ the Moore-Penrose pseudoinverse to define S = D - C A^\dagger B, ensuring applicability in degenerate cases.

The concept traces its origins to a 1917 lemma by the German mathematician Issai Schur (1875–1941), who used it in his study of power series bounded in the unit disk to relate determinants of partitioned matrices, though similar ideas appeared earlier in works by Laplace in 1812. The term "Schur complement" was formally introduced in 1968 by Emilie Virginia Haynsworth (1916–1985) in her research on matrix inequalities, honoring Schur's foundational contribution while highlighting its broader utility in matrix theory.

Among its notable properties, the Schur complement relates the determinant of the full matrix to that of its blocks via \det(M) = \det(A) \cdot \det(S), facilitating computations for large systems, and it plays a central role in block matrix inversion formulas, such as M^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{pmatrix}. For symmetric matrices where the block A is positive definite, M \succeq 0 if and only if the Schur complement S \succeq 0 (noting that M \succeq 0 always implies A \succeq 0 as a principal submatrix), providing a recursive criterion for checking positive semidefiniteness that is essential in optimization and control theory.

The Schur complement finds extensive applications across mathematics, statistics, and engineering, including numerical methods for solving linear systems through domain decomposition and static condensation in finite element analysis, where it reduces problem size by eliminating internal degrees of freedom. In statistics, it underpins analyses of covariance matrices and linear models, enabling conditional inferences in multivariate settings, while in optimization it supports semidefinite programming by verifying feasibility conditions. Its influence extends to quantum information theory through reduced density matrices and to preconditioning techniques in iterative solvers for sparse systems, underscoring its enduring role as a versatile tool in applied mathematics.

History and Definition

Historical Development

The underlying ideas of the Schur complement can be traced back to Pierre-Simon Laplace's work on expansions of determinants for partitioned matrices, with further developments emerging in the 19th century through advancements in linear algebra, notably in the context of block elimination for solving partitioned systems of linear equations, a technique building on Carl Friedrich Gauss's elimination method from the early 1800s. A significant early application without explicit naming occurred in James Joseph Sylvester's 1852 paper on quadratic forms, where his law of inertia (establishing the invariance of the numbers of positive, negative, and zero eigenvalues under real congruence transformations) implicitly relied on reductions of partitioned matrices akin to the Schur complement. Issai Schur formalized a key lemma central to the concept in his 1917 paper "Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind," published in the Journal für die reine und angewandte Mathematik, using it within matrix theory to study bounded analytic functions in the unit disk. The term "Schur complement" was introduced by Emilie V. Haynsworth in 1968, honoring Schur's contribution, in her work on determining the inertia of partitioned Hermitian matrices through inequalities involving the complement. Key milestones include Laplace's 1812 determinant expansions, 19th-century foundations in elimination techniques, Sylvester's 1852 law of inertia, Schur's 1917 lemma, and Haynsworth's 1968 naming, paving the way for its role in modern fields such as numerical analysis, statistics, and optimization.

Formal Definition

The Schur complement of a block matrix is a fundamental construct in linear algebra, named after the mathematician Issai Schur, with the term itself coined by Emilie Haynsworth in 1968. Consider a partitioned matrix M of size (m+n) \times (m+n), expressed in block form as M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, where A is an m \times m square matrix, D is an n \times n square matrix, B is m \times n, and C is n \times m. Assuming D is invertible, the Schur complement of the block D in M, denoted M/D or simply S, is the m \times m matrix given by S = M/D = A - B D^{-1} C. Symmetrically, if A is invertible, the Schur complement of the block A in M is the n \times n matrix M/A = D - C A^{-1} B. In the scalar case, where m = n = 1 and M is the 2 \times 2 matrix \begin{pmatrix} a & b \\ c & d \end{pmatrix} with a \neq 0, the Schur complement of a reduces to the simple expression d - \frac{bc}{a}.
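As a concrete illustration, the definition translates directly into a few lines of NumPy. The sketch below uses an arbitrarily chosen 4×4 matrix (the block values are illustrative only, not from any source) and computes both M/A and M/D; solving against the pivot block via np.linalg.solve rather than forming an explicit inverse is a standard numerical precaution.

```python
import numpy as np

# An illustrative 4x4 matrix partitioned into 2x2 blocks (values are arbitrary).
M = np.array([[4.0, 1.0, 2.0, 0.0],
              [1.0, 3.0, 0.0, 1.0],
              [2.0, 0.0, 5.0, 1.0],
              [0.0, 1.0, 1.0, 4.0]])
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

# Schur complement of A in M: M/A = D - C A^{-1} B
M_over_A = D - C @ np.linalg.solve(A, B)
# Schur complement of D in M: M/D = A - B D^{-1} C
M_over_D = A - B @ np.linalg.solve(D, C)
print(M_over_A)
print(M_over_D)
```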

Algebraic Properties

Basic Properties

The Schur complement of a block matrix M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, denoted M/D = A - B D^{-1} C assuming D is invertible, emerges naturally in the process of Gaussian elimination. To solve M \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ c \end{pmatrix}, first solve for y from the second block equation as y = D^{-1} (c - C x), then substitute into the first block to obtain (A - B D^{-1} C) x = b - B D^{-1} c. Thus, the Schur complement M/D serves as the effective pivot matrix for the remaining system in x. A fundamental identity is the determinant quotient property: if D is invertible, then \det(M) = \det(D) \cdot \det(M/D). This follows from the block triangularization M = \begin{pmatrix} I & B D^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} M/D & 0 \\ C & D \end{pmatrix}, where the left factor has determinant 1 and the right factor, being block lower triangular, has determinant \det(M/D) \cdot \det(D). The Schur complement also features prominently in formulas for the inverse of a block matrix. Assuming both D and M/D are invertible, the inverse is M^{-1} = \begin{pmatrix} (M/D)^{-1} & -(M/D)^{-1} B D^{-1} \\ -D^{-1} C (M/D)^{-1} & D^{-1} + D^{-1} C (M/D)^{-1} B D^{-1} \end{pmatrix}. This expression, known as the block matrix inversion formula, relates directly to the matrix inversion lemma (or Sherman-Morrison-Woodbury formula) by specializing to rank-one or low-rank updates, where the Schur complement captures the adjustment term. For Hermitian matrices, the additivity of inertia provides a key spectral property: the inertia of M (the triple counting the numbers of positive, negative, and zero eigenvalues) equals the sum of the inertias of D and M/D, by the Haynsworth inertia additivity formula. This follows from congruence transformations preserving inertia under Sylvester's law, ensuring M and the block-diagonal matrix \operatorname{diag}(M/D, D) share the same inertia.
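Both the determinant quotient property and the Haynsworth inertia additivity formula are easy to check numerically. The following sketch, assuming a generic random symmetric matrix whose lower-right block is invertible (true with probability one for this construction), verifies both identities:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
M = X + X.T                          # a generic 5x5 symmetric matrix
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]
S = A - B @ np.linalg.solve(D, C)    # M/D, with D generically invertible

# Determinant quotient property: det(M) = det(D) * det(M/D).
assert np.isclose(np.linalg.det(M), np.linalg.det(D) * np.linalg.det(S))

def inertia(H, tol=1e-10):
    """Numbers of positive, negative, and zero eigenvalues of symmetric H."""
    w = np.linalg.eigvalsh(H)
    return (int((w > tol).sum()), int((w < -tol).sum()), int((np.abs(w) <= tol).sum()))

# Haynsworth inertia additivity: In(M) = In(D) + In(M/D).
assert inertia(M) == tuple(p + q for p, q in zip(inertia(D), inertia(S)))
```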

Determinant and Rank Properties

The determinant of a block-partitioned matrix M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, where A is an invertible n \times n matrix, satisfies the formula \det(M) = \det(A) \cdot \det(D - C A^{-1} B) = \det(A) \cdot \det(M/A), where M/A = D - C A^{-1} B denotes the Schur complement of the block A in M. Similarly, if the block D is an invertible m \times m matrix, then \det(M) = \det(D) \cdot \det(A - B D^{-1} C) = \det(D) \cdot \det(M/D), with M/D = A - B D^{-1} C being the Schur complement of D in M. This relation, known as Schur's determinant formula, originates from the 1917 work of Issai Schur and holds for matrices over commutative rings, including the real and complex numbers. A standard proof of the formula for M/A relies on the block factorization M = \begin{pmatrix} I_n & 0 \\ C A^{-1} & I_m \end{pmatrix} \begin{pmatrix} A & B \\ 0 & M/A \end{pmatrix}. The left factor has determinant 1, while the right, block-upper-triangular factor has determinant \det(A) \cdot \det(M/A); thus, \det(M) = \det(A) \cdot \det(M/A). An analogous factorization establishes the formula for M/D. This multiplicative property facilitates computations for large partitioned matrices by reducing them to smaller blocks. For rank properties, assume A is invertible. The Guttman rank additivity formula states that \operatorname{rank}(M) = \operatorname{rank}(A) + \operatorname{rank}(M/A). This holds because the aforementioned block factorization implies that the rank of M equals the rank of the block-upper-triangular matrix, whose rank is the sum of the ranks of its diagonal blocks A and M/A. The formula, first published by Louis Guttman in 1946, extends Schur's determinant result to settings with rectangular off-diagonal blocks under the invertibility assumption on the pivot. By the rank-nullity theorem, the nullity (dimension of the kernel) relation follows directly: \operatorname{nullity}(M) = \operatorname{nullity}(M/A), since A is invertible (so \operatorname{nullity}(A) = 0) and the invertible factors in the factorization preserve kernel dimensions. More precisely, solutions to M \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0 reduce to x_1 = -A^{-1} B x_2 and (M/A) x_2 = 0, establishing an isomorphism between \ker(M) and \ker(M/A). An analogous nullity relation holds when D is invertible: \operatorname{nullity}(M) = \operatorname{nullity}(M/D). When the pivot block is singular, the standard Schur complement requires a generalized inverse, and the relation becomes \operatorname{rank}(M) \geq \operatorname{rank}(A) + \operatorname{rank}(M/A), where M/A = D - C A^{+} B is formed with the Moore-Penrose pseudoinverse A^{+}. Equality \operatorname{rank}(M) = \operatorname{rank}(A) + \operatorname{rank}(D - C A^{+} B) holds under compatibility conditions, namely that the column space of B lies in the column space of A and the row space of C lies in the row space of A; otherwise the inequality can be strict, and the rank of M depends on the interaction between the blocks.
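A minimal numerical check of the Guttman rank additivity formula follows. The off-diagonal blocks are random, and D is chosen deliberately so that the Schur complement M/A vanishes, which makes the rank bookkeeping transparent (the construction is illustrative, not from the sources):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))        # pivot block, generically invertible
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = C @ np.linalg.solve(A, B)          # chosen so that M/A = D - C A^{-1} B = 0
M = np.block([[A, B], [C, D]])

S = D - C @ np.linalg.solve(A, B)      # Schur complement M/A (here ~0)
rank = lambda X: np.linalg.matrix_rank(X, tol=1e-8)

# Guttman rank additivity: rank(M) = rank(A) + rank(M/A), here 3 = 3 + 0.
assert rank(M) == rank(A) + rank(S)
# Nullity relation: nullity(M) = nullity(M/A) = 2 for this 5x5 example.
assert (5 - rank(M)) == (2 - rank(S))
```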

Applications in Linear Algebra

Solving Block Linear Equations

The Schur complement provides an efficient method for solving linear systems expressed in block form, particularly when one of the diagonal blocks is invertible, allowing sequential elimination of variables. Consider the partitioned system \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} u \\ v \end{pmatrix}, where A is an invertible square matrix. This decomposes into the equations Ax + By = u and Cx + Dy = v. From the first equation, solve for x: x = A^{-1}(u - By). Substituting this expression into the second equation gives C A^{-1}(u - By) + Dy = v, which simplifies to (D - C A^{-1} B)y = v - C A^{-1} u. Here, S = D - C A^{-1} B denotes the Schur complement of A in the block matrix. If S is invertible, solve for y: y = S^{-1}(v - C A^{-1} u), then back-substitute to obtain x = A^{-1}(u - By). This elimination of x followed by back-substitution for x mirrors the process in block Gaussian elimination. Given that A is invertible, the system has a unique solution if and only if S is invertible, which is equivalent to the invertibility of the full block matrix. To illustrate, consider a simple case with scalar blocks, equivalent to a 2×2 system: A = 2, B = 1, C = 1, D = 3, u = 5, v = 7. The Schur complement is S = 3 - 1 \cdot 2^{-1} \cdot 1 = 2.5. Then y = 2.5^{-1}(7 - 1 \cdot 2^{-1} \cdot 5) = 0.4 \cdot 4.5 = 1.8, and x = 2^{-1}(5 - 1 \cdot 1.8) = 0.5 \cdot 3.2 = 1.6. This approach reduces the original system to solving a single scalar equation for y, and for larger blocks the reduced system for y has the dimension of D rather than that of the full matrix.
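The elimination procedure can be packaged as a short routine. The sketch below, assuming both A and S are invertible, reproduces the scalar example from the text:

```python
import numpy as np

def solve_block(A, B, C, D, u, v):
    """Solve [[A, B], [C, D]] [x; y] = [u; v] via the Schur complement of A.

    Assumes A and S = D - C A^{-1} B are invertible.
    """
    A_inv_u = np.linalg.solve(A, u)
    A_inv_B = np.linalg.solve(A, B)
    S = D - C @ A_inv_B                      # Schur complement of A
    y = np.linalg.solve(S, v - C @ A_inv_u)  # reduced system for y
    x = A_inv_u - A_inv_B @ y                # back-substitution for x
    return x, y

# The scalar example from the text: A=2, B=1, C=1, D=3, u=5, v=7.
x, y = solve_block(np.array([[2.0]]), np.array([[1.0]]),
                   np.array([[1.0]]), np.array([[3.0]]),
                   np.array([5.0]), np.array([7.0]))
print(x, y)   # [1.6] [1.8]
```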

Matrix Decompositions and Inverses

The Schur complement is instrumental in deriving the block LDU decomposition for a partitioned matrix M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, assuming A is invertible. This decomposition expresses M = L \Delta U, where L = \begin{pmatrix} I & 0 \\ CA^{-1} & I \end{pmatrix}, \quad \Delta = \begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}, \quad U = \begin{pmatrix} I & A^{-1}B \\ 0 & I \end{pmatrix}, and S = D - CA^{-1}B denotes the Schur complement of A in M (the block-diagonal factor is written \Delta here to avoid confusion with the block D of M). This factorization extends the classical LU decomposition to block form, enabling efficient numerical factorization and analysis of structured matrices in applications such as preconditioning. Assuming M is invertible (which, given invertible A, requires S to be invertible), the explicit formula for the block inverse is M^{-1} = \begin{pmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{pmatrix}. This expression computes M^{-1} by inverting the smaller blocks A and S separately, avoiding full matrix inversion and exploiting sparsity or structure in the blocks for computational savings. The Woodbury matrix identity generalizes such block inversions to low-rank updates, incorporating a Schur-like complement. For compatible matrices A, U, C, and V, it states (A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}, where C^{-1} + VA^{-1}U functions as the Schur complement in the augmented block matrix formulation. This identity, derived from block elimination, efficiently updates matrix inverses under rank-k modifications (with k small), finding widespread use in iterative methods and statistical updating. In statistical applications, the Schur complement enables efficient inversion of block-structured covariance matrices, such as those arising in Gaussian models. For a covariance matrix \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}, the block inverse formula shows that the bottom-right block of \Sigma^{-1} equals S^{-1}, where S = \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12} is the Schur complement and coincides with the conditional covariance \Sigma_{22|1}. This avoids inverting the full \Sigma directly, reducing complexity when \Sigma_{11} is low-dimensional, as in marginalizing over observed data for posterior inference.
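The Woodbury identity can be verified directly against a brute-force inverse. The following sketch uses an arbitrary well-conditioned A and a rank-2 update; all matrix values are illustrative, and the update is generically nonsingular:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2                                   # rank-k update with k << n
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned base matrix
U = rng.standard_normal((n, k))
C = np.eye(k)
V = rng.standard_normal((k, n))

# Woodbury: (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
A_inv = np.linalg.inv(A)
small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)   # k x k Schur-like complement
woodbury = A_inv - A_inv @ U @ small @ V @ A_inv
assert np.allclose(woodbury, np.linalg.inv(A + U @ C @ V))
```

The computational point is that the only fresh inversion on the right-hand side is the k × k matrix C^{-1} + V A^{-1} U, which is cheap when k is small and A^{-1} is already available.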

Applications in Statistics and Optimization

Multivariate Distributions

In multivariate statistics, the Schur complement plays a central role in deriving conditional distributions, particularly for the multivariate normal distribution, where it directly gives the conditional covariance matrix. For a random vector (X, Y) following a multivariate normal distribution with mean vector (\mu_X, \mu_Y) and positive definite covariance matrix \Sigma = \begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{pmatrix}, the conditional distribution of Y given X = x is multivariate normal with mean \mu_Y + \Sigma_{YX} \Sigma_{XX}^{-1} (x - \mu_X) and covariance matrix equal to the Schur complement \Sigma_{Y|X} = \Sigma_{YY} - \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY}. This formulation highlights how the Schur complement captures the residual variability in Y after accounting for its dependence on X; the invertibility of \Sigma_{XX} is guaranteed by the positive definiteness of \Sigma. The concept extends to the Wishart distribution, which arises as the sampling distribution of the covariance matrix from multivariate normal samples. If a Wishart-distributed matrix W \sim \mathcal{W}_p(n, \Sigma) is partitioned conformably as W = \begin{pmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{pmatrix}, then the Schur complement W_{11} - W_{12} W_{22}^{-1} W_{21} follows a Wishart distribution \mathcal{W}_{p_1}(n - p_2, \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}), provided n > p_2 and W_{22} is invertible. This property facilitates sampling from partitioned Wishart matrices and updating posterior distributions in Bayesian models, where block-wise computations simplify the integration over covariance parameters. In Bayesian statistics, the Schur complement simplifies computations with conjugate priors for multivariate normal models, such as the normal-inverse-Wishart prior on the mean and covariance matrix. For instance, when updating the posterior for a block of parameters conditional on the others, the Schur complement of the corresponding block of the prior precision matrix yields the conditional posterior precision, enabling efficient updates without full matrix inversions. This approach is particularly useful in hierarchical models, where it streamlines the propagation of information across levels while preserving conjugacy.
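The conditional mean and covariance formulas translate into a short routine. The sketch below (a hypothetical helper, with mean vector, index sets, and observed value chosen for illustration) conditions a trivariate Gaussian on its first coordinate; its output covariance is the Schur complement that reappears in the worked example of the next subsection:

```python
import numpy as np

def condition_gaussian(mu, Sigma, idx_x, idx_y, x_obs):
    """Parameters of Y | X = x_obs for a jointly Gaussian (X, Y).

    The conditional covariance is the Schur complement of Sigma_XX.
    Assumes Sigma_XX is invertible (guaranteed when Sigma is positive definite).
    """
    Sxx = Sigma[np.ix_(idx_x, idx_x)]
    Sxy = Sigma[np.ix_(idx_x, idx_y)]
    Syx = Sigma[np.ix_(idx_y, idx_x)]
    Syy = Sigma[np.ix_(idx_y, idx_y)]
    gain = Syx @ np.linalg.inv(Sxx)
    mu_cond = mu[idx_y] + gain @ (x_obs - mu[idx_x])
    Sigma_cond = Syy - gain @ Sxy            # Schur complement Sigma / Sxx
    return mu_cond, Sigma_cond

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 2.0, 0.4],
                  [0.3, 0.4, 1.5]])
m, S = condition_gaussian(mu, Sigma, idx_x=[0], idx_y=[1, 2], x_obs=np.array([0.8]))
print(m)   # conditional mean of (Y1, Y2) given X = 0.8
print(S)   # [[1.75, 0.25], [0.25, 1.41]]
```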

Covariance and Positive Definiteness Conditions

A fundamental application of the Schur complement arises in determining the positive definiteness of block matrices, particularly for covariance matrices, which must be positive (semi)definite to admit valid probabilistic interpretations in multivariate distributions. For a symmetric block matrix M = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix}, M is positive definite if and only if A \succ 0 and the Schur complement M/A = C - B^T A^{-1} B \succ 0. This criterion extends recursively, allowing verification of larger matrices by successive Schur complements of principal submatrices. For positive semidefiniteness, the condition generalizes to cases where A may be singular. Specifically, M \succeq 0 if and only if A \succeq 0, (I - A A^\dagger) B = 0, and the generalized Schur complement M/A = C - B^T A^\dagger B \succeq 0, where A^\dagger denotes the Moore-Penrose pseudoinverse of A. This formulation is crucial for covariance matrices in statistical models, where semidefiniteness accommodates degenerate distributions, ensuring the matrix remains a valid Gram matrix for inner products. In optimization, Schur complements facilitate the reformulation of quadratic constraints into linear matrix inequalities (LMIs) for semidefinite programming (SDP). For instance, the requirement that x^T P x + 2 q^T x + r \geq 0 hold for all x, with P \succ 0, is equivalent to the LMI \begin{pmatrix} P & q \\ q^T & r \end{pmatrix} \succeq 0, whose Schur complement with respect to P is the scalar condition r - q^T P^{-1} q \geq 0, enabling efficient SDP solvers to check feasibility. This technique is widely used in problems involving covariance estimation, where maintaining positive semidefiniteness ensures algorithmic stability. As an illustrative example, consider verifying the positive definiteness of the covariance matrix \Sigma = \begin{pmatrix} 1 & 0.5 & 0.3 \\ 0.5 & 2 & 0.4 \\ 0.3 & 0.4 & 1.5 \end{pmatrix}, which might represent variances and covariances in a trivariate normal distribution. Partition \Sigma with the leading 1×1 block A = [1] > 0; the Schur complement is \Sigma / A = \begin{pmatrix} 2 - 0.5^2 & 0.4 - 0.5 \cdot 0.3 \\ 0.4 - 0.5 \cdot 0.3 & 1.5 - 0.3^2 \end{pmatrix} = \begin{pmatrix} 1.75 & 0.25 \\ 0.25 & 1.41 \end{pmatrix}, with eigenvalues approximately 1.88 and 1.28, both positive. Next, take the leading 1×1 block of this 2×2 matrix, A' = [1.75] > 0, yielding the final Schur complement \Sigma / A / A' = [1.41 - 0.25^2 / 1.75] \approx [1.37] > 0. Thus, \Sigma \succ 0 by recursive application of the criterion. The Schur complement in this setting also connects to conditional covariance matrices from multivariate distributions, providing a bridge between algebraic definiteness checks and probabilistic conditioning.
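The recursive criterion lends itself to a simple implementation that peels off one 1×1 pivot at a time, exactly as in the worked example; in effect this is an attempted Cholesky factorization. A minimal sketch:

```python
import numpy as np

def is_positive_definite(M, tol=1e-12):
    """Recursive Schur-complement test for positive definiteness.

    Peels off the leading 1x1 pivot at each step; M must be symmetric.
    """
    a = M[0, 0]
    if a <= tol:              # pivot must be strictly positive
        return False
    if M.shape[0] == 1:
        return True
    b = M[0, 1:]
    # Schur complement of the leading 1x1 block: M/[a] = M_22 - b b^T / a.
    S = M[1:, 1:] - np.outer(b, b) / a
    return is_positive_definite(S, tol)

Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 2.0, 0.4],
                  [0.3, 0.4, 1.5]])
print(is_positive_definite(Sigma))   # True, matching the worked example
```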

Advanced Generalizations

Higher-Order Block Matrices

The Schur complement extends naturally to higher-order block matrices through recursive application, where a matrix partitioned into multiple blocks along the diagonal is reduced by eliminating one block at a time. For an n \times n block matrix M with diagonal blocks A_1, A_2, \dots, A_n and corresponding off-diagonal blocks, the process begins by computing the Schur complement of the first block A_1, yielding a reduced (n-1) \times (n-1) block matrix, and continues iteratively until a scalar or smaller form remains. This iterative elimination preserves the equivalence of the original system to the final reduced form, analogous to block Gaussian elimination. For a 3×3 block matrix partitioned as M = \begin{pmatrix} A & B & F \\ C & D & E \\ J & G & H \end{pmatrix} (writing J for the lower-left block to avoid confusion with the identity), the Schur complement with respect to the compound block \begin{pmatrix} D & E \\ G & H \end{pmatrix} (assuming it is invertible) is given by M / \begin{pmatrix} D & E \\ G & H \end{pmatrix} = A - \begin{pmatrix} B & F \end{pmatrix} \begin{pmatrix} D & E \\ G & H \end{pmatrix}^{-1} \begin{pmatrix} C \\ J \end{pmatrix}. This formula arises from successive applications: first form the Schur complement of H within the lower-right compound block, then apply the basic Schur operation to eliminate the remaining intermediate block relative to A. Such compound block complements facilitate analysis of larger systems without full inversion at each step. The determinant and rank properties of the Schur complement generalize to higher-order cases via a chain rule of successive applications. Specifically, for an n \times n block matrix, \det(M) = \det(A_1) \det(M / A_1) = \det(A_1) \det(A_2 / [A_1]) \cdots \det(A_n / [A_1, \dots, A_{n-1}]), where each factor is the determinant of a Schur complement taken with respect to all preceding diagonal blocks. Similarly, the rank satisfies \operatorname{rank}(M) = \operatorname{rank}(A_1) + \operatorname{rank}(M / A_1), extending additively through the recursion, which aids in assessing singularity or deficiency in multi-block structures. These generalizations hold under invertibility assumptions on the eliminated blocks and follow directly from the 2×2 case iterated appropriately. As an illustrative example, consider reducing a 4×4 matrix K partitioned into 2×2 blocks: K = \begin{pmatrix} P & Q \\ R & S \end{pmatrix}, where each entry is 2×2. First, compute the Schur complement K / P = S - R P^{-1} Q, assuming P is invertible, yielding a 2×2 matrix. Then write this complement as \begin{pmatrix} a & b \\ c & d \end{pmatrix} and apply the Schur complement again with respect to a, producing the scalar d - c a^{-1} b. By the determinant chain rule, \det(K) = \det(P) \cdot a \cdot (d - c a^{-1} b), demonstrating how the recursive reduction determines overall properties such as invertibility.
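The 4×4 reduction and the determinant chain rule can be checked numerically. The sketch below uses a random diagonally shifted matrix so that the pivots are generically invertible (an illustrative construction only):

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # pivots generically invertible
P, Q = K[:2, :2], K[:2, 2:]
R, S = K[2:, :2], K[2:, 2:]

# First reduction: 2x2 Schur complement K/P = S - R P^{-1} Q.
K_over_P = S - R @ np.linalg.solve(P, Q)
a, b = K_over_P[0, 0], K_over_P[0, 1]
c, d = K_over_P[1, 0], K_over_P[1, 1]

# Second reduction: scalar Schur complement of a in K/P.
scalar = d - c * b / a

# Determinant chain rule: det(K) = det(P) * a * (d - c a^{-1} b).
assert np.isclose(np.linalg.det(K), np.linalg.det(P) * a * scalar)
```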

Variants in Numerical Methods

In numerical methods for electrical networks, the Kron reduction technique employs the Schur complement to eliminate internal nodes from graph Laplacians, thereby simplifying the network model while preserving its effective impedance and power flow properties between boundary nodes. This process computes the reduced Laplacian as the Schur complement of the original weighted Laplacian matrix with respect to the internal nodes, resulting in a smaller system that maintains the network's external behavior for analysis and simulation. Originally developed in circuit theory, this variant has been extended to a general graph-theoretic framework, enabling efficient computations in power grid models where internal bus eliminations reduce the dimensionality from thousands to hundreds of nodes without significant loss of accuracy. Schur complements also play a central role in preconditioners for iterative solvers within domain decomposition methods, where the global system is partitioned into subdomains and the interface problem is formulated as a Schur complement system to accelerate convergence. In particular, Schur complement preconditioning in non-overlapping domain decomposition approximates the Schur complement using local solves on subdomains, often combined with low-rank corrections to handle sparsity and improve conditioning for elliptic PDEs. This approach has been shown to reduce iteration counts in conjugate gradient methods by factors of 2-5 for large-scale problems, making it suitable for parallel implementations in finite element simulations. In control theory, Schur complements facilitate model order reduction for state-space systems by eliminating non-essential states, yielding lower-dimensional realizations that approximate the original system's input-output behavior. For linear time-invariant systems described by matrices A, B, C, D, the Schur complement can be applied to the descriptor form to reduce the order while preserving key system properties such as stability, as demonstrated in balanced truncation variants. This method is particularly effective for high-fidelity simulations, where reductions from order 1000 to 50 can maintain error bounds below 1%. A practical example of Schur complement implementation arises in solving sparse linear systems from finite element methods, where block elimination via the Schur complement reduces the complexity of direct solvers from O(n^3) for the full system to O(n^2) or better for the reduced problem on structured meshes. In mixed-hybrid finite element approximations for elasticity or flow problems, the Schur complement on the interface unknowns is assembled from local macroelement contributions, minimizing fill-in and enabling efficient multifrontal factorization. For a problem discretized on n nodes, this variant achieves a complexity of O(n^{1.5}) using nested dissection, compared to O(n^2) without reduction, as verified in preconditioned iterative frameworks.
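A minimal sketch of Kron reduction on a small graph Laplacian illustrates the variant; the 4-node path graph and the split into boundary and internal nodes are hypothetical choices for illustration. The reduced matrix is again a Laplacian, and its effective conductance of 1/3 between the path's endpoints reflects three unit resistors in series:

```python
import numpy as np

def kron_reduce(L, boundary, internal):
    """Kron reduction: Schur complement of a weighted graph Laplacian
    with respect to its internal nodes.

    Assumes the internal-internal block is invertible, which holds for a
    connected graph with positive edge weights.
    """
    Lbb = L[np.ix_(boundary, boundary)]
    Lbi = L[np.ix_(boundary, internal)]
    Lib = L[np.ix_(internal, boundary)]
    Lii = L[np.ix_(internal, internal)]
    return Lbb - Lbi @ np.linalg.solve(Lii, Lib)

# Laplacian of a 4-node path graph with unit edge weights; nodes 1, 2 internal.
L = np.array([[ 1.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  1.0]])
L_red = kron_reduce(L, boundary=[0, 3], internal=[1, 2])
print(L_red)   # [[ 1/3, -1/3], [-1/3, 1/3]]: effective weight 1/3 between endpoints
```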
