
Cauchy matrix

In mathematics, a Cauchy matrix is an m × n matrix C = (c_ij) whose entries are given by c_ij = 1 / (x_i + y_j), where (x_1, …, x_m) and (y_1, …, y_n) are sequences of scalars (typically real or complex numbers) such that x_i + y_j ≠ 0 for all i, j. This structure ensures the matrix has a specific rational form that lends itself to analytical and computational properties. Named after the French mathematician Augustin-Louis Cauchy, who first studied such matrices in 1841 while investigating alternating functions and multiple integrals, the Cauchy matrix has an explicit determinant formula in the square case. For an n × n Cauchy matrix with distinct x_i and y_j, the determinant is
det(C) = ∏_{1 ≤ i < j ≤ n} (x_j − x_i)(y_j − y_i) / ∏_{1 ≤ i, j ≤ n} (x_i + y_j),
which highlights its connection to Vandermonde determinants and symmetric functions. When the parameters are positive and increasing, the symmetric case (x_i = y_i) yields a positive definite matrix that is totally positive, meaning all of its minors are positive.
Cauchy matrices play a central role in numerical linear algebra due to their displacement rank of one, enabling fast algorithms for inversion, LU factorization, and solving linear systems in O(n^2) time rather than O(n^3). They generalize the Hilbert matrix, a classic ill-conditioned example with x_i = i − 1 and y_j = j (indices starting from 1), and appear in applications such as rational interpolation, error-correcting codes, signal processing, and structured matrix approximations. Extensions include confluent and Cauchy-like matrices, which preserve similar efficient computability for broader classes of structured problems.

Fundamentals

Definition

A Cauchy matrix is an m \times n matrix C = (c_{ij}) defined over a field F with entries given by c_{ij} = \frac{1}{x_i + y_j}, \quad i = 1, \dots, m, \quad j = 1, \dots, n, where x_1, \dots, x_m, y_1, \dots, y_n \in F and x_i + y_j \neq 0 for all i, j to ensure all denominators are nonzero. The sequences \{x_i\}_{i=1}^m and \{y_j\}_{j=1}^n are typically chosen to consist of mutually distinct elements within their respective sets, which aids in establishing properties such as full rank, though this distinctness among the x_i or among the y_j is not strictly necessary for the matrix to be well-defined provided the denominators remain nonzero. In the square case where m = n, the resulting n \times n matrix is a square Cauchy matrix, which is central to most theoretical results and applications involving these matrices. The Cauchy matrix is named after the French mathematician Augustin-Louis Cauchy, who first studied such matrices in 1841 while investigating alternating functions and multiple integrals.

Notation and Assumptions

In standard notation, the Cauchy matrix C of size m \times n has entries given by c_{ij} = \frac{1}{x_i + y_j}, \quad 1 \leq i \leq m, \quad 1 \leq j \leq n, where x = (x_1, \dots, x_m)^\top \in F^m and y = (y_1, \dots, y_n)^\top \in F^n are column vectors over a field F. The matrix is well-defined provided x_i + y_j \neq 0 for all i, j. For a square Cauchy matrix (m = n) to have full rank and thus be invertible over F, the elements of x must all be distinct, the elements of y must all be distinct, and x_i + y_j \neq 0 for all i, j. Cauchy matrices are defined over any field F, such as the real numbers \mathbb{R} or complex numbers \mathbb{C}. However, certain properties, such as total positivity, require an ordered field like \mathbb{R} with additional constraints on the signs and ordering of the x_i and y_j. For illustration, consider the case m = n = 2 over \mathbb{R} with x = (1, 3)^\top and y = (0, 2)^\top. The resulting matrix is C = \begin{pmatrix} \frac{1}{1+0} & \frac{1}{1+2} \\ \frac{1}{3+0} & \frac{1}{3+2} \end{pmatrix} = \begin{pmatrix} 1 & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{5} \end{pmatrix}.
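
The definition translates directly into code. The following minimal NumPy sketch (the helper name cauchy_matrix is ours, not a library routine) builds the matrix and reproduces the 2 × 2 example above:

```python
import numpy as np

def cauchy_matrix(x, y):
    """Build the m-by-n Cauchy matrix with entries 1 / (x_i + y_j)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    denom = x[:, None] + y[None, :]   # pairwise sums x_i + y_j
    if np.any(denom == 0):
        raise ValueError("requires x_i + y_j != 0 for all i, j")
    return 1.0 / denom

# The 2x2 example from the text: x = (1, 3), y = (0, 2)
print(cauchy_matrix([1, 3], [0, 2]))
# [[1.         0.33333333]
#  [0.33333333 0.2       ]]
```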

Properties

Basic Structural Properties

A Cauchy matrix C of size m \times n with entries c_{ij} = \frac{1}{x_i + y_j}, where x_i + y_j \neq 0 for all i, j, possesses the property that every submatrix—whether principal or non-principal—is itself a Cauchy matrix, formed by the corresponding subsets of the vectors \mathbf{x} and \mathbf{y}. This hereditary structure arises directly from the entrywise definition, ensuring that the submatrix inherits the rational form with adjusted parameters. Assuming the x_i are distinct, the y_j are distinct, and x_i + y_j \neq 0 for all i, j, a Cauchy matrix has full rank, equal to \min(m, n). Consequently, square Cauchy matrices are invertible, as their determinant—given by the Cauchy determinant formula—is nonzero. When the parameters satisfy x_i > 0 and y_j > 0 for all i, j over the reals, all entries of the Cauchy matrix are positive. In the symmetric case where x_i = y_i > 0 and the x_i are mutually distinct, the resulting matrix is positive definite. The row sums of a Cauchy matrix take the form \sum_{j=1}^n c_{ij} = \sum_{j=1}^n \frac{1}{x_i + y_j} for each i, with no general closed form available. Similar expressions hold for column sums, reflecting the pairwise sums in the denominator.
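
These structural claims are easy to spot-check numerically; the sketch below uses arbitrary parameter choices and dense NumPy routines:

```python
import numpy as np

x = np.array([1.0, 2.0, 5.0])
y = np.array([0.5, 3.0, 4.0])
C = 1.0 / (x[:, None] + y[None, :])

# Distinct x_i, distinct y_j, nonzero denominators: full rank min(m, n).
print(np.linalg.matrix_rank(C))   # 3

# Hereditary structure: any submatrix is the Cauchy matrix of parameter subsets.
rows, cols = [0, 2], [1, 2]
sub = C[np.ix_(rows, cols)]
print(np.allclose(sub, 1.0 / (x[rows][:, None] + y[cols][None, :])))  # True

# Symmetric case x_i = y_i > 0 with distinct x_i: positive definite,
# so all eigenvalues are positive.
S = 1.0 / (x[:, None] + x[None, :])
print(np.linalg.eigvalsh(S) > 0)  # [ True  True  True]
```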

Connection to Totally Positive Matrices

A square Cauchy matrix C = (c_{ij}) with entries c_{ij} = \frac{1}{x_i + y_j}, where the sequences satisfy 0 < x_1 < x_2 < \dots < x_n and 0 < y_1 < y_2 < \dots < y_n, is totally positive over the reals, meaning all of its minors are positive. This property arises because the strictly increasing order of the parameters ensures that every submatrix inherits a similar structure with positive determinants, as established through determinantal criteria for total positivity. In the symmetric case where x_i = y_i > 0 and the x_i are strictly increasing (yielding a positive definite matrix), the matrix is totally positive. For the more general non-symmetric case with positive distinct x_i, y_j not necessarily ordered, the matrix may fail to be totally positive, but reordering the rows and columns (possibly with different permutations, so that both x and y become increasing) yields a totally positive matrix. Cauchy matrices under these conditions exhibit the behavior characteristic of oscillation matrices, where all leading principal minors are positive. This links them to the broader class of totally nonnegative matrices whose powers become totally positive, with the strict positivity of all minors in ordered Cauchy matrices ensuring the leading minors are strictly positive and facilitating spectral analysis in oscillatory systems. Consider the 2×2 case with 0 < x_1 < x_2 and 0 < y_1 < y_2: C = \begin{pmatrix} \frac{1}{x_1 + y_1} & \frac{1}{x_1 + y_2} \\ \frac{1}{x_2 + y_1} & \frac{1}{x_2 + y_2} \end{pmatrix}. The 1×1 minors are the entries, all positive since x_i > 0 and y_j > 0. The determinant is \frac{(x_2 - x_1)(y_2 - y_1)}{(x_1 + y_1)(x_1 + y_2)(x_2 + y_1)(x_2 + y_2)} > 0, confirming total positivity.
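
Total positivity can be verified by brute force for small n by enumerating every square minor; a sketch (there are exponentially many minors, so this is for illustration only):

```python
import numpy as np
from itertools import combinations

x = np.array([0.5, 1.0, 2.0, 4.0])   # 0 < x_1 < ... < x_n
y = np.array([0.1, 0.7, 1.5, 3.0])   # 0 < y_1 < ... < y_n
C = 1.0 / (x[:, None] + y[None, :])

n = len(x)
all_positive = all(
    np.linalg.det(C[np.ix_(rows, cols)]) > 0
    for k in range(1, n + 1)
    for rows in combinations(range(n), k)
    for cols in combinations(range(n), k)
)
print(all_positive)   # True: every minor of the ordered Cauchy matrix is positive
```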

Determinant

Cauchy Determinant Formula

The determinant of an n \times n Cauchy matrix C with entries C_{ij} = \frac{1}{x_i + y_j}, where the x_i are distinct, the y_j are distinct, and x_i + y_j \neq 0 for all i, j, is given explicitly by \det(C) = \frac{\prod_{1 \leq i < j \leq n} (x_j - x_i)(y_j - y_i)}{\prod_{i=1}^n \prod_{j=1}^n (x_i + y_j)}. This formula expresses the determinant as the ratio of Vandermonde products over the differences in the x-parameters and y-parameters in the numerator, divided by the product of all the denominators appearing in the matrix entries. Equivalent forms of the formula can be obtained by reindexing the products or absorbing signs into the Vandermonde determinants, such as rewriting the y-product as (-1)^{\binom{n}{2}} \prod_{1 \leq i < j \leq n} (y_i - y_j), which aligns with the standard Vandermonde product \prod_{1 \leq i < j \leq n} (z_j - z_i). Under the stated assumptions on the parameters, the denominator is nonzero by construction, and the numerator is nonzero because the x_i and y_j are distinct, ensuring \det(C) \neq 0 and thus confirming the invertibility of C. For n=1, the matrix is the scalar \frac{1}{x_1 + y_1}, so \det(C) = \frac{1}{x_1 + y_1}, which matches the formula with empty products in the numerator taken as 1. For n=2, direct computation yields \det\begin{pmatrix} \frac{1}{x_1 + y_1} & \frac{1}{x_1 + y_2} \\ \frac{1}{x_2 + y_1} & \frac{1}{x_2 + y_2} \end{pmatrix} = \frac{(x_2 - x_1)(y_2 - y_1)}{(x_1 + y_1)(x_1 + y_2)(x_2 + y_1)(x_2 + y_2)}, verifying the general expression with the single term (x_2 - x_1)(y_2 - y_1) in the numerator.
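
A quick numerical check of the closed form against a dense determinant computation (parameters arbitrary):

```python
import numpy as np

def cauchy_det(x, y):
    """Evaluate the Cauchy determinant formula directly."""
    n = len(x)
    num = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            num *= (x[j] - x[i]) * (y[j] - y[i])
    den = np.prod([x[i] + y[j] for i in range(n) for j in range(n)])
    return num / den

x = np.array([1.0, 2.0, 4.0, 7.0])
y = np.array([0.5, 3.0, 5.0, 6.0])
C = 1.0 / (x[:, None] + y[None, :])
# Relative comparison, since the determinant itself is tiny (~3e-8 here).
print(np.allclose(np.linalg.det(C), cauchy_det(x, y), rtol=1e-10, atol=0))  # True
```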

Derivation Outline

The derivation of the Cauchy determinant formula employs Lagrange interpolation, linking the Cauchy matrix to Vandermonde matrices through a structured factorization. Note that the form with addition x_i + y_j is equivalent to the subtraction form x_i - (-y_j), so one works with the nodes t_j = -y_j. Consider the Lagrange basis polynomials for the distinct points t_1, \dots, t_n, defined as L_k(z) = \prod_{\substack{1 \leq m \leq n \\ m \neq k}} \frac{z - t_m}{t_k - t_m}. These polynomials satisfy L_k(t_m) = \delta_{km} and form a basis for polynomials of degree at most n-1; writing p(z) = \prod_{m=1}^n (z - t_m) = \prod_{m=1}^n (z + y_m), they take the form L_k(z) = \frac{p(z)}{(z - t_k)\, p'(t_k)}. Applying the interpolation identity z^l = \sum_k t_k^l L_k(z) at z = x_i gives x_i^l = \sum_k p(x_i)\, C_{ik}\, t_k^l / p'(t_k), which in matrix form reads V(x) = D_1^{-1} C D_2^{-1} V(-y), where V(z) is the Vandermonde matrix with V(z)_{ik} = z_i^{k-1}, D_1 = \operatorname{diag}\big(1/\prod_{k=1}^n (x_i + y_k)\big), and D_2 = \operatorname{diag}\big(\prod_{m \neq k} (y_m - y_k)\big). Hence C = D_1 V(x) V(-y)^{-1} D_2, and taking determinants, \det(C) = \det(D_1) \det(D_2) \frac{\det(V(x))}{\det(V(-y))}. Substituting the known Vandermonde determinant formula, \det(V(z)) = \prod_{1 \leq i < j \leq n} (z_j - z_i), and simplifying the diagonal contributions (the signs introduced by the substitution y \to -y cancel between \det(V(-y)) and \det(D_2)) yields the Cauchy formula \det(C) = \frac{\prod_{1 \leq i < j \leq n} (x_j - x_i)(y_j - y_i)}{\prod_{i=1}^n \prod_{j=1}^n (x_i + y_j)}. This algebraic approach via Lagrange interpolation contrasts with Cauchy's original 1841 derivation, which arose in the study of alternating functions and multiple integrals.
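
The factorization in this outline can be confirmed numerically for small n; note that inverting a Vandermonde matrix is ill-conditioned, so this is a sanity check rather than a computational recipe:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0])
y = np.array([0.5, 3.0, 5.0])
n = len(x)
C = 1.0 / (x[:, None] + y[None, :])

Vx = np.vander(x, increasing=True)    # V(x)_{ik} = x_i^{k-1}
Vmy = np.vander(-y, increasing=True)  # Vandermonde at the nodes t_j = -y_j
# D_1 = diag(1 / prod_k (x_i + y_k)),  D_2 = diag(prod_{m != k} (y_m - y_k))
D1 = np.diag(1.0 / np.prod(x[:, None] + y[None, :], axis=1))
D2 = np.diag([np.prod([y[m] - y[k] for m in range(n) if m != k])
              for k in range(n)])

print(np.allclose(C, D1 @ Vx @ np.linalg.inv(Vmy) @ D2))   # True
```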

Inverse

Explicit Inverse Expression

The inverse of a square Cauchy matrix C \in \mathbb{R}^{n \times n} defined by entries c_{ij} = \frac{1}{x_i + y_j}, where the sets \{x_1, \dots, x_n\} and \{y_1, \dots, y_n\} each consist of distinct real numbers with x_i + y_j \neq 0 for all i, j (guaranteeing nonsingularity), admits a closed-form expression involving products over the defining points. The entries of the inverse are given by (C^{-1})_{ij} = (-1)^{n-1} (x_j + y_i) \prod_{k=1, k \neq i}^n \frac{x_j + y_k}{y_k - y_i} \cdot \prod_{k=1, k \neq j}^n \frac{y_i + x_k}{x_j - x_k}. This can be derived from the connection of Cauchy matrices to polynomial interpolation and the structure of their adjugates, via the substitution y \mapsto -y in the standard formula for the difference form 1/(x_i - y_j). Equivalently, the formula may be expressed using Lagrange interpolation basis polynomials. Define A_j(t) = \prod_{k=1, k \neq j}^n \frac{t - x_k}{x_j - x_k}, the basis polynomial for the points \{x_1, \dots, x_n\} (satisfying A_j(x_\ell) = \delta_{j\ell}), and \tilde{B}_i(s) = \prod_{k=1, k \neq i}^n \frac{s + y_k}{y_k - y_i}, the adjusted basis for the reflected points \{-y_1, \dots, -y_n\}. Then (C^{-1})_{ij} = (x_j + y_i) \, A_j(-y_i) \, \tilde{B}_i(x_j); the prefactor (-1)^{n-1} of the entrywise formula is absorbed into A_j(-y_i), since A_j(-y_i) = (-1)^{n-1} \prod_{k \neq j} \frac{y_i + x_k}{x_j - x_k}, while \tilde{B}_i(x_j) = \prod_{k \neq i} \frac{x_j + y_k}{y_k - y_i}. This representation highlights the interpolatory nature of the inverse, linking it to the unique polynomials of degree at most n-1 that interpolate the delta functions at the respective point sets (x and -y). To verify, consider the case n=2 with x_1=1, x_2=2, y_1=3, y_2=4. The matrix is C = \begin{pmatrix} 1/4 & 1/5 \\ 1/5 & 1/6 \end{pmatrix}, with determinant \det C = 1/600 and inverse C^{-1} = \begin{pmatrix} 100 & -120 \\ -120 & 150 \end{pmatrix}. The entrywise formula gives, for the (1,1) entry, (-1)^{1} (x_1 + y_1) \cdot \frac{x_1 + y_2}{y_2 - y_1} \cdot \frac{y_1 + x_2}{x_1 - x_2} = (-1)(4)\left(\frac{5}{1}\right)\left(\frac{5}{-1}\right) = 100; equivalently, in the Lagrange form, A_1(-3) = \frac{-3 - 2}{1 - 2} = 5 and \tilde{B}_1(1) = \frac{1 + 4}{4 - 3} = 5, so (x_1 + y_1) A_1(-3) \tilde{B}_1(1) = 4 \cdot 5 \cdot 5 = 100. The remaining entries follow in the same way. Direct evaluation of the explicit products yields a dense inverse: after precomputing the node products (O(n) time per node, O(n^2) in total), each entry costs O(1) operations, so the entire inverse is formed in O(n^2). This contrasts with structured algorithms that exploit displacement rank for fast factorization and solving but do not yield the closed form.
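
A sketch implementing the entrywise inverse formula and checking it against a dense inverse (the function name cauchy_inverse is illustrative):

```python
import numpy as np

def cauchy_inverse(x, y):
    """Explicit inverse of the square Cauchy matrix C_ij = 1/(x_i + y_j)."""
    n = len(x)
    B = np.empty((n, n))
    sign = (-1.0) ** (n - 1)
    for i in range(n):
        for j in range(n):
            p1 = np.prod([(x[j] + y[k]) / (y[k] - y[i]) for k in range(n) if k != i])
            p2 = np.prod([(y[i] + x[k]) / (x[j] - x[k]) for k in range(n) if k != j])
            B[i, j] = sign * (x[j] + y[i]) * p1 * p2
    return B

# The n = 2 example from the text: x = (1, 2), y = (3, 4)
x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
C = 1.0 / (x[:, None] + y[None, :])
print(cauchy_inverse(x, y))                                 # [[ 100. -120.]
                                                            #  [-120.  150.]]
print(np.allclose(cauchy_inverse(x, y), np.linalg.inv(C)))  # True
```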

Properties of the Inverse

The inverse of a Cauchy matrix possesses a displacement structure of low rank—displacement rank one, with the roles of the displacement operators interchanged—inherited from the original matrix's rank-one displacement. This structured form enables the representation of the inverse as a diagonally scaled Cauchy matrix or, more generally, as a Cauchy-like matrix with entries expressed as rational functions of the parameters x_i and y_j. Such structure is key to developing efficient algorithms for inversion and related operations in numerical linear algebra. The sum of all entries of the inverse C^{-1} admits a simple closed-form expression: \sum_{k=1}^n x_k + \sum_{k=1}^n y_k, for the Cauchy matrix defined by C_{ij} = 1/(x_i + y_j) with distinct parameters ensuring invertibility. In contrast, the individual row or column sums of C^{-1} lack a comparable closed form and are instead tied to concepts in interpolation theory, where they appear in expressions for error bounds in rational function approximation using Lagrange bases. If the Cauchy matrix C is positive definite—for instance, in the symmetric case x_i = y_i with positive, distinct, increasingly ordered parameters—then C^{-1} is also positive definite, as the inverse of a symmetric positive definite matrix. As an example, consider the 2 \times 2 Cauchy matrix with x = [1, 3] and y = [2, 4]: C = \begin{pmatrix} \frac{1}{3} & \frac{1}{5} \\ \frac{1}{5} & \frac{1}{7} \end{pmatrix}. The row sums of C are \frac{8}{15} and \frac{12}{35}. The inverse C^{-1} has row sums -\frac{15}{2} and \frac{35}{2}, which differ substantially from those of C, while the total entry sum of C^{-1} equals 10, aligning with \sum x_k + \sum y_k = 10.
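
The entry-sum identity and the row-sum values quoted above can be reproduced in a few lines:

```python
import numpy as np

x = np.array([1.0, 3.0])
y = np.array([2.0, 4.0])
C = 1.0 / (x[:, None] + y[None, :])
Cinv = np.linalg.inv(C)

print(C.sum(axis=1))       # row sums of C: [0.5333... 0.3428...] = 8/15, 12/35
print(Cinv.sum(axis=1))    # row sums of the inverse: [-7.5  17.5]
print(Cinv.sum())          # 10.0
print(np.isclose(Cinv.sum(), x.sum() + y.sum()))   # True: entry sum = sum(x) + sum(y)
```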

Generalizations and Special Cases

Cauchy-like Matrices

A Cauchy-like matrix generalizes the standard Cauchy matrix by incorporating diagonal scalings and low-rank additive corrections while preserving a structured form amenable to efficient numerical methods. Specifically, an m \times n matrix K is Cauchy-like if its entries satisfy k_{ij} = \frac{r_i s_j}{x_i - y_j} + u_i v_j, where r, u \in \mathbb{R}^m, s, v \in \mathbb{R}^n, the scalars \{x_i\}_{i=1}^m are distinct, the scalars \{y_j\}_{j=1}^n are distinct, and x_i \neq y_j for all i, j. The standard Cauchy matrix emerges as the special case with u = v = 0 and r = s = \mathbf{1} (the all-ones vector). This entrywise form captures matrices arising in interpolation problems and rational function approximations. More broadly, Cauchy-like matrices are characterized by their low-rank displacement structure. They satisfy the displacement equation XK - KY = G, where X = \operatorname{diag}(x_1, \dots, x_m), Y = \operatorname{diag}(y_1, \dots, y_n), and G is a low-rank matrix: the scaled Cauchy term contributes the rank-one generator rs^\top, while the additive correction uv^\top contributes at most two further rank-one terms, since X(uv^\top) - (uv^\top)Y = (Xu)v^\top - u(Yv)^\top, so G has rank at most 3 for the form above (and exactly rank 1 in the unperturbed Cauchy case). This equation holds for diagonal displacement operators X and Y, which enables the development of fast algorithms exploiting the structure without storing the full matrix explicitly. The low rank of G is key to these computational advantages. Cauchy-like matrices inherit key properties from Cauchy matrices, including conditions for invertibility. A square Cauchy-like matrix is invertible if the x_i are distinct from the y_j and no structural singularities arise, mirroring the nonsingularity of standard Cauchy matrices. The inverse preserves the Cauchy-like structure and low displacement rank, facilitating explicit or recursive computation. Furthermore, the determinant of a Cauchy-like matrix generalizes the classical Cauchy determinant and can be evaluated using techniques applied to the displacement generators, yielding closed-form expressions in terms of the parameters x, y, r, s, u, v. The generalization of Cauchy matrices to Cauchy-like forms traces back to early work on structured matrix inversion, notably explored by Schechter in 1959 for matrices with rational entry dependencies. Subsequent developments in displacement theory have solidified their role in numerical linear algebra.
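
The displacement equation and the rank bound are easy to verify numerically; the sketch below builds a Cauchy-like matrix in the entrywise form above and inspects the rank of XK − KY:

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 5
x = np.arange(1, m + 1, dtype=float)   # distinct x_i
y = x + 0.5                            # distinct y_j with x_i != y_j
r, s = rng.standard_normal(m), rng.standard_normal(n)
u, v = rng.standard_normal(m), rng.standard_normal(n)

K = np.outer(r, s) / (x[:, None] - y[None, :]) + np.outer(u, v)
G = np.diag(x) @ K - K @ np.diag(y)    # displacement of K

# G = r s^T + (Xu) v^T - u (Yv)^T, hence rank(G) <= 3; rank 1 when u = v = 0.
print(np.linalg.matrix_rank(G))        # 3 (generically)
K0 = np.outer(r, s) / (x[:, None] - y[None, :])
print(np.linalg.matrix_rank(np.diag(x) @ K0 - K0 @ np.diag(y)))   # 1
```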

Notable Special Cases

One prominent special case of the Cauchy matrix is the Hilbert matrix, defined for indices i, j = 1, \dots, n by h_{ij} = \frac{1}{i + j - 1}, which corresponds to the choice x_i = i and y_j = j - 1 in the form c_{ij} = 1/(x_i + y_j). This matrix is symmetric and positive definite, with all eigenvalues positive, and it is totally positive, with every minor positive. The Hilbert matrix is notably ill-conditioned, with its condition number growing exponentially with n, making it a classic example for testing numerical algorithms. It arises naturally in the discretization of certain integral equations, such as those involving the kernel \frac{1}{x + y}. Cauchy matrices also exhibit a structural relation to Vandermonde matrices in the context of displacement structure. Specifically, a Cauchy matrix C with parameters x_i and y_j can be expressed as C = D_1 V_x V_y^{-1} D_2, where V_x and V_y are Vandermonde matrices associated with the nodes determined by \{x_i\} and \{y_j\}, and D_1, D_2 are diagonal matrices; this connection highlights how the inverse of a Vandermonde matrix incorporates Cauchy-like structure in interpolation formulas. Over finite fields, Cauchy matrices provide useful constructions in coding theory, particularly for maximum distance separable (MDS) codes. Such matrices are superregular over finite fields, ensuring full rank and utility in Reed-Solomon-like codes. A specific choice of parameters yields a totally positive Cauchy matrix: set x_i = i and y_j = j + 1 for i, j = 1, \dots, n, so the entries are c_{ij} = \frac{1}{i + j + 1}, which are positive and satisfy the ordering conditions (x_1 < x_2 < \dots < x_n and y_1 < y_2 < \dots < y_n) that ensure all minors are positive by the properties of Cauchy kernels. The Hilbert matrix itself provides another totally positive instance under this framework. Notably, the Hilbert matrix exhibits low displacement rank as a Cauchy-like matrix.
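
A short sketch reproducing the Hilbert matrix as a Cauchy matrix and illustrating its rapidly growing condition number:

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix as the Cauchy matrix with x_i = i, y_j = j - 1."""
    i = np.arange(1, n + 1, dtype=float)
    return 1.0 / (i[:, None] + (i - 1)[None, :])   # entries 1 / (i + j - 1)

for n in (4, 8, 12):
    print(n, np.linalg.cond(hilbert(n)))   # grows roughly like e^{3.5 n}
```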

Algorithms and Applications

Efficient Computation Algorithms

Cauchy matrices admit efficient algorithms for key linear algebra operations due to their low displacement rank, typically one, which allows exploitation of structured computations rather than dense O(n^3) methods. Matrix-vector multiplication with a Cauchy matrix can be performed naively in O(n^2) time by direct summation, but approximate methods achieve subquadratic complexity. The fast multipole method (FMM), adapted to the rational kernel of Cauchy matrices, enables approximate multiplication in O(n log^2 n) operations by representing the matrix in a hierarchically semiseparable form and using multipole expansions about the poles x_i and y_j. This approach is particularly useful for large-scale applications where exactness is traded for speed, with the relative error bounded by a user-specified tolerance. For exact LU factorization, the Gohberg-Kailath-Olshevsky (GKO) algorithm exploits the displacement structure to compute the factorization in O(n^2) time, a significant improvement over general dense methods. The algorithm proceeds by updating the low-rank displacement generators during Gaussian elimination with partial pivoting, preserving the structure throughout. This method applies directly to Cauchy matrices as a special case of displacement-rank-one matrices. Solving linear systems involving Cauchy matrices leverages the above factorizations or explicit forms. The GKO-based LU factorization allows exact solution in O(n^2) time via forward and backward substitution, while using the explicit inverse formula also requires O(n^2) operations to compute the solution as a matrix-vector product. However, Cauchy matrices are often ill-conditioned, with condition numbers growing exponentially in n, leading to numerical instability in floating-point arithmetic for even moderate n; approximate methods like FMM-based solvers achieve O(n log^3 n) time with controlled error but may amplify perturbations. Matrix inversion can be computed directly from the explicit formula in O(n^2) time: the required products over the nodes can be precomputed in O(n) time per node and then reused, so each inverse entry costs O(1) operations. Alternatively, fast structured inversion via displacement operators uses rank-1 updates to the generators, yielding O(n^2) complexity while maintaining the Cauchy-like structure of the inverse. These methods ensure efficiency but inherit the ill-conditioning of the original matrix.
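
The following toy sketch illustrates the generator-update idea behind GKO-type factorization, omitting the partial pivoting of the full algorithm for brevity; it computes an LU factorization of a Cauchy-like matrix in O(n^2) arithmetic, working only with the displacement generators:

```python
import numpy as np

def gko_lu(x, y, G, H):
    """Toy GKO-style LU of a Cauchy-like matrix from its generators.

    The matrix K satisfies diag(x) @ K - K @ diag(y) = G @ H.T, i.e.
    k_ij = (G[i] . H[j]) / (x_i - y_j).  No pivoting (assumes nonzero
    pivots); the real GKO algorithm adds partial pivoting.
    """
    n = len(x)
    G, H = G.astype(float).copy(), H.astype(float).copy()
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = (H[k:] @ G[k]) / (x[k] - y[k:])   # row k of current Schur complement
        col = (G[k:] @ H[k]) / (x[k:] - y[k])        # column k
        L[k:, k] = col / col[0]
        # Rank-one generator updates produce the next Schur complement's generators.
        G[k + 1:] -= np.outer(L[k + 1:, k], G[k])
        H[k + 1:] -= np.outer(U[k, k + 1:] / U[k, k], H[k])
    return L, U

x = np.array([4.0, 5.0, 6.0, 7.0])
y = np.array([0.0, 1.0, 2.0, 3.0])
G = H = np.ones((4, 1))              # plain Cauchy case: k_ij = 1 / (x_i - y_j)
L, U = gko_lu(x, y, G, H)
C = 1.0 / (x[:, None] - y[None, :])
print(np.allclose(L @ U, C))         # True
```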

Applications in Numerical Analysis

Cauchy matrices play a significant role in interpolation and approximation problems within numerical analysis. They naturally emerge in the formulation of error matrices for Lagrange interpolation at distinct points, where the entries are reciprocals of differences between nodes and poles, facilitating the computation of interpolation errors. Furthermore, in computer algebra, Cauchy matrices underpin efficient algorithms for multipoint evaluation and interpolation of polynomials, enabling stable and fast computations even for high-degree approximations. In the context of integral equations, the Hilbert matrix—a prominent special case of the Cauchy matrix—arises as the discretization of the Fredholm integral operator with kernel K(x,y) = \frac{1}{x+y}, commonly used to model problems in potential theory and boundary value problems. This matrix facilitates numerical solutions to such Fredholm equations of the second kind through collocation or Galerkin methods, where the system's ill-conditioning highlights the need for regularization techniques. Similarly, variants of Cauchy matrices appear in Nyström-type discretizations of integral equations, supporting iterative solvers for time-dependent problems such as those in wave propagation or heat conduction. Cauchy matrices find applications in scientific computing, particularly through their connection to multipole expansions in the fast multipole method (FMM). This method exploits the Cauchy integral formula to accelerate the summation of pairwise interactions in kernel-based computations, such as those arising in electromagnetic simulations or wave propagation, where fast matrix-vector products reduce complexity from O(N^2) to O(N) or O(N log N). In signal processing, these expansions enable efficient approximation of filtering operations for digital filters, improving performance in real-time signal analysis. Over finite fields, Cauchy matrices are utilized in coding theory to construct maximum distance separable (MDS) codes, which achieve optimal error-correcting capability for given code lengths and dimensions. These matrices serve as generator or parity-check matrices in lightweight MDS constructions, particularly over fields of characteristic 2, supporting applications in distributed storage and cryptography where efficient encoding and decoding are essential. The invertibility and explicit formulas of Cauchy matrices ensure the MDS property, enabling correction of multiple symbol errors. A key challenge in numerical applications of Cauchy matrices is their potential ill-conditioning, exemplified by the Hilbert matrix, whose 2-norm condition number grows asymptotically as \kappa_2(H_n) \sim e^{3.5n}. This exponential growth, even for modest dimensions, amplifies rounding errors in standard linear algebra routines, underscoring the importance of structure-exploiting algorithms to maintain accuracy in practical computations.
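
The superregularity underlying the MDS construction can be illustrated over a small prime field. This sketch checks, via the Cauchy determinant formula evaluated modulo p, that every square submatrix of a Cauchy matrix over GF(p) is nonsingular (the field size and parameters are chosen by hand for illustration):

```python
from itertools import combinations

p = 13
xs, ys = [1, 2, 3, 4], [5, 6, 7, 8]   # distinct mod p, with x_i + y_j != 0 mod p

def cauchy_det_mod(rows, cols):
    """det of the Cauchy submatrix C[rows, cols] mod p, via the closed form."""
    num = den = 1
    for a in range(len(rows)):
        for b in range(a + 1, len(rows)):
            num = num * (xs[rows[b]] - xs[rows[a]]) * (ys[cols[b]] - ys[cols[a]]) % p
    for i in rows:
        for j in cols:
            den = den * (xs[i] + ys[j]) % p
    return num * pow(den, p - 2, p) % p   # division via Fermat inverse

nonsingular = all(
    cauchy_det_mod(r, c) != 0
    for k in range(1, 5)
    for r in combinations(range(4), k)
    for c in combinations(range(4), k)
)
print(nonsingular)   # True: every square submatrix is invertible over GF(13)
```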

References

  1. [1]
    [PDF] The entry sum of the inverse Cauchy matrix
    Sep 15, 2022 · This matrix C is known as the Cauchy matrix, and has been studied for 180 years2. ... [1] Augustin-Louis Cauchy, Memoire sur les fonctions ...
  2. [2]
    Cauchy Determinant Formula | Tom Alberts -- University of Utah
    Cauchy Determinant Formula. Main Result. This note is about Cauchy matrices of ... Some of the argument is already on the Wikipedia page for Cauchy matrices.Missing: mathematics | Show results with:mathematics
  3. [3]
    [PDF] Matrix-vector Product for Confluent Cauchy-like Matrices with ...
    It is immediate to see that for a Cauchy matrix [1/(x -y )], the matrix A C -CA is the all-one matrix, and hence rank(A C -CA ) = 1. This observation is used to ...
  4. [4]
    Notes on Hilbert and Cauchy matrices
    ### Summary of Cauchy Matrix Information
  5. [5]
    [PDF] Cauchy Pairs and Cauchy Matrices - arXiv
    Oct 8, 2014 · We recall the notion of a Cauchy matrix. Definition 1.1. A matrix C ∈ MatX(K) is called Cauchy whenever there exist mutually distinct scalars ...
  6. [6]
    What is known about the spectrum of a Cauchy matrix? - MathOverflow
    Aug 3, 2013 · A Cauchy matrix is an m-by-n matrix A whose elements have the form ai,j=1xi−yj, with xi≠yj for all (i,j), and the xi's and yi's belong to a ...
  7. [7]
    CauchyMatrix - Wolfram Language Documentation
    CauchyMatrix[x,y] represents the Cauchy matrix given by the generating vectors x and y as a structured array. CauchyMatrix[x]
  8. [8]
    Cauchy pairs and Cauchy matrices - ScienceDirect
    Let Mat X ( K ) denote the K -algebra consisting of the matrices with entries in K and rows and columns indexed by X . We recall the notion of a Cauchy matrix.
  9. [9]
    [PDF] arXiv:1907.08616v1 [math.CO] 19 Jul 2019
    Jul 19, 2019 · In 1841, Augustin Louis Cauchy introduced a certain type of matrices ... The determinant of a Cauchy's matrix is known as Cauchy's determinant in.
  10. [10]
    [PDF] Explicit Codes Minimizing Repair Bandwidth for Distributed Storage
    Sep 5, 2009 · Any submatrix of a Cauchy matrix is full rank. The minimum field size required for the construction of this. Cauchy matrix is: q ≥ α + n ...
  11. [11]
    [PDF] On Block Security of Regenerating Codes at the MBR Point ... - arXiv
    Feb 18, 2014 · A Cauchy matrix has a special property that any submatrix is again a Cauchy matrix. It is well known that any square Cauchy matrix is invertible ...
  12. [12]
    [PDF] arXiv:1405.6363v2 [math.SP] 28 May 2014
    May 28, 2014 · The Cauchy matrix has been studied and applied in algorithm ... Hadamard product of finitely many positive semi-definite (positive definite ...
  13. [13]
  14. [14]
    Cauchy matrix - PlanetMath
    Mar 22, 2013 · A square Cauchy matrix is non-singular. Any submatrix Mathworld Planetmath of a rectangular Cauchy matrix has full rank.
  15. [15]
    [PDF] from cauchy's determinant formula to bosonic and fermionic ...
    Abstract. Cauchy's determinant formula (1841) involving det((1−uivj )−1) is a fundamen- tal result in symmetric function theory.
  16. [16]
    Displacement Structure: Theory and Applications | SIAM Review
    The new displacement structure is used to obtain a generalized Schur algorithm for fast triangular and orthogonal factorizations of all such matrices and well- ...
  17. [17]
    Orthogonal Cauchy-like matrices | Numerical Algorithms
    Sep 5, 2022 · On the other hand, an invertible matrix X is orthogonal if and only if it fulfills the identity XTX = I, which boils down to about n2/2 ...
  18. [18]
    On a property of Cauchy-like matrices - ScienceDirect.com
    We introduce in this Note a property connecting the generators of a Cauchy-like matrix to the generators of its Schur complements.Missing: determinant | Show results with:determinant
  19. [19]
    [PDF] arXiv:2411.00187v1 [math.FA] 31 Oct 2024
    Oct 31, 2024 · It turns out that Hilbert's matrix is strongly ill-conditioned and that even its finite n × n sections, denoted by Hn, “almost fail” to be ...
  20. [20]
  21. [21]
    [PDF] Transformations of Matrix Structures Work Again - arXiv
    Mar 2, 2013 · ). Equation (3) expresses a Cauchy matrix Cs,t through the Vandermonde matrix Vs, the inverse. V −1 t of the Vandermonde matrix Vt, the ...
  22. [22]
    [PDF] Superregular matrices over small finite fields - arXiv
    Aug 1, 2020 · For instance, Cauchy matrices are full superregular and can be used to build the so-called Reed-Solomon block codes.
  23. [23]
    [PDF] arXiv:math/9912128v1 [math.RA] 15 Dec 1999
    Dec 15, 1999 · Introduction. A matrix is totally positive (resp. totally nonnegative) if all its minors are pos- itive (resp. nonnegative) real numbers.
  24. [24]
    [PDF] arXiv:1811.08406v1 [math.NA] 20 Nov 2018
    Nov 20, 2018 · Hilbert matrix is a special case of Cauchy matrix with generic entries cij = 1/(xi − yj). The condition number of this matrix is κ2(A)=1.6e ...
  25. [25]
    inversion formulas and fast algorithms - ScienceDirect
    In this paper, we use another way called displacement structure approach to deal with matrices of this kind. We show that the Cauchy and Cauchy–Vandermonde ...
  26. [26]
    [PDF] Fast Approximate Computations with Cauchy Matrices and ... - arXiv
    Apr 17, 2017 · We mostly work with Vandermonde and Cauchy matrices and next recall some of their basic properties. The scalars s0,...,sm−1,t0,...,tn−1 ...
  27. [27]
  28. [28]
    [PDF] arXiv:1005.0671v1 [math.NA] 5 May 2010
    May 5, 2010 · This observation leads to the GKO-Cauchy algorithm of Gohberg, Kailath and Olshevsky [33] for fast factorization of Cauchy-type matrices with ...
  29. [29]
    Pivoting and backward stability of fast algorithms for solving Cauchy ...
    This allows us to use the proposed algorithms for Cauchy matrices for rapid and accurate solution of Vandermonde and Chebyshev–Vandermonde linear systems.Missing: instability | Show results with:instability
  30. [30]
    Interpolation by Cauchy–Vandermonde systems and applications
    Cauchy–Vandermonde systems consist of rational functions with prescribed poles. They are complex ECT-systems allowing Hermite interpolation for any ...
  31. [31]
    Numerical Solution For Fredholm Integral Equation With Hilbert Kernel
    Here, the Fredholm integral equation with Hilbert kernel is solved numerically, using two different methods. Also the error, in each case, is estimated.
  32. [32]
    On the numerical solutions of Fredholm–Volterra integral equation
    Toeplitz matrix method and the product Nystrom method are described for mixed Fredholm–Volterra singular integral equation of the second kind.
  33. [33]
    Cauchy Fast Multipole Method for General Analytic Kernels - SIAM.org
    The fast multipole method (FMM) is a technique allowing the fast calculation of long-range interactions between 𝑁 points in O ⁡ ( 𝑁 ) or O ⁡ ( 𝑁 ⁢ l o g ⁡ 𝑁 ) ...
  34. [34]
    Fast Multipole Method Using the Cauchy Integral Formula
    The fast multipole method (FMM) is a technique allowing the fast calculation of long-range interactions between N points in O(N) or O(NlnN) steps with some ...
  35. [35]
    Involutory-Multiple-Lightweight MDS Matrices based on Cauchy ...
    In this paper, by using some extensions of Cauchy matrices, we introduce several new forms of MDS matrices over finite fields of characteristic 2. A known ...
  36. [36]
    ELEMENTS OF MDS CODES VIA EXTENDED CAUCHY MATRICES
    In most cases, error patterns with slightly more than D/2 error can be corrected by an (N, K) maximum- distance separable (MDS) code. Earlier the complexity of ...
  37. [37]
    [PDF] Four Cholesky Factors of Hilbert Matrices and their Inverses
    Aug 26, 2011 · Condition numbers of Hilbert matrices HN,K are known to grow rapidly (exponentially) with their dimension N. They serve as test data for inv, ...