
Hankel matrix

A Hankel matrix is a square matrix in which each ascending skew-diagonal from left to right is constant, meaning the entry in the ith row and jth column depends only on the sum i + j. Named after the mathematician Hermann Hankel, who introduced the concept in his 1861 dissertation, these matrices can be finite or infinite-dimensional and are also known as persymmetric or orthosymmetric matrices. Hankel matrices exhibit several notable algebraic properties, including finite rank when their entries arise from the power series coefficients of a rational function, as established by Kronecker in 1881. They are positive semidefinite if the entries represent moments of a positive measure on the real line, a result due to Hamburger in 1920. Boundedness on the sequence space \ell^2 holds if the symbol is bounded on the unit circle, per Nehari's theorem from 1957. A classic example is the Hilbert matrix, with entries 1/(i + j - 1), which is bounded with norm \pi. In applications, Hankel matrices play a central role in operator theory, where Hankel operators facilitate solutions to problems like the Nevanlinna–Pick and Nehari problems. They are essential in approximation theory for uniform rational approximation via singular value analysis. In control theory, low-rank Hankel matrix minimization enables system identification and realization from input-output data. In signal processing, they support tasks such as denoising through rank reduction of the signal's Hankel matrix and spectral super-resolution via eigenvalue decomposition.

Definition and Construction

Formal Definition

A Hankel matrix, named after the German mathematician Hermann Hankel (1839–1873), is a square or rectangular matrix in which the entries are constant along each ascending skew-diagonal from left to right. For an n \times n Hankel matrix H, the general form is given by H_{i,j} = a_{i+j-1}, \quad i,j = 1, 2, \dots, n, where \{a_k\}_{k=1}^{2n-1} is a fixed sequence of entries. This construction extends naturally to rectangular m \times n Hankel matrices (with m \leq n, say), where H_{i,j} = a_{i+j-1}, \quad i=1,\dots,m, \quad j=1,\dots,n, drawing from a sequence \{a_k\}_{k=1}^{m+n-1}. Hankel's original work leading to this matrix structure arose in his 1861 doctoral thesis on a special class of symmetric determinants, studied in connection with continued fractions.
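The defining rule is easy to realize in code. The following minimal sketch (with an arbitrary example sequence, using 0-based indexing so that H[i, j] = a[i + j]) builds a 4 × 4 Hankel matrix with scipy.linalg.hankel, which takes the first column and the last row:

```python
import numpy as np
from scipy.linalg import hankel

# Build an n x n Hankel matrix with H[i, j] = a[i + j] (0-based)
# from a sequence of length 2n - 1: pass the first column and the
# last row, which together cover the whole sequence.
a = np.arange(1, 8)          # a_0, ..., a_6, so n = 4
n = (len(a) + 1) // 2
H = hankel(a[:n], a[n - 1:])
print(H)
# [[1 2 3 4]
#  [2 3 4 5]
#  [3 4 5 6]
#  [4 5 6 7]]
```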

Examples and Construction Methods

A Hankel matrix of order n can be constructed from a given sequence a_0, a_1, \dots, a_{2n-2} by placing the element a_{i+j} in the (i,j)-th position, using 0-based indexing, which ensures constant values along each anti-diagonal. For instance, consider the sequence [1, 2, 3, 2, 1] to form a 3×3 Hankel matrix: \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 2 \\ 3 & 2 & 1 \end{pmatrix} This placement follows directly from the rule, with the first row as a_0, a_1, a_2, the second as a_1, a_2, a_3, and the third as a_2, a_3, a_4. The Hilbert matrix serves as a classic example of a Hankel matrix, defined entrywise by H_{i,j} = \frac{1}{i+j-1} for i,j = 1, \dots, n, using 1-based indexing. For n=3, it takes the form: \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \end{pmatrix} This matrix is notoriously ill-conditioned, with its condition number growing exponentially as n increases, making it challenging for numerical computations. In practical applications, Hankel matrices are often generated from data sequences such as moments in system theory, where the matrix entries are populated with successive moments (Markov parameters) of a system's impulse response to form a block-Hankel structure for system identification and model realization. Similarly, in discrete-time system identification, they can be built from correlation functions by assembling block-Hankel matrices H_y and H_u from correlation estimates R_{yr}(\tau) over time offsets \tau, facilitating identification methods like zero-space projection.
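The ill-conditioning of the Hilbert matrix can be observed directly. A small sketch (assuming NumPy and SciPy; scipy.linalg.hilbert constructs the matrix) prints 2-norm condition numbers for a few sizes:

```python
import numpy as np
from scipy.linalg import hilbert

# Condition numbers of the Hilbert (Hankel) matrix grow roughly
# exponentially with n, as the printed values illustrate.
for n in (3, 6, 9, 12):
    print(n, f"{np.linalg.cond(hilbert(n)):.3e}")
```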

Algebraic and Analytic Properties

Structural Properties

Square Hankel matrices are symmetric by construction, since the entry in position (i, j) equals the entry in position (j, i) for all i, j, as both depend on the sum i + j of the generating indices. This holds for any generating sequence \{a_k\}, making the matrix Hermitian if the sequence is real-valued. A square Hankel matrix becomes centrosymmetric (symmetric with respect to both the main diagonal and the center) if the generating sequence satisfies a_k = a_{2n-2-k} for an n × n matrix, ensuring h_{i,j} = h_{n+1-i, n+1-j}. Hankel matrices exhibit a close algebraic relation to Toeplitz matrices through the exchange (or reversal) matrix J, which has ones on its anti-diagonal and zeros elsewhere, defined as J_{i,j} = \delta_{i, n+1-j}. Specifically, any Hankel matrix H can be expressed as H = J T, where T is a Toeplitz matrix with appropriately chosen entries from the generating sequence. This transformation highlights the structural duality between the two classes: reversing the rows (or columns) of a Hankel matrix yields a Toeplitz matrix, and vice versa, facilitating shared algorithmic techniques for computation and analysis. Hankel matrices are persymmetric when symmetric across the anti-diagonal, meaning h_{i,j} = h_{n+1-j, n+1-i} for an n × n matrix; this property holds if the generating sequence is symmetric in the sense a_k = a_{2n-2-k}, aligning the constant skew-diagonals with the required reflection symmetry. In general, this persymmetry underscores the matrix's invariance under reversal operations along the anti-diagonal, a feature that preserves the constant skew-diagonal structure under such transformations. The determinant of a Hankel matrix constructed from the coefficients of a binary form is termed a catalecticant, and it bears a direct relation to algebraic resultants. For instance, the determinant of the Hankel matrix formed by the moments or coefficients of two polynomials equals (up to a scalar) the resultant of those polynomials, providing a tool for detecting common roots or assessing polynomial independence. This connection is foundational in invariant theory and algebraic geometry, where catalecticant determinants quantify syzygies and ideal memberships.
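The Hankel-Toeplitz duality can be checked in a few lines. The sketch below (with an arbitrary integer sequence) forms the exchange matrix J and verifies that T = J H is Toeplitz and that H = J T:

```python
import numpy as np
from scipy.linalg import hankel

# Row reversal turns constant anti-diagonals into constant diagonals:
# with J the exchange matrix, T = J @ H is Toeplitz and H = J @ T.
n = 4
a = np.arange(1, 2 * n)                # generating sequence, length 2n - 1
H = hankel(a[:n], a[n - 1:])
J = np.fliplr(np.eye(n, dtype=int))    # ones on the anti-diagonal
T = J @ H
print(T)                               # constant along each diagonal
print(np.array_equal(H, J @ T))        # True
```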

Spectral Properties

The eigenvalues of a Hankel matrix generated by a totally positive sequence exhibit interlacing properties with respect to those of its principal submatrices. Specifically, if the generating sequence \{a_k\}_{k=0}^\infty is totally positive (meaning all finite Hankel submatrices have positive determinants), the resulting n \times n Hankel matrix H_n with entries h_{i,j} = a_{i+j} (indices starting from 0) is a symmetric positive definite matrix. By Cauchy interlacing, the eigenvalues \lambda_1^{(n)} \leq \cdots \leq \lambda_n^{(n)} of H_n satisfy \lambda_k^{(n)} \leq \lambda_k^{(n-1)} \leq \lambda_{k+1}^{(n)} for k = 1, \dots, n-1, where \lambda_k^{(n-1)} denote the eigenvalues of the leading principal submatrix of order n-1. The spectral norm of a Hankel matrix is bounded above by its maximum row sum norm, which depends on the generating sequence. For a positive Hankel matrix, all eigenvalues lie within the interval [\sigma, \Sigma], where \sigma is the minimum row sum and \Sigma is the maximum row sum; thus, the spectral norm satisfies \|H\|_2 \leq \Sigma. Under the additional condition that the sequence is strictly increasing (a_{k+1} > a_k), the eigenvalues cluster into two disjoint intervals separated by a gap, providing sharper localization. The Frobenius norm is given by \|H_n\|_F = \sqrt{\sum_{k=0}^{2n-2} m_k |a_k|^2}, where m_k = \min(k+1, 2n-1-k, n) is the multiplicity of a_k in H_n. Hankel matrices constructed from moment sequences of smooth densities, such as those associated with exponential weights W(t) = \exp(-Q(t)) where Q is a polynomial, display clustered eigenvalues leading to large condition numbers. The largest eigenvalue grows at most polynomially in n, while the smallest eigenvalue decays exponentially fast to zero, resulting in a condition number \kappa_2(H_n) that grows exponentially with n. For example, in the case of exponential weights on [0,1] or [-1,1], explicit rates show \kappa_2(H_n) \asymp n^{2\alpha+1} 2^{c n} for suitable constants \alpha, c > 0 depending on the degree of Q. This ill-conditioning is characteristic of Hankel matrices approximating continuous measures with analytic densities.
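The multiplicity formula for the Frobenius norm is straightforward to verify numerically. This sketch (with a random generating sequence) compares both sides of the identity:

```python
import numpy as np
from scipy.linalg import hankel

# Check ||H_n||_F^2 = sum_k m_k |a_k|^2 with m_k = min(k+1, 2n-1-k, n):
# a_k fills the k-th anti-diagonal, whose length is m_k.
rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal(2 * n - 1)
H = hankel(a[:n], a[n - 1:])
m = np.array([min(k + 1, 2 * n - 1 - k, n) for k in range(2 * n - 1)])
print(np.isclose(np.linalg.norm(H, "fro") ** 2, np.sum(m * a**2)))  # True
```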

Special Cases

Constant Hankel matrices, where every entry is identical, say equal to a constant c, form a special subclass with a rank-1 structure. Such a matrix can be expressed as the outer product c \mathbf{1} \mathbf{1}^T, where \mathbf{1} is the all-ones vector, confirming its low rank and preservation of the Hankel form since all anti-diagonals are constant. This property arises because the generating sequence is constant, leading to uniform entries across the matrix. Hankel matrices of Markov type, often arising in system identification from Markov parameters of linear systems, exhibit finite rank when generated by rational functions. According to Kronecker's theorem, an infinite Hankel matrix has finite rank n if and only if its symbol (the generating function) is a proper rational function of degree n, such as f(z) = p(z)/q(z) with coprime polynomials p and q of degrees at most n. This finite-rank property distinguishes them from general Hankel matrices and enables compact representations in applications. Idempotent Hankel matrices satisfy H^2 = H, a condition that positions them as projection operators onto their column space in the underlying vector space. For finite-dimensional cases, the entries must align such that the matrix's structure enforces this idempotency, often requiring specific patterns in the generating sequence to ensure the product yields the original matrix. In sequence spaces like \ell^2, such matrices correspond to orthogonal projections onto finite-dimensional subspaces invariant under the shift operator, though explicit constructions are constrained by the Hankel structure. Characterizations of their sign patterns reveal that idempotent Hankel matrices typically have non-positive entries on certain anti-diagonals to maintain the projection property. Hankel matrices possess a low displacement rank, typically at most 2, defined as the rank of the displacement Z H - H Z^T, where Z is the downward shift matrix. This low displacement rank, differing from the higher ranks of unstructured matrices, facilitates fast algorithms for inversion and factorization. Notably, it underpins O(n^2) methods analogous to the Levinson-Durbin algorithm for Toeplitz systems, enabling efficient solutions to linear systems with Hankel coefficient matrices through displacement-friendly operations like the Schur algorithm.
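The displacement-rank bound is likewise easy to confirm. The following sketch (random sequence, downward-shift matrix Z) shows that Z H - H Z^T vanishes off the first row and column, so its rank is 2 for a generic sequence:

```python
import numpy as np
from scipy.linalg import hankel

# Displacement of a Hankel matrix: interior entries of Z H - H Z^T
# cancel because a_{(i-1)+j} = a_{i+(j-1)}, leaving only the first
# row and column nonzero, so the rank is at most 2.
rng = np.random.default_rng(1)
n = 6
a = rng.standard_normal(2 * n - 1)
H = hankel(a[:n], a[n - 1:])
Z = np.eye(n, k=-1)                    # downward shift
D = Z @ H - H @ Z.T
print(np.linalg.matrix_rank(D))        # 2 for generic sequences
```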

Hankel Operators

Definition and Matrix Representation

A Hankel operator is defined on the Hardy space H^2 of the unit disk, which consists of analytic functions on the open unit disk with square-integrable boundary values on the unit circle \mathbb{T}. Given a symbol \phi \in L^\infty(\mathbb{T}), the Hankel operator \Gamma_\phi: H^2 \to (H^2)^\perp is given by \Gamma_\phi f = P_- (\phi f), where P_- is the orthogonal projection from L^2(\mathbb{T}) onto the orthogonal complement (H^2)^\perp of H^2 in L^2(\mathbb{T}). This formulation captures the operator's action of multiplying by the symbol and projecting onto the anti-analytic component. In the standard orthonormal basis \{z^k\}_{k=0}^\infty for H^2, the infinite matrix representation of \Gamma_\phi has entries h_{i,j} = \hat{\phi}(i+j) for i,j \geq 0, where \hat{\phi}(n) denotes the nth Fourier coefficient of \phi, specifically the coefficient of z^{-n} in the expansion of \phi on \mathbb{T}. This results in a Hankel matrix where each skew-diagonal is constant, reflecting the operator's structure derived from the symbol's coefficients. Finite sections of the Hankel operator are obtained by truncating the infinite matrix to its top-left n \times n submatrix, which approximates the action of \Gamma_\phi on finite-dimensional subspaces spanned by \{1, z, \dots, z^{n-1}\}. These finite-rank approximations are useful for numerical computation and converge to the full operator as n \to \infty under appropriate conditions on \phi. Kronecker's theorem states that a Hankel operator \Gamma_\phi has finite rank if and only if the anti-analytic part P_- \phi is a rational function, in which case the rank equals the degree of P_- \phi (or the degree of the denominator in its reduced form). This characterization, originally from 1881, highlights the connection between the symbol's analytic structure and the operator's dimensionality.
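Kronecker's theorem can be illustrated with finite sections. The sketch below (a minimal example, assuming the geometric coefficient sequence a_k = r^k, which corresponds to a degree-one rational anti-analytic part) shows that every finite section has rank exactly 1:

```python
import numpy as np
from scipy.linalg import hankel

# Finite sections of the Hankel matrix (r^{i+j}) factor as the outer
# product of (1, r, ..., r^{n-1}) with itself, hence have rank 1,
# matching the degree of the rational symbol r/(z - r).
r, n = 0.5, 8
a = r ** np.arange(2 * n - 1)
H = hankel(a[:n], a[n - 1:])           # H[i, j] = r**(i + j)
print(np.linalg.matrix_rank(H))        # 1
```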

Analytic Properties

Hankel operators with analytic symbols \phi, defined as \Gamma_\phi f = P_+ (\phi \tilde{f}) for f \in H^2, where P_+ is the orthogonal projection onto H^2 and \tilde{f}(e^{i\theta}) = f(e^{-i\theta}), exhibit key analytic properties rooted in function theory on the unit disk. The boundedness of \Gamma_\phi on H^2 is characterized precisely by the membership of the symbol \phi in the space BMOA of analytic functions of bounded mean oscillation, where BMOA consists of the analytic functions whose boundary values have bounded mean oscillation on the unit circle. Moreover, the operator norm satisfies \|\Gamma_\phi\| = \mathrm{dist}(\phi, H^\infty), the distance from \phi to the space of bounded analytic functions, as established by Nehari's theorem. This equivalence highlights the intimate connection between the operator's boundedness and the oscillatory behavior of \phi's boundary values. Compactness of the Hankel operator \Gamma_\phi further refines this condition: \Gamma_\phi is compact on H^2 if and only if \phi belongs to VMOA, the closure of the analytic polynomials in the BMOA norm, also known as the space of analytic functions of vanishing mean oscillation. This captures symbols whose boundary oscillations diminish in a controlled manner, leading to operators approximable by finite-rank ones. The result extends the work of Hartman on compactness criteria. The spectral properties of \Gamma_\phi are equally tied to the symbol's boundary behavior. The essential spectrum \sigma_\mathrm{ess}(\Gamma_\phi) coincides with the essential range of \phi on the unit circle, the set of \lambda \in \mathbb{C} such that the preimage under \phi has positive measure for every neighborhood of \lambda. Nehari's theorem reinforces this by linking the essential norm \|\Gamma_\phi\|_\mathrm{ess} to the distance from \phi to H^\infty, providing a quantitative measure of the symbol's "non-analytic" part. For Fredholm properties, when \phi \in H^\infty + C(\mathbb{T}) with finite winding number, \Gamma_\phi is Fredholm, and its index \mathrm{ind}(\Gamma_\phi) = -\mathrm{wind}(\phi, 0), where \mathrm{wind}(\phi, 0) denotes the winding number of \phi around 0 along the unit circle. This index theory underscores the topological role of the symbol in determining the operator's kernel and cokernel dimensions.

Hankel Matrix Transform

Definition

In combinatorics, the Hankel matrix transform (also known as the Hankel determinant transform) of a sequence \{a_n\}_{n=0}^\infty is defined as the sequence \{\Delta_n\}_{n=1}^\infty, where each \Delta_n is the determinant of the n \times n Hankel matrix H_n formed by the sequence entries. Specifically, the (i,j)-th entry of H_n (with indices starting from 0) is (H_n)_{i,j} = a_{i+j} for i,j = 0, 1, \dots, n-1, so the matrix utilizes terms from a_0 up to a_{2n-2}. Thus, \Delta_n = \det(H_n). This mapping from sequences to their associated determinant sequences encapsulates structural information about the original sequence in a compact form. The concept of the Hankel transform was formally introduced by Layman in 2001 as a tool for analyzing integer sequences through their determinants, though evaluations of such Hankel determinants date back to earlier combinatorial investigations. It maintains deep connections to continued fractions, where the transform aids in deriving explicit forms for sequence representations, and to orthogonal polynomials, in which the determinants appear in moment functionals and recurrence relations for the polynomials. A representative example arises with the geometric sequence a_k = r^k for some constant r \neq 0. Here, the Hankel matrix H_n takes the form (H_n)_{i,j} = r^{i+j} = (r^i)(r^j), which is the outer product of the vector (1, r, \dots, r^{n-1}) with itself, resulting in a rank-1 matrix for n > 1. Consequently, \Delta_n = 0 for n > 1, while \Delta_1 = a_0 = r^0 = 1; this follows directly from the property that the determinant of a rank-deficient matrix vanishes. Notably, the transform \{\Delta_n\} exhibits invariance under the binomial transform of \{a_n\}, meaning that applying the binomial transform b_n = \sum_{k=0}^n \binom{n}{k} a_k to the original sequence yields the same Hankel transform as \{a_n\}. This property underscores the transform's robustness to certain modifications.
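As a concrete computation, the sketch below (a small helper applied to the classical fact that the Catalan numbers have the all-ones Hankel transform) evaluates the transform directly from determinants:

```python
import numpy as np
from math import comb

def hankel_transform(a):
    """Delta_n = det (a_{i+j})_{0 <= i,j < n} for n = 1, 2, ..."""
    n_max = (len(a) + 1) // 2
    return [round(np.linalg.det(np.array(
        [[a[i + j] for j in range(n)] for i in range(n)], float)))
        for n in range(1, n_max + 1)]

# Catalan numbers: their Hankel transform is the all-ones sequence.
catalan = [comb(2 * k, k) // (k + 1) for k in range(9)]
print(hankel_transform(catalan))   # [1, 1, 1, 1, 1]
```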

Key Properties

The Hankel matrix transform of a sequence \{a_k\}_{k=0}^\infty produces the sequence of its Hankel determinants \Delta_n = \det((a_{i+j})_{0 \leq i,j \leq n-1}), which encode key structural information about the original sequence. When the sequence \{a_k\} arises as the moments of a positive measure \mu on the real line, i.e., a_k = \int x^k \, d\mu(x), the Hankel determinants admit an explicit expression in terms of the associated orthogonal polynomials \{p_k\}. Specifically, \Delta_n = \prod_{k=0}^{n-1} h_k, where h_k = \int p_k(x)^2 \, d\mu(x) denotes the squared L^2(\mu)-norm of the monic orthogonal polynomial p_k of degree k. This relation, known as the Hankel determinant formula, underscores the deep interplay between Hankel matrices and orthogonal polynomial theory, facilitating computations and asymptotic analyses for moment sequences. For positive definite sequences (those for which all leading Hankel matrices are positive definite, corresponding to positive measures \mu with compact support [-\rho, \rho]), the determinants exhibit a precise asymptotic behavior as n \to \infty. In particular, \log |\Delta_n| \sim n^2 \log \rho + n^2 \log(1/2) + o(n^2), where \rho is half the length of the support interval; the combination reflects the logarithmic capacity of [-\rho, \rho], which equals \rho/2. This quadratic growth captures the global scaling of the moment sequence dictated by the support's extent, with the leading term arising from the equilibrium measure in logarithmic potential theory. A notable invariance of the Hankel transform arises under the binomial transform of the sequence, defined by a_k = \sum_{j=0}^k \binom{k}{j} b_j. The determinants remain unchanged: \Delta_n(\{a_k\}) = \Delta_n(\{b_k\}). To see this, note that the binomial transform corresponds to the lower triangular Pascal matrix P with entries p_{ij} = \binom{i}{j} for i \geq j and 0 otherwise, whose inverse has entries (-1)^{i-j} \binom{i}{j}. Under the transform, the Hankel matrix becomes H_n' = P_n H_n P_n^T, where P_n is the n \times n leading principal submatrix of P; since \det(P_n) = 1, it follows that \det(H_n') = \det(H_n). This invariance extends to the full transform sequence and holds analogously for the inverse binomial transform.
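The binomial-transform invariance is simple to check numerically. This sketch (with a random integer sequence and the Pascal matrix realizing the transform) compares the first few Hankel determinants:

```python
import numpy as np
from math import comb

# b = P a with Pascal matrix P (p_{ij} = C(i, j)) is the binomial
# transform; H_n(b) = P_n H_n(a) P_n^T and det P_n = 1, so the
# Hankel determinants of a and b agree.
rng = np.random.default_rng(2)
a = rng.integers(-5, 6, size=9).astype(float)
N = len(a)
P = np.array([[comb(i, j) for j in range(N)] for i in range(N)], float)
b = P @ a

def delta(seq, n):
    return np.linalg.det(np.array([[seq[i + j] for j in range(n)]
                                   for i in range(n)]))

print([np.isclose(delta(a, n), delta(b, n)) for n in range(1, 5)])
```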

Approximations and Decompositions

Approximation Techniques

Approximation techniques for Hankel matrices and operators focus on bounding the error in finite-dimensional approximations of infinite or large structures, particularly in the spectral norm. Nehari's theorem provides a foundational bound for the spectral norm of a Hankel operator \Gamma_\phi associated with a symbol \phi \in L^\infty(\mathbb{T}), stating that \|\Gamma_\phi\| = \inf \{\|\phi - f\|_\infty : f \in H^\infty\}, where H^\infty is the space of bounded analytic functions on the unit disk. This equates the operator norm to the best uniform approximation of \phi by analytic functions. For finite sections \Gamma_n of \Gamma_\phi, which are n \times n Hankel matrices, the approximation error satisfies \|\Gamma_\phi - \Gamma_n\| \leq \inf \{\|\phi - q\|_\infty : q \text{ a polynomial of degree at most } n-1\}, leveraging the polynomial subspace of H^\infty to control the discrepancy between the infinite operator and its truncation. The Adamyan-Arov-Krein (AAK) theory extends this framework to optimal finite-rank approximations in the Hankel norm. For a compact Hankel operator \Gamma, the theory guarantees the existence of a rank-m Hankel operator \Gamma_m such that \|\Gamma - \Gamma_m\| = s_m(\Gamma), where s_m(\Gamma) is the m-th Hankel singular value, defined as the m-th singular value of \Gamma. Equivalently, for the symbol \phi, there exists a rational function \psi of McMillan degree at most m achieving \|\phi - \psi\|_\infty = s_m(\Gamma_\phi), making this the best uniform approximation by rationals of bounded degree. This optimal approximant is unique under generic conditions on the singular values and characterizes the Hankel singular values as the key quantities for error minimization. Greedy algorithms offer practical iterative methods for low-rank approximations of large Hankel matrices, particularly in structured settings like system identification. These algorithms proceed by successive rank-one updates, at each step selecting a rank-one term that maximally reduces the residual in the Frobenius or spectral norm while preserving the Hankel structure through affine constraints on the entries. For an initial low-rank Hankel matrix, the greedy step solves a weighted least-squares problem to find the optimal rank-one correction, followed by projection onto the manifold of Hankel matrices of fixed rank. Such methods are computationally efficient for high-dimensional problems, exploiting the structure of Hankel matrices to avoid full matrix factorizations. Error estimates for these approximations depend on the regularity of the symbol \phi. For analytic symbols in the Wiener class (functions with absolutely summable Fourier coefficients), the Schmidt pairs (singular vectors) of the truncated operator \Gamma_n converge to those of \Gamma_\phi in the \ell^2-norm at a rate governed by the decay of the coefficients, often exponentially fast due to the analyticity. These rates ensure that finite approximations capture the essential spectral behavior of the infinite Hankel operator as n increases.
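For finite matrices, the spectral-norm counterpart of this picture is the Eckart-Young theorem. The sketch below (a random Hankel section; note this truncation is not the structure-preserving AAK approximant itself) truncates the SVD and checks that the rank-m error equals the (m+1)-st singular value:

```python
import numpy as np
from scipy.linalg import hankel

# Best rank-m approximation of a Hankel section in the spectral norm:
# truncate the SVD; the error is s[m].  The truncation is generally
# not Hankel-structured, whereas AAK guarantees an equally good
# Hankel-structured approximant for the infinite operator.
rng = np.random.default_rng(3)
n, m = 8, 3
a = rng.standard_normal(2 * n - 1)
H = hankel(a[:n], a[n - 1:])
U, s, Vt = np.linalg.svd(H)
Hm = U[:, :m] @ np.diag(s[:m]) @ Vt[:m]
print(np.isclose(np.linalg.norm(H - Hm, 2), s[m]))  # True
```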

Decomposition Methods

Hankel matrices admit a singular value decomposition (SVD) in which the singular vectors often possess structured forms, such as Vandermonde-type components, especially when the matrix arises from sums of exponentials or has low rank. This structure enables fast algorithms that exploit the low displacement rank of Hankel matrices, at most 2, to reduce the SVD computation to operations on smaller unstructured matrices. A notable approach is the O(n² log n) algorithm by Luk and Qiao, which first applies a structure-preserving Lanczos method, using fast Hankel matrix-vector products, to tridiagonalize the matrix, and then computes the singular values using a divide-and-conquer strategy, avoiding the O(n³) cost of standard SVD algorithms. For low-rank Hankel matrices, rank-revealing QR (RRQR) factorizations provide an efficient way to identify the numerical rank by selecting a column permutation that separates the leading r singular values in the triangular factor. Structure-preserving variants maintain the displacement structure in the Q and R factors. This is particularly useful for revealing the effective rank in noisy or approximate low-rank settings, where the leading block of R approximates the SVD's leading singular values up to a small error. In the case of symmetric positive semidefinite Hankel matrices, the Takagi factorization provides a specialized decomposition H = U D Uᵀ, where U is unitary and D is diagonal with non-negative entries representing the singular values. This factorization, a symmetric variant of the SVD, preserves the centrosymmetric structure inherent to symmetric Hankel matrices and can be computed efficiently in O(n² log n) time using adapted symmetric tridiagonal eigensolvers that exploit the displacement rank. For positive semidefinite instances, the diagonal D directly gives the eigenvalues, and the columns of U correspond to structured eigenvectors suitable for applications like moment matching. Hankel matrices generated by exponential sums admit a Vandermonde decomposition H = Vᵀ D V, where V is a Vandermonde matrix whose nodes are the exponential frequencies, and D is a diagonal matrix of coefficients, revealing the low-rank structure through the number of distinct nodes. This decomposition can be obtained by solving an eigenvalue problem on a shifted submatrix of H, as the eigenvalues of the companion-like matrix formed from the Hankel structure yield the nodes for the Vandermonde factors, enabling exact fits for noise-free data. Such eigenvalue-based methods are foundational for parameter estimation in exponential models, with the rank of H equaling the number of exponentials.
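The eigenvalue route to the Vandermonde decomposition can be sketched for noise-free data as follows (a matrix-pencil illustration with nodes z and coefficients c assumed for the demo; the shifted Hankel matrix H1 carries the node information):

```python
import numpy as np
from scipy.linalg import hankel, eigvals

# For a_k = sum_r c_r z_r^k, H0 = V^T D V and H1 = V^T D Z V with
# Z = diag(z), so the pencil (H1, H0), compressed to the rank-r
# column/row spaces of H0, has the nodes as generalized eigenvalues.
z = np.array([0.9, 0.6, -0.4])         # assumed nodes for the demo
c = np.array([2.0, 1.0, 0.5])
K, r = 10, 3
a = np.array([np.sum(c * z**k) for k in range(2 * K)])
H0 = hankel(a[:K], a[K - 1:2 * K - 1])     # entries a_{i+j}
H1 = hankel(a[1:K + 1], a[K:2 * K])        # entries a_{i+j+1}
U, s, Vt = np.linalg.svd(H0)
Ur, Wr = U[:, :r], Vt[:r].T
nodes = eigvals(Ur.T @ H1 @ Wr, Ur.T @ H0 @ Wr)
print(np.sort(nodes.real))             # approx. [-0.4, 0.6, 0.9]
```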

Applications

Moment Problems

The Hamburger moment problem seeks to determine whether a sequence \{ m_k \}_{k=0}^\infty of real numbers can be represented as the moments m_k = \int_{-\infty}^\infty t^k \, d\mu(t) of some positive Borel measure \mu on the real line \mathbb{R}. This representation exists if and only if the Hankel matrices H_n(m), defined by (H_n(m))_{i,j} = m_{i+j} for i,j = 0, \dots, n, are positive semidefinite for all n \in \mathbb{N}_0. The positive semidefiniteness ensures the existence of an inner product under which the monomials form a pre-Hilbert space, allowing the measure to be recovered via the Riesz representation theorem. The Stieltjes moment problem extends this framework to measures supported on the non-negative half-line [0, \infty). For the sequence \{ m_k \} to correspond to such a measure, both the standard Hankel matrices H_n(m) and the shifted Hankel matrices H_n^{(1)}(m), with entries (H_n^{(1)}(m))_{i,j} = m_{i+j+1}, must be positive semidefinite for every n. This dual condition accounts for the restricted support, ensuring the moments align with a distribution supported on the non-negative axis, and distinguishes the problem from the Hamburger case by excluding mass on the negative half-line. Hankel matrices also underpin the method of moments for constructing orthogonal polynomials from a given moment sequence. When the matrices H_n(m) are positive definite, the Hankel determinants \Delta_n = \det H_n(m) yield the coefficients of the orthogonal polynomials via explicit formulas; for instance, the leading coefficient of the orthonormal polynomial of degree n is 1 / \sqrt{\Delta_n / \Delta_{n-1}}, and the three-term recurrence coefficients are derived from ratios of consecutive determinants. This technique, rooted in Favard's theorem, guarantees that polynomials satisfying a three-term recurrence with positive coefficients are orthogonal with respect to the measure defined by the moments precisely when the associated Hankel matrices are positive definite. Uniqueness of the representing measure in the Hamburger moment problem is addressed by Carleman's condition, a sufficient condition for determinacy: if \sum_{n=1}^\infty m_{2n}^{-1/(2n)} = \infty, where m_{2n} are the even moments, then the measure \mu is uniquely determined by the moments, as the polynomials are dense in the L^2(\mu) space. This divergence condition bounds the growth of the moments, preventing indeterminate cases where multiple measures share the same moment sequence.
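These positivity conditions are straightforward to test for a concrete sequence. The sketch below (using the moments m_k = k! of the unit-rate exponential distribution) checks both Hankel matrices:

```python
import numpy as np
from math import factorial
from scipy.linalg import hankel

# Stieltjes conditions for m_k = k! (moments of Exp(1) on [0, inf)):
# both (m_{i+j}) and the shifted (m_{i+j+1}) must be positive
# semidefinite; here both are in fact positive definite.
m = np.array([factorial(k) for k in range(10)], float)
n = 4
H  = hankel(m[:n + 1], m[n:2 * n + 1])       # (m_{i+j}),   i,j = 0..n
H1 = hankel(m[1:n + 2], m[n + 1:2 * n + 2])  # (m_{i+j+1})
print(np.all(np.linalg.eigvalsh(H) > 0),
      np.all(np.linalg.eigvalsh(H1) > 0))    # True True
```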

System Identification

In system identification, Hankel matrices play a central role in constructing state-space realizations of linear time-invariant (LTI) systems from input-output data, particularly through the Ho-Kalman algorithm. This method forms a Hankel matrix from the impulse-response sequence, or Markov parameters, of the system, where each ascending skew-diagonal contains identical entries representing the system's response at successive time lags. The singular value decomposition (SVD) of this Hankel matrix then reveals the minimal system order and enables the extraction of the state-space matrices A, B, C, and D by factoring the matrix into observability and controllability components, ensuring a balanced realization that minimizes the state dimension while capturing the essential dynamics. The algorithm assumes noiseless data and sufficient excitation, providing a canonical minimal realization that is unique up to similarity transformations. Subspace identification methods extend this framework to handle noisy data and multivariable systems by constructing block Hankel matrices from past and future input-output trajectories. In the N4SID (Numerical algorithms for Subspace State Space System IDentification) approach, the Hankel matrix of output data is combined with instrumental variables to project onto the orthogonal complement of noise, yielding estimates of the extended observability and controllability matrices via SVD; these are then used to recover the state-space model through least-squares fitting. Similarly, the MOESP (Multivariable Output-Error State sPace) method employs QR decomposition on the input-output Hankel blocks to isolate the deterministic subspace, separating noise effects and directly estimating the system matrices without explicit state-sequence computation. Both techniques leverage the structured low-rank property of the Hankel matrix to achieve consistent identification under mild persistence-of-excitation conditions. The singular values of the Hankel matrix provide a robust means for order estimation in these methods, as the number of significant singular values corresponds to the true system order, with subsequent values dropping sharply due to noise or redundancy. This spectral gap in the Hankel singular value spectrum indicates the effective order of the underlying dynamics, allowing truncation in the SVD to obtain a minimal-order model while discarding insignificant modes; for instance, in modal analysis applications, this approach identifies the number of dominant poles accurately even in the presence of measurement noise. Post-2010 developments have extended Hankel-based identification to nonlinear systems via kernel embeddings, where the nonlinear dynamics are lifted into a reproducing kernel Hilbert space (RKHS) to linearize the behavior, and Hankel matrices constructed from delay embeddings approximate the Koopman operator for data-driven realization. These methods, often combined with extended dynamic mode decomposition, form Hankel structures from nonlinear observables to estimate state dimensions and system trajectories, enabling subspace-like techniques for black-box nonlinear identification without explicit model assumptions. High-impact contributions include sparse regression approaches that enhance robustness to noise and dimensionality in chaotic systems.
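A toy version of the Ho-Kalman pipeline fits in a few lines. The sketch below (with an assumed two-state system used only to generate Markov parameters) recovers the order from the singular values and the A-matrix spectrum from the shift-invariance of the observability factor:

```python
import numpy as np
from scipy.linalg import hankel

# Ho-Kalman sketch: Markov parameters h_k = C A^k B fill a Hankel
# matrix H = Obs @ Con; the SVD gives the order (number of nonzero
# singular values) and a balanced observability factor, whose
# shift-invariance recovers A up to similarity.
A = np.array([[0.9, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(9)]

K = 5
H = hankel(h[:K], h[K - 1:2 * K - 1])        # H[i, j] = h_{i+j}
U, s, Vt = np.linalg.svd(H)
order = int(np.sum(s > 1e-8 * s[0]))         # 2
Obs = U[:, :order] * np.sqrt(s[:order])      # extended observability
Ahat = np.linalg.pinv(Obs[:-1]) @ Obs[1:]    # rows shifted by one step
print(order, np.sort(np.linalg.eigvals(Ahat).real))  # 2 [0.5 0.9]
```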

Signal Processing

In signal processing, Hankel matrices constructed from the autocorrelation functions of stationary signals play a key role in subspace identification methods, facilitating autoregressive (AR) modeling through extensions of the Yule-Walker equations that leverage the structured low-rank properties of these matrices for parameter estimation. This approach is particularly useful for analyzing wide-sense stationary processes, where the Hankel form captures temporal correlations efficiently, enabling robust estimation of AR coefficients even in noisy environments. A prominent application is signal decomposition via Cadzow's algorithm, which denoises signals by exploiting the low-rank structure inherent in Hankel matrices formed from the observed signal. The method iteratively alternates between computing a low-rank approximation using truncated singular value decomposition (SVD) and projecting back onto the set of Hankel matrices by averaging along each anti-diagonal, effectively separating signal components from noise while preserving the underlying dynamics. Originally proposed for signal enhancement, this technique has been widely adopted for applications such as seismic and spectroscopic data processing, where it achieves superior denoising compared to unstructured low-rank methods by enforcing the Hankel constraint. For instance, in magnetic resonance spectroscopic imaging, Cadzow-based denoising of Hankel matrices formed from the measured signals improves metabolite quantification by mitigating thermal noise artifacts. Prony's method represents another foundational use of Hankel matrices for fitting sums of damped exponentials to uniformly sampled signals, a common model in spectroscopy, audio, and biomedical applications. The algorithm constructs a Hankel matrix from consecutive signal samples, then identifies the signal's characteristic polynomial by finding the null space of this matrix or via eigenvalue decomposition, yielding the frequencies (or poles) from the polynomial's roots. Amplitudes are subsequently estimated by solving a linear least-squares system involving the Vandermonde matrix of the poles and the initial signal values. This process is highly sensitive to noise, prompting modern variants that incorporate truncated SVD on the Hankel matrix to enhance stability and accuracy in parameter recovery. In the 2020s, Hankel matrices have gained renewed prominence in data-driven modeling for forecasting and time-series analysis, particularly through the Hankel Alternative View of Koopman (HAVOK) framework, which transforms nonlinear time-delay embeddings into a Hankel matrix to approximate linear representations of chaotic dynamics for improved prediction. This framework decomposes intermittent or forced systems into linear forced models, enabling applications in chaos analysis and signal prediction. Complementing this, integrations with reservoir computing, such as the RC-HAVOK algorithm, reduce the dimensionality of Koopman embeddings while maintaining predictive fidelity for complex dynamics, outperforming traditional recurrent neural networks in tasks like chaotic time-series prediction.
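A noise-free Prony fit illustrates the pipeline end to end. The sketch below (with assumed poles and unit amplitudes, p = 2 modes) solves the Hankel linear-prediction system, roots the characteristic polynomial, and recovers amplitudes from a Vandermonde least-squares problem:

```python
import numpy as np
from scipy.linalg import hankel

# Prony sketch for y_t = sum_r c_r z_r^t: the prediction coefficients
# solve a Hankel system, the poles are roots of the characteristic
# polynomial, and amplitudes follow from a Vandermonde solve.
z_true = 0.95 * np.exp(np.array([0.3j, -0.3j]))
N, p = 20, 2
y = np.array([np.sum(z_true**t) for t in range(N)])   # unit amplitudes

H = hankel(y[:N - p], y[N - p - 1:N - 1])   # (N-p) x p, H[t, k] = y_{t+k}
q = np.linalg.lstsq(H, -y[p:N], rcond=None)[0]
poles = np.roots(np.concatenate(([1.0], q[::-1])))
V = np.vander(poles, N, increasing=True).T  # V[t, r] = poles_r**t
amps = np.linalg.lstsq(V, y, rcond=None)[0]
print(np.sort_complex(poles))               # approx. 0.95 e^{+/-0.3i}
print(np.round(amps.real, 3))               # [1. 1.]
```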
