
Higher-order singular value decomposition

The higher-order singular value decomposition (HOSVD), also referred to as the multilinear singular value decomposition, is a technique that generalizes the classical singular value decomposition (SVD) from matrices to higher-order tensors, representing a multi-dimensional array \mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N} (with N \geq 3) as \mathcal{A} = \mathcal{S} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)}, where \mathcal{S} is an all-orthogonal core tensor with ordered singular values along each mode, and \mathbf{U}^{(n)} are unitary factor matrices capturing the n-mode singular vectors. This decomposition reveals the multilinear rank of the tensor and enables low-rank approximation by truncating small singular values, with the approximation error bounded (in the Frobenius norm) by the square root of the sum of the squares of the discarded singular values across all modes. Developed as a multilinear extension of the Tucker decomposition (a model originating in psychometrics in the 1960s and later adopted in chemometrics), HOSVD was formalized in 2000 to address the limitations of matrix-based methods in handling multidimensional data structures prevalent in fields like signal processing and higher-order statistics. The core tensor \mathcal{S} ensures orthogonality among its frontal, horizontal, and lateral slices, with singular values \sigma_i^{(n)} = \|\mathcal{S}_{i_n = i}\| uniquely determined and decreasing in magnitude for each mode n, while the factor matrices are unique up to phase factors or unitary combinations when singular values are distinct. Computationally, HOSVD is obtained by performing a matrix SVD on each mode-n unfolding \mathbf{A}_{(n)} of the tensor to extract the leading singular vectors, followed by projecting \mathcal{A} onto these subspaces to form \mathcal{S}, making it efficient for large-scale tensors despite not always yielding the optimal low-multilinear-rank approximation. Truncated variants, such as the sequentially truncated HOSVD, further enhance efficiency by iteratively reducing ranks mode-by-mode, preserving key structural information. HOSVD finds broad applications in data compression, dimensionality reduction, and feature extraction across domains including image processing (e.g., handwritten digit recognition with over 99% compression ratios), bioinformatics for integrative analysis of multi-omics data, and quantum computing for tensor-based algorithms. Its ability to handle complex-valued tensors and symmetries, such as in pair-wise symmetric cases via eigenvalue decompositions, extends its utility to advanced signal analysis and pattern recognition tasks.

Introduction

Definition

A tensor is a multidimensional array that generalizes scalars, vectors, and matrices to higher dimensions, often denoted as an N-way tensor \mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}, where N \geq 3 represents the order or number of modes. The higher-order singular value decomposition (HOSVD), also known as the multilinear singular value decomposition, extends the classical singular value decomposition (SVD) of matrices to tensors by providing a multilinear decomposition that captures the higher-order structure through successive mode-wise decompositions. In HOSVD, an N-way tensor \mathcal{X} is decomposed into a core tensor \mathcal{S} and a set of orthogonal matrices U^{(n)} for each n = 1, \dots, N, obtained via the SVD of the mode-n unfolding (or matricization) \mathbf{X}_{(n)} of \mathcal{X}. The decomposition is expressed as \mathcal{X} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}, where \times_n denotes the mode-n multilinear product, and the core tensor \mathcal{S} exhibits all-orthogonality, meaning its mode-n unfoldings have mutually orthogonal rows. Unlike the matrix SVD, which factorizes a two-way array into unitary matrices and a diagonal matrix via a single decomposition, HOSVD employs multilinear products to preserve the tensor's multi-dimensional interactions, allowing for independent orthogonal bases per mode without reducing the core to a fully diagonal form. This structure enables a compact representation of multi-way data while highlighting dominant components along each dimension. HOSVD corresponds to a specific orthogonal variant of the more general Tucker decomposition, in which the factor matrices are constrained to be orthogonal.

Historical Background

The higher-order singular value decomposition (HOSVD) builds on early multi-way models for higher-dimensional data, notably Frank L. Hitchcock's 1927 expression of a tensor as a sum of products. The development advanced significantly in 1966 with Ledyard R. Tucker's introduction of the three-mode factor analysis model, which formalized a framework for decomposing tensors into a core tensor and factor matrices, known today as the Tucker decomposition. Tucker's approach generalized matrix factor analysis to multiple dimensions, enabling the analysis of multi-way data structures, but it allowed for non-orthogonal factors, focusing instead on capturing variance across modes. A pivotal clarification came in 2000 when Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle proposed the multilinear singular value decomposition, explicitly terming it the higher-order SVD (HOSVD) to distinguish its orthogonal variant from the general Tucker form. This formulation emphasized orthonormal factor matrices and an all-orthogonal core tensor, obtained through successive singular value decompositions of the tensor unfoldings, providing a more structured and computationally tractable decomposition for higher-order tensors. In 2002, M. Alex O. Vasilescu and Demetri Terzopoulos applied HOSVD in computer vision for multilinear subspace analysis of image ensembles, demonstrating its utility in handling multidimensional data like facial variations under lighting and pose changes. These contributions marked HOSVD's transition from a theoretical construct to a practical analysis tool.

Mathematical Foundations

Tensor Notation and Operations

Tensors of order N \geq 3 extend the concept of matrices to multidimensional arrays, denoted as \mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}, where I_n represents the dimension along the n-th mode for n = 1, \dots, N. The individual elements are accessed via indices x_{i_1 i_2 \cdots i_N}, with 1 \leq i_n \leq I_n. Fibers are one-dimensional structures obtained by fixing all indices except one; an n-mode fiber is the set of elements where all indices except i_n are held constant. Slices are lower-order subtensors formed by fixing one index, resulting in a matrix for order-3 tensors (e.g., the frontal slice \mathbf{X}_{::k} obtained by fixing the third index at k) or higher-dimensional arrays otherwise. The mode-n unfolding, or matricization, denoted \mathbf{X}_{(n)}, rearranges the tensor into a matrix of size I_n \times (\prod_{m \neq n} I_m) by placing the n-mode fibers as columns, facilitating matrix-based operations on tensors. This unfolding is central to applying the singular value decomposition (SVD) along each mode, where \mathbf{X}_{(n)} = \mathbf{U}^{(n)} \boldsymbol{\Sigma}^{(n)} (\mathbf{V}^{(n)})^H provides the leading left singular vectors \mathbf{U}^{(n)} and the singular values in \boldsymbol{\Sigma}^{(n)}. Multilinear multiplication generalizes matrix products to tensors via the mode-n product \mathcal{Y} = \mathcal{X} \times_n \mathbf{M}, where \mathbf{M} \in \mathbb{C}^{J \times I_n} is a matrix that transforms the n-th mode, yielding \mathcal{Y} \in \mathbb{C}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}. The elements are given by y_{i_1 \cdots i_{n-1} j i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 \cdots i_N} m_{j i_n}. The full N-mode multilinear product is \mathcal{Y} = \mathcal{X} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)}, sequentially applying mode-specific multiplications. Properties of these operations include dimension change along the affected mode (from I_n to the row dimension J of \mathbf{M}) and preservation of the Frobenius norm when each \mathbf{U}^{(n)} has orthonormal columns, which keeps the resulting \mathcal{Y} suitable for orthogonal decompositions. Tensor contractions, such as the inner product \langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1 \cdots i_N} x_{i_1 \cdots i_N} y_{i_1 \cdots i_N}, can be expressed using the Einstein summation convention, where repeated indices imply summation without explicit \sum symbols.
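
As a concrete illustration of these operations, the following is a minimal NumPy sketch of the mode-n unfolding and mode-n product (the helper names unfold, fold, and mode_n_product, and the ordering of the non-unfolded modes, are illustrative choices rather than a standard library interface):

import numpy as np

def unfold(X, n):
    # Mode-n unfolding: the mode-n fibers of X become columns of an I_n x (prod of other dims) matrix.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    # Inverse of unfold: rebuild a tensor with the given shape from its mode-n unfolding.
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_n_product(X, M, n):
    # Mode-n product X x_n M: every mode-n fiber of X is multiplied by the matrix M.
    shape = list(X.shape)
    shape[n] = M.shape[0]
    return fold(M @ unfold(X, n), n, shape)

# A 3 x 4 x 5 tensor multiplied in mode 2 by a 2 x 5 matrix yields a 3 x 4 x 2 tensor.
X = np.random.randn(3, 4, 5)
M = np.random.randn(2, 5)
print(mode_n_product(X, M, 2).shape)  # (3, 4, 2)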

Multilinear Rank

The multilinear rank of an Nth-order tensor \mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N} is defined as the tuple (r_1, \dots, r_N), where the mode-n rank r_n is the dimension of the vector space spanned by the mode-n fibers of \mathcal{X}, equivalently the rank of its mode-n unfolding \mathbf{X}_{(n)}. This generalization extends the classical matrix rank, which applies to second-order tensors, to higher dimensions by considering the linear dependence structure along each mode separately. Each mode-n rank satisfies r_n \leq I_n, the dimension of the nth mode, allowing the multilinear rank to vary across modes and potentially differ from the tensor's canonical (CP) rank, defined as the minimal number of rank-1 terms needed for exact representation. In the higher-order singular value decomposition (HOSVD), the multilinear rank determines the size of the core tensor \mathcal{S}, which in the compact form has dimensions r_1 \times \cdots \times r_N. The core tensor is all-orthogonal, meaning that for each mode n, the subtensors \mathcal{S}_{i_n = i} (obtained by fixing the n-th index to i) are pairwise orthogonal for distinct i, i.e., \langle \mathcal{S}_{i_n = i}, \mathcal{S}_{i_n = j} \rangle = 0 for i \neq j, with singular values \sigma_i^{(n)} = \|\mathcal{S}_{i_n = i}\|_F ordered decreasingly. Unlike the classical matrix rank, which flattens the tensor into a single matrix and captures only pairwise interactions, the multilinear rank preserves the multi-way structure by quantifying dependencies mode by mode, providing a tighter characterization for decompositions that respect the tensor's inherent dimensionality. For a matrix (order-2 tensor), the multilinear rank reduces to (r, r), where r is the standard matrix rank. In a third-order tensor, such as a set of images \mathcal{X} \in \mathbb{C}^{I \times J \times K}, the multilinear rank might be (r_1, r_2, r_3) with r_1 \leq I, r_2 \leq J, r_3 \leq K, reflecting separate ranks for rows, columns, and slices, which can reveal distinct factors like spatial patterns and variations across samples. This modal specificity enables HOSVD to truncate based on individual r_n for compression while maintaining interpretability.
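
The multilinear rank can be computed directly from the ranks of the unfoldings; a small NumPy sketch (multilinear_rank is an illustrative helper, not a library function):

import numpy as np

def multilinear_rank(X, tol=1e-10):
    # (r_1, ..., r_N): the matrix rank of each mode-n unfolding.
    ranks = []
    for n in range(X.ndim):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        ranks.append(int(np.linalg.matrix_rank(Xn, tol=tol)))
    return tuple(ranks)

# The outer product of three vectors has multilinear rank (1, 1, 1).
a, b, c = np.random.randn(4), np.random.randn(5), np.random.randn(6)
X = np.einsum('i,j,k->ijk', a, b, c)
print(multilinear_rank(X))  # (1, 1, 1)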

The HOSVD

Formal Definition

The higher-order singular value decomposition (HOSVD) of an Nth-order tensor \mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N} is formally defined as \mathcal{X} = \mathcal{S} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)}, where each matrix \mathbf{U}^{(n)} \in \mathbb{C}^{I_n \times I_n} is unitary and consists of the left singular vectors of the mode-n unfolding \mathbf{X}_{(n)}. The core tensor \mathcal{S} \in \mathbb{C}^{I_1 \times \cdots \times I_N} is then computed as \mathcal{S} = \mathcal{X} \times_1 (\mathbf{U}^{(1)})^H \times_2 (\mathbf{U}^{(2)})^H \cdots \times_N (\mathbf{U}^{(N)})^H, where ^H denotes the Hermitian (conjugate) transpose; for real-valued tensors, this is replaced by the ordinary transpose ^T, and the matrices \mathbf{U}^{(n)} are orthogonal rather than unitary. The core tensor \mathcal{S} possesses the all-orthogonality property: for each n, the subtensors obtained by fixing the nth index to different values (i.e., \mathcal{S}_{i_n = \alpha} and \mathcal{S}_{i_n = \beta} for \alpha \neq \beta) are orthogonal in the sense that their inner product is zero. Additionally, these subtensors are ordered such that \|\mathcal{S}_{i_n = 1}\| \geq \|\mathcal{S}_{i_n = 2}\| \geq \cdots \geq \|\mathcal{S}_{i_n = I_n}\| \geq 0, with the norms \sigma_i^{(n)} = \|\mathcal{S}_{i_n = i}\| serving as the n-mode singular values. The HOSVD satisfies an equivariance property with respect to unitary transformations: if each \mathbf{U}^{(n)} is replaced by \mathbf{U}^{(n)} \mathbf{Q}^{(n)} where \mathbf{Q}^{(n)} is unitary, the core tensor transforms to \mathcal{S} \times_n (\mathbf{Q}^{(n)})^H for each mode n, preserving the overall decomposition. This ensures that subtensors of \mathcal{S} correspond directly to projections of the original tensor \mathcal{X} via the factor matrices. The HOSVD can be computed via the following outline (applicable to both real and complex tensors):
  • For each mode n = 1 to N:
    Compute the singular value decomposition (SVD) of the mode-n unfolding \mathbf{X}_{(n)}.
    Extract all left singular vectors to form the unitary matrix \mathbf{U}^{(n)}.
  • Compute the core tensor \mathcal{S} using successive n-mode products of \mathcal{X} with the Hermitian transposes (or transposes for real tensors) of the \mathbf{U}^{(n)}.
This process requires N matrix SVDs, one per mode.
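
For concreteness, the following is a minimal NumPy sketch of this outline for real-valued tensors (the helper names unfold, mode_n_product, and hosvd are illustrative assumptions rather than a reference implementation); it computes the factor matrices, forms the core tensor, and verifies the exact reconstruction:

import numpy as np

def unfold(X, n):
    # Mode-n unfolding: mode-n fibers become columns of an I_n x (prod of other dims) matrix.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mode_n_product(X, M, n):
    # Mode-n product X x_n M, implemented with tensordot.
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)

def hosvd(X):
    # One SVD per mode gives the factor matrices ...
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0] for n in range(X.ndim)]
    # ... and projecting X onto these bases gives the core tensor.
    S = X
    for n, Un in enumerate(U):
        S = mode_n_product(S, Un.T, n)  # use the conjugate transpose for complex data
    return S, U

X = np.random.randn(3, 4, 5)
S, U = hosvd(X)
Xhat = S
for n, Un in enumerate(U):
    Xhat = mode_n_product(Xhat, Un, n)
print(np.allclose(X, Xhat))  # True: the decomposition reproduces X exactly

The economy-size SVD (full_matrices=False) suffices here; any additional singular vectors would only pad the core tensor with zero entries.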

Compact and Truncated Forms

The compact higher-order singular value decomposition (HOSVD) is a reduced-size variant that exploits the multilinear rank of the tensor to minimize redundancy. The multilinear rank is the tuple (r_1, \dots, r_N), where r_n = \operatorname{rank}(\mathbf{X}_{(n)}) is the rank of the mode-n unfolding \mathbf{X}_{(n)}. In this form, the decomposition retains only the r_n dominant left singular vectors for each mode n, yielding factor matrices \mathbf{U}^{(n)} \in \mathbb{C}^{I_n \times r_n} with orthonormal columns and a core tensor \mathcal{S} \in \mathbb{C}^{r_1 \times \cdots \times r_N} that captures the multilinear interactions. The original tensor is reconstructed via the N-mode tensor product: \mathcal{X} = \mathcal{S} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)}. The core tensor \mathcal{S} in the compact HOSVD is computed through successive mode products: \mathcal{S} = \mathcal{X} \times_1 (\mathbf{U}^{(1)})^H \times_2 (\mathbf{U}^{(2)})^H \cdots \times_N (\mathbf{U}^{(N)})^H. This process ensures \mathcal{S} is all-orthogonal. The singular values of the mode-n unfolding of \mathcal{X} are the norms of the corresponding subtensors of \mathcal{S}, and since the singular values beyond r_n are zero, the compact form provides an exact representation without loss of information. The truncated HOSVD extends this by selecting only the top R_n \leq r_n singular vectors per mode to obtain a low-multilinear-rank approximation \tilde{\mathcal{X}} = \tilde{\mathcal{S}} \times_1 \tilde{\mathbf{U}}^{(1)} \cdots \times_N \tilde{\mathbf{U}}^{(N)}, where \tilde{\mathbf{U}}^{(n)} \in \mathbb{C}^{I_n \times R_n} and \tilde{\mathcal{S}} \in \mathbb{C}^{R_1 \times \cdots \times R_N}. This yields \tilde{\mathcal{X}} as a multilinear rank-(R_1, \dots, R_N) tensor, providing a practical balance between accuracy and complexity. The approximation error in the Frobenius norm satisfies \|\mathcal{X} - \tilde{\mathcal{X}}\|_F \leq \sqrt{N} \min_{\mathcal{Y}} \|\mathcal{X} - \mathcal{Y}\|_F, where the minimum is over all tensors \mathcal{Y} of multilinear rank (R_1, \dots, R_N); this bound guarantees that the truncated HOSVD error does not exceed \sqrt{N} times the optimal low-rank error. In compression applications, the compact and truncated HOSVD significantly reduce storage needs compared to the full tensor. The full tensor \mathcal{X} requires \prod_{n=1}^N I_n entries, whereas the compact form stores \sum_{n=1}^N r_n I_n + \prod_{n=1}^N r_n parameters (factor matrices plus core), and truncation further lowers this to \sum_{n=1}^N R_n I_n + \prod_{n=1}^N R_n when R_n < r_n. This multilinear structure enables efficient representation of high-dimensional data, such as images or signals, with minimal loss of information.
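
The following NumPy sketch illustrates the truncated form and the resulting storage savings (truncated_hosvd and the chosen dimensions are illustrative assumptions):

import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding

def tmul(X, M, n):
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)  # mode-n product

def truncated_hosvd(X, ranks):
    # Keep only the leading R_n left singular vectors in each mode.
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :R]
         for n, R in enumerate(ranks)]
    S = X
    for n, Un in enumerate(U):
        S = tmul(S, Un.T, n)  # project onto the retained subspaces
    return S, U

X = np.random.randn(30, 40, 50)
S, U = truncated_hosvd(X, (5, 6, 7))
Xhat = S
for n, Un in enumerate(U):
    Xhat = tmul(Xhat, Un, n)  # multilinear rank-(5, 6, 7) approximation of X

stored = S.size + sum(Un.size for Un in U)   # 5*6*7 + 5*30 + 6*40 + 7*50 = 950 entries
print(stored, X.size)                        # 950 versus 60000
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # relative approximation error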

Properties

Interpretation

The higher-order singular value decomposition (HOSVD) provides a multilinear generalization of the matrix SVD, offering a framework to interpret multi-way data through orthogonal transformations along each mode. The core tensor \mathcal{S} in the HOSVD encodes the multilinear interactions among the modes of the original tensor, with its entries serving as generalized singular values that quantify the strength of these interactions. Unlike the diagonal structure in the matrix SVD, \mathcal{S} is all-orthogonal, meaning its frontal, horizontal, and lateral slices are pairwise orthogonal, which preserves the essential coupling between modes while facilitating a structured representation of the data's intrinsic multilinear structure. The factor matrices U^{(n)} for each mode n consist of orthonormal columns that act as principal components or basis vectors, capturing the directions of maximum variance in the unfolding of the tensor along that mode. These matrices reveal the dominant subspaces for mode-specific variations, analogous to the left and right singular vectors in the matrix SVD, but adapted to the tensor's multi-dimensional nature. The all-orthogonality property of the decomposition ensures decorrelation across different modes, similar to how principal component analysis identifies uncorrelated directions in multi-way data, thereby providing a clean separation of modal contributions without cross-mode interference. Geometrically, for a third-order tensor, the HOSVD can be visualized as a set of orthogonal subspaces (one per mode) intersecting at the core tensor \mathcal{S}, where the factor matrices transform these subspaces to reconstruct the original tensor via mode-n products. This intersection highlights how the core tensor "glues" the modal bases together, emphasizing the tensor's multilinear structure over purely pairwise interactions. From a statistical perspective, the singular values along mode n in \mathcal{S} measure the importance of variations in that mode, indicating the relative contribution of each principal direction to the overall tensor variance and guiding the selection of low-rank approximations.

Uniqueness and All-Orthogonality

The higher-order singular value decomposition (HOSVD) of an N-way tensor \mathcal{X} yields a core tensor \mathcal{S} that possesses the property of all-orthogonality across each mode. Specifically, for each n = 1, \dots, N, the subtensors of \mathcal{S} obtained by fixing the nth index to distinct values \alpha \neq \beta are orthogonal with respect to the standard tensor inner product. This all-orthogonality manifests in the mode-n unfolding \mathbf{S}_{(n)} of the core tensor, where the rows (or columns, depending on the matricization convention) are mutually orthogonal, and the squared Frobenius norms of these rows correspond to the squared nth-mode singular values \sigma_i^{(n)}. Consequently, the Gram matrix of the unfolding takes the form \mathbf{S}_{(n)} \mathbf{S}_{(n)}^T = \operatorname{diag}\bigl( (\sigma_1^{(n)})^2, \dots, (\sigma_{I_n}^{(n)})^2 \bigr), where I_n is the dimension of mode n, establishing a diagonal structure that generalizes the diagonal form of the singular value matrix in the standard matrix SVD. Regarding uniqueness, the nth-mode singular values \sigma_i^{(n)} of the HOSVD are uniquely determined for each mode, as they are the singular values of the mode-n unfolding of \mathcal{X}. The corresponding leading singular vectors forming the columns of \mathbf{U}^{(n)} are unique up to sign changes (for real-valued tensors, or phase factors in the complex case) when the singular values are distinct within the mode; if multiple singular values are equal, the subspace spanned by the associated singular vectors is unique, though the individual vectors within that subspace may form any orthonormal basis of it. This ensures that the HOSVD is essentially unique under the orthogonality constraints of the factor matrices. These properties arise from the construction of the HOSVD via successive matrix singular value decompositions of the tensor unfoldings, where the orthogonality of the factor matrices \mathbf{U}^{(n)} propagates multilinearly to the core tensor through the mode-n products, inheriting the uniqueness and orthogonality guarantees from the standard SVD.
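
These properties are easy to check numerically; in the following NumPy sketch (with illustrative helper names), the Gram matrix of each mode-n unfolding of the core is diagonal and its diagonal equals the squared mode-n singular values of the original tensor:

import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding

def tmul(X, M, n):
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)  # mode-n product

X = np.random.randn(3, 4, 5)
U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0] for n in range(X.ndim)]
S = X
for n, Un in enumerate(U):
    S = tmul(S, Un.T, n)  # HOSVD core tensor

for n in range(X.ndim):
    Sn = unfold(S, n)
    gram = Sn @ Sn.T                                  # all-orthogonality: diagonal Gram matrix
    sigma = np.linalg.svd(unfold(X, n), compute_uv=False)
    print(np.allclose(gram, np.diag(np.diag(gram))),  # off-diagonal entries vanish
          np.allclose(np.diag(gram), sigma**2))       # diagonal = squared singular values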

Computation Methods

Classical HOSVD Algorithm

The classical higher-order singular value decomposition (HOSVD) algorithm computes the decomposition of an Nth-order tensor \mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N} through a sequential process that involves singular value decompositions (SVDs) of the tensor's mode-n unfoldings, followed by multilinear products to obtain the core tensor. This method, introduced by De Lathauwer et al., produces orthogonal factor matrices U^{(n)} \in \mathbb{R}^{I_n \times I_n} for each mode n = 1, \dots, N, and an all-orthogonal core tensor \mathcal{S}, such that \mathcal{X} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}. The algorithm proceeds in two main phases. First, for each mode n, the tensor is unfolded into a matrix \mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times J_n}, where J_n = \prod_{k \neq n} I_k, and the SVD \mathbf{X}_{(n)} = U^{(n)} \Sigma^{(n)} (V^{(n)})^T is computed to extract the I_n left singular vectors forming U^{(n)}. Second, the core tensor is formed as \mathcal{S} = \mathcal{X} \times_1 (U^{(1)})^T \times_2 (U^{(2)})^T \cdots \times_N (U^{(N)})^T, which ensures the all-orthogonality property of \mathcal{S}. Pseudocode for the classical HOSVD is as follows:
Input: Tensor X of size I_1 × ... × I_N
Output: Factor matrices U^(1), ..., U^(N); core tensor S

for n = 1 to N do
    Compute unfolding X_(n)
    Compute SVD: X_(n) = U^(n) Σ^(n) (V^(n))^T
    Retain U^(n) (I_n × I_n orthogonal matrix)
end for

S = X ×_1 (U^(1))^T ×_2 (U^(2))^T ⋯ ×_N (U^(N))^T
This sequential loop relies on standard matrix SVD routines for each unfolding, assuming access to efficient linear algebra libraries. The computational complexity is dominated by the N SVD computations, each on a matrix of size I_n \times \prod_{k \neq n} I_k; since only the left singular vectors are required, each SVD costs O(I_n^2 \prod_{k \neq n} I_k) in the typical case where I_n \leq \prod_{k \neq n} I_k, so the total cost is O\left( \sum_{n=1}^N I_n^2 \prod_{k \neq n} I_k \right). For balanced dimensions where all I_n \approx I, this simplifies to O(N I^{N+1}), highlighting the exponential scaling in the tensor order N. The algorithm assumes full in-memory storage of the tensor \mathcal{X} and produces orthogonal (unitary in the complex case) factor matrices U^{(n)} spanning the column space of each unfolding. A key limitation is the lack of iterative optimization, making the decomposition quasi-optimal rather than the best low-multilinear-rank approximation in the Frobenius norm sense; improvements via iterative refinement, such as the higher-order orthogonal iteration, can address this.

Higher-Order Orthogonal Iteration

The higher-order orthogonal iteration (HOOI) serves as an iterative refinement of the higher-order singular value decomposition (HOSVD), aimed at obtaining a superior low-multilinear-rank approximation of a tensor by optimizing the fit in the least-squares sense. Unlike the one-pass nature of HOSVD, HOOI alternates between updating the orthogonal factor matrices and the core tensor until convergence, yielding a local optimum that generally outperforms the initial HOSVD approximation. This method is particularly effective for achieving better compression and representation in applications requiring precise low-rank tensor models. The HOOI algorithm proceeds by sequentially updating each mode-n factor matrix U^{(n)} for n = 1, \dots, N, where N is the tensor order, followed by an update to the core tensor \mathcal{G}. Specifically, at iteration k, the factor matrix U^{(n),k} is obtained as the leading R_n left singular vectors of the mode-n unfolding of the tensor \mathcal{Y}^{(n),k} = \mathcal{X} \times_{m \neq n} (U^{(m),k-1})^T, where \mathcal{X} is the input tensor and R_n denotes the target multilinear rank in mode n: U^{(n),k} = \text{leading } R_n \text{ left singular vectors of } \mathcal{Y}_{(n)}^{k} = \left( \mathcal{X} \times_{m \neq n} (U^{(m),k-1})^T \right)_{(n)}. After updating all factor matrices in a sweep, the core tensor is recomputed via the multilinear product \mathcal{G}^k = \mathcal{X} \times_1 (U^{(1),k})^T \times_2 (U^{(2),k})^T \cdots \times_N (U^{(N),k})^T. The process repeats over multiple sweeps, with each full cycle resembling an alternating least squares step tailored to enforce orthogonality in the factor matrices. HOOI typically converges in 5-10 iterations to a local optimum, providing a lower approximation error than the truncated HOSVD by iteratively minimizing the Frobenius-norm error. The stopping criterion is commonly set as the relative change in the Frobenius norm of the reconstructed tensor falling below a small threshold \epsilon, such as 10^{-6}. In relation to the Tucker decomposition, HOOI computes the orthogonal Tucker model, often initialized using the factor matrices from the classical HOSVD to accelerate convergence.
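
A minimal NumPy sketch of HOOI, initialized with truncated-HOSVD factors (the function name hooi, the stopping rule, and the sweep limit are illustrative choices):

import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding

def tmul(X, M, n):
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)  # mode-n product

def hooi(X, ranks, max_sweeps=50, tol=1e-6):
    # Initialize with the truncated HOSVD factor matrices.
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :R]
         for n, R in enumerate(ranks)]
    fit_prev = 0.0
    for _ in range(max_sweeps):
        for n in range(X.ndim):
            # Project onto all current subspaces except mode n ...
            Y = X
            for m in range(X.ndim):
                if m != n:
                    Y = tmul(Y, U[m].T, m)
            # ... and keep the leading R_n left singular vectors of the result.
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
        G = X
        for n in range(X.ndim):
            G = tmul(G, U[n].T, n)
        # For orthonormal factors, ||X - approximation||^2 = ||X||^2 - ||G||^2,
        # so the relative norm of the core tracks the fit.
        fit = np.linalg.norm(G) / np.linalg.norm(X)
        if abs(fit - fit_prev) < tol:
            break
        fit_prev = fit
    return G, U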

Efficient Variants

Efficient variants of the higher-order singular value decomposition (HOSVD) address the computational challenges of the classical algorithm, particularly for large-scale tensors, by optimizing memory usage and reducing data movement without requiring full tensor storage or excessive auxiliary space. These methods build on the core idea of the classical HOSVD, which involves successive singular value decompositions (SVDs) of tensor unfoldings followed by core tensor updates, but introduce overlaps and in-place operations to enhance practicality. One prominent approach is the interlacing computation, which overlaps the SVD steps with partial core tensor updates to minimize intermediate storage and input/output (I/O) overhead. In this method, after computing the SVD for a given mode, the resulting semi-orthogonal factor matrix is immediately applied to truncate the working tensor before the subsequent unfoldings, proceeding iteratively across modes until the target multilinear rank is achieved; the order of mode processing can influence the result but typically yields an approximation close to the standard HOSVD. This sequential refinement avoids materializing the full core tensor at once, making it suitable for memory-constrained environments or streaming settings where the tensor is processed mode-by-mode. For structured or sparse tensors, interlacing reduces computational complexity by focusing on reduced-dimensional unfoldings, achieving qualitative efficiency gains over non-interlaced variants in iterative tensor network applications. In-place HOSVD variants further optimize by overwriting the original tensor during computation, exemplified by the Fused In-place Sequentially Truncated HOSVD (FIST-HOSVD) algorithm for the sequentially truncated HOSVD (ST-HOSVD). FIST-HOSVD fuses the tensor-times-matrix (TTM) multiplication and the subsequent factor computation into a single kernel, using an in-place iterative scheme to update the tensor directly with the factor matrices, thereby eliminating the need for separate storage of intermediate results or of the full core tensor. This approach reduces auxiliary memory from O(∏ I_r) to O(I_max²), where I_r are reduced dimensions and I_max is the largest original dimension, enabling decomposition of tensors too large for standard methods. Experimental results on large datasets demonstrate memory savings up to 135-fold (e.g., 1.2 GB versus 161.9 GB for a 672×672×33 tensor), while maintaining comparable runtime to baseline ST-HOSVD (e.g., 105.7 seconds versus 125.2 seconds). Such variants are particularly effective for dense, high-dimensional tensors in scientific computing, though they assume access to the full tensor upfront and are less tailored for inherently sparse or streaming scenarios.
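
The sequentially truncated HOSVD that these variants build on can be sketched compactly in NumPy (st_hosvd is an illustrative name; this sketch is not the in-place FIST-HOSVD kernel itself):

import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding

def tmul(X, M, n):
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)  # mode-n product

def st_hosvd(X, ranks):
    # Sequentially truncated HOSVD: truncate the working tensor after each mode,
    # so later SVDs operate on progressively smaller unfoldings.
    U, G = [], X
    for n, R in enumerate(ranks):
        Un = np.linalg.svd(unfold(G, n), full_matrices=False)[0][:, :R]
        U.append(Un)
        G = tmul(G, Un.T, n)  # shrink mode n from I_n to R_n before moving on
    return G, U

X = np.random.randn(40, 50, 60)
G, U = st_hosvd(X, (8, 9, 10))
print(G.shape)  # (8, 9, 10): the truncated core tensor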

Approximation and Compression

Low-Rank Approximation

The low-multilinear-rank approximation of a higher-order tensor \mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N} is obtained via the truncated higher-order singular value decomposition (T-HOSVD), in which each factor matrix U^{(n)} from the full HOSVD is truncated by retaining only its leading R_n columns (R_n \leq I_n), where these columns correspond to the largest n-mode singular values. The reduced core tensor \tilde{\mathcal{S}} is then computed as the multilinear projection \tilde{\mathcal{S}} = \mathcal{X} \times_1 (\tilde{U}^{(1)})^T \times_2 \cdots \times_N (\tilde{U}^{(N)})^T, yielding the approximating tensor \tilde{\mathcal{X}} = \tilde{\mathcal{S}} \times_1 \tilde{U}^{(1)} \times_2 \cdots \times_N \tilde{U}^{(N)} of multilinear rank (R_1, \dots, R_N). This truncation enables substantial data compression by storing the original tensor's information in the compact form, reducing the total number of parameters from \prod_{n=1}^N I_n to \prod_{n=1}^N R_n + \sum_{n=1}^N R_n I_n; the reduction is especially pronounced when R_n \ll I_n for all modes, since the factor matrices dominate the storage cost only when the core is small relative to the original dimensions. In denoising applications, T-HOSVD preserves the underlying signal structure by projecting the tensor onto the dominant low-rank subspaces spanned by the leading singular vectors, while suppressing components aligned with the discarded smaller singular values. For instance, in video processing, a 3D tensor representation (with modes for height, width, and time) can be truncated separately along the spatial modes to retain fine details and along the temporal mode to capture motion patterns, thereby achieving both compression and denoising in the reconstructed sequence. The T-HOSVD approximation is quasi-optimal: it is generally not the best representation of the prescribed multilinear rank with orthogonal factor matrices, but its error stays within a modest factor of the optimal one, as quantified in the following error analysis.

Error Analysis

The approximation quality of the truncated higher-order singular value decomposition (HOSVD) is quantified primarily in the Frobenius norm, providing an upper bound on the reconstruction error relative to the original tensor \mathcal{X}. For a tensor \tilde{\mathcal{X}} obtained by truncating the HOSVD to multilinear ranks (R_1, \dots, R_N), the error satisfies \|\mathcal{X} - \tilde{\mathcal{X}}\|_F \leq \sqrt{\sum_{n=1}^N \sum_{i_n = R_n + 1}^{I_n} \bigl(\sigma_{i_n}^{(n)}\bigr)^2}, where \sigma_{i_n}^{(n)} denotes the i_n-th singular value of the mode-n unfolding \mathbf{X}_{(n)}. This bound arises from the orthogonal projections in each mode and ensures that the discarded energy in each unfolding contributes at most additively to the total squared error. The truncation is also quasi-optimal with respect to the best low-multilinear-rank approximation. Specifically, in the worst case, \|\mathcal{X} - \tilde{\mathcal{X}}\|_F \leq \sqrt{N} \inf_{\mathcal{Y}: \operatorname{rank}(\mathbf{Y}_{(n)}) \leq R_n \ \forall n} \|\mathcal{X} - \mathcal{Y}\|_F, where N is the tensor order and the infimum is over all tensors \mathcal{Y} with the prescribed mode-n ranks. The factor \sqrt{N} reflects the accumulation of suboptimality across modes, making the bound looser for higher-order tensors. The error is particularly sensitive to the decay rate of the mode-wise singular values: rapid decay in all modes yields small errors, as the discarded singular values \sigma_{i_n}^{(n)} diminish quickly, whereas slow decay amplifies the loss. Compared to the optimal low-multilinear-rank approximation (e.g., computed via the higher-order orthogonal iteration), the truncated HOSVD can exhibit larger errors in very low-rank settings, where the quasi-optimality factor \sqrt{N} degrades performance relative to the global minimum. In contrast, it often outperforms the CANDECOMP/PARAFAC (CP) decomposition when the data exhibits strong multi-way structure, as CP enforces a single rank shared across all modes and struggles with mode-specific variabilities. Empirically, on natural datasets such as images or hyperspectral data with rapidly decaying singular value spectra, the truncated HOSVD achieves near-optimal errors, frequently within 1-2% of the optimum.
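
The additive bound is straightforward to verify numerically; a small NumPy sketch (illustrative, using the same helper conventions as above):

import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding

def tmul(X, M, n):
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)  # mode-n product

X = np.random.randn(20, 25, 30)
ranks = (5, 6, 7)

U, discarded_energy = [], 0.0
for n, R in enumerate(ranks):
    Un, sigma, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
    U.append(Un[:, :R])
    discarded_energy += np.sum(sigma[R:] ** 2)  # squared singular values dropped in mode n

Xhat = X
for n, Un in enumerate(U):
    Xhat = tmul(Xhat, Un @ Un.T, n)  # orthogonal projection onto each retained subspace

err = np.linalg.norm(X - Xhat)
print(err <= np.sqrt(discarded_energy))  # the additive bound holds
print(err, np.sqrt(discarded_energy))    # typically with some slack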

Applications

Signal and Image Processing

In signal and image processing, higher-order singular value decomposition (HOSVD) has been instrumental in handling multi-dimensional data structures, such as those arising from image ensembles and video sequences, by decomposing them into core tensors and orthogonal factor matrices that capture inherent multi-way structures. A seminal application is in face recognition, where the TensorFaces method treats collections of facial images as higher-order tensors with modes for pixels, individuals, and variations like lighting or pose. By applying HOSVD to this tensor, the approach extracts multilinear subspaces that separate identity from extraneous factors, achieving superior recognition accuracy compared to traditional eigenfaces methods on datasets like the CMU database. This tensor-based representation enables robust handling of variability in real-world imaging conditions. For video compression, HOSVD facilitates the compression of space-time-volume tensors representing video data, where modes correspond to spatial dimensions and time, allowing truncation of smaller singular values to reduce storage requirements while preserving perceptual quality. In practice, this truncation of the HOSVD-derived Tucker model has been shown to outperform standard 2D codecs like JPEG2000 in multi-frame scenarios by exploiting temporal redundancies, without significant visual artifacts. Such techniques are particularly effective for streaming applications, as they maintain frame consistency across the tensor structure. In medical imaging, HOSVD aids in denoising for modalities like MRI and ultrasound by decomposing volumetric or spatio-temporal tensors into low-rank components that isolate signal from noise. For instance, in 3D MRI denoising, HOSVD-based filtering thresholds singular values in each mode to suppress Rician noise while retaining anatomical details, leading to noticeable improvements in peak signal-to-noise ratios in brain scans. A recent advancement applies HOSVD to nondestructive ultrasound molecular imaging, decomposing signals across spatial, temporal, and contrast dimensions to separate bound microbubbles from free ones, enabling real-time molecular targeting without destructive pulsing and enhancing contrast-to-noise ratios by an average of 12 dB in vascular models. HOSVD has also advanced gait recognition by processing multi-view video tensors to extract view-invariant motion features for human identification. In multi-view setups, gait sequences from multiple cameras form a fourth-order tensor (pixels × time × view × subject), and HOSVD decomposes it to isolate gait-style factors from viewpoint and pose variations, yielding recognition accuracies exceeding 90% on datasets like the USF HumanID Gait Database across 180° view changes. Low-rank approximations from this decomposition further support denoising of the input sequences, enhancing feature reliability. The key advantage of HOSVD in these domains lies in its ability to capture multi-way correlations, such as joint spatial, temporal, and attribute dependencies, that are overlooked by pairwise 2D matrix methods, leading to more efficient representations and improved performance in high-dimensional tasks.

Data Analysis and Machine Learning

Higher-order singular value decomposition (HOSVD) has emerged as a powerful tool for analyzing high-dimensional tensor-structured data in bioinformatics, enabling the integrative analysis of multi-omics datasets. In particular, HOSVD facilitates the decomposition of tensors representing genes across conditions and samples, such as gene expression data from multiple studies, to uncover shared patterns and reduce noise while preserving biological relevance. For instance, a seminal application in 2007 demonstrated HOSVD's utility in transforming data tensors of genes × conditions × samples to identify coregulated gene sets across diverse experimental settings, improving the detection of biologically significant modules compared to traditional two-way methods. This approach has been extended to multi-omics integration, where HOSVD extracts latent factors that capture interactions between genomic, transcriptomic, and proteomic layers, aiding in the discovery of disease-associated pathways. In recommender systems, HOSVD supports the modeling of user-item-context interactions through higher-order tensor factorization, enhancing personalized predictions by incorporating multifaceted context. By decomposing user × item × context tensors, HOSVD reveals latent factors that account for contextual influences like time or location, outperforming matrix-based collaborative filtering on sparse data sets. A key contribution in this domain is the use of HOSVD within N-dimensional tensor frameworks to approximate low-rank representations, enabling scalable recommendations in social tagging and related applications. This method preserves the multilinear structure of the data, leading to more accurate predictions, as evidenced by improved metrics in context-aware scenarios. HOSVD also underpins multi-way clustering techniques for extracting latent factors in biomedical applications, such as gene expression analysis and drug discovery. In multi-way clustering, HOSVD decomposes tensors of gene expression across tissues, individuals, and conditions to identify co-clusters that represent shared regulatory modules, facilitating the modeling of disease progression through temporal patterns. For drug discovery, HOSVD-based unsupervised feature extraction on integrated tensors (e.g., genes × cell lines × drug perturbations) has identified candidate therapeutics by highlighting drug-target-disease interactions, with demonstrated success in prioritizing compounds for experimental validation. These applications leverage HOSVD's ability to handle heterogeneous data modes, yielding interpretable clusters that inform hypothesis generation in precision medicine. Recent advancements include adaptive variants of HOSVD for imputing missing values in spatial-temporal datasets, crucial for real-world monitoring data where observations are often incomplete. Adaptive HOSVD employs dynamic thresholding during decomposition to reconstruct missing entries in tensors representing monitoring data (e.g., sensors × locations × time), achieving higher imputation accuracy than non-adaptive methods in scenarios like infrastructure health assessment. This has been applied to tunnel digital twin models, where adaptive HOSVD imputes spatial-temporal gaps with errors reduced by up to 20% compared to baseline tensor completion techniques. As of 2025, HOSVD has seen increasing adoption in federated tensor analysis, enabling privacy-preserving feature extraction in multi-institutional multi-omics studies and improving model robustness in distributed systems. Within machine learning pipelines, HOSVD serves as a preprocessing step for tensor-structured data fed into neural networks, performing dimensionality reduction analogous to a tensor-level principal component analysis (PCA).
By computing the multilinear rank to select dominant modes, HOSVD extracts compact features from high-order arrays, such as video or sensor data, before network training, which mitigates the curse of dimensionality and improves convergence. This tensor analogue of PCA has been integral to feature extraction for downstream learning tasks, where HOSVD-derived components enhance model performance on multi-modal datasets.

Variants and Extensions

Robust L1-Norm HOSVD

The standard higher-order singular value decomposition (HOSVD), relying on the Frobenius (L2) norm, is highly sensitive to outliers and impulsive noise in tensor data, as the squared error term amplifies the impact of large deviations. To address this limitation, the robust L1-norm HOSVD variant replaces the L2 norm with the L1 norm, which sums absolute deviations and inherently promotes sparsity while reducing the influence of outliers, drawing inspiration from robust principal component analysis methods. This approach is particularly valuable for real-world tensors contaminated by sparse corruptions, such as imaging or sensor data affected by gross errors. The L1-norm HOSVD, often denoted L1-HOSVD, formulates the low-rank tensor approximation problem as minimizing the L1 norm of the residual: \min_{\mathcal{S}, \{U^{(n)}\}} \left\| \mathcal{X} - \mathcal{S} \times_1 U^{(1)} \cdots \times_N U^{(N)} \right\|_1 subject to the constraints U^{(n)T} U^{(n)} = I_{r_n} for each mode n = 1, \dots, N, where \mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N} is the input tensor, \mathcal{S} \in \mathbb{R}^{r_1 \times \cdots \times r_N} is the core tensor, and r_n \leq I_n are the multilinear ranks. Equivalently, this can be approached by maximizing the L1 norm of the projected tensor onto the orthogonal subspaces, akin to L1-principal component analysis applied mode-wise. Solutions typically employ proximal optimization techniques, such as the augmented Lagrangian method, or iteratively reweighted schemes to handle the non-smooth L1 objective. The core algorithm for L1-HOSVD proceeds by alternating L1-SVD approximations across each mode: for mode n, it solves an L1-principal component problem on the mode-n unfolding of the tensor, yielding the leading r_n left singular vectors as U^{(n)}, while fixing the other factors. This mode-wise optimization is repeated until convergence, often initialized with standard HOSVD bases for efficiency. In practice, the core tensor \mathcal{S} is then computed via multilinear multiplication of \mathcal{X} with the obtained factors. This L1-HOSVD serves as an initialization for more refined methods like L1-norm higher-order orthogonal iterations (L1-HOOI), which iteratively update each U^{(n)} by projecting onto intermediate tensors derived from the other factors. Applications of L1-HOSVD center on robust tensor completion and reconstruction in the presence of contaminated data, such as denoising images or recovering missing entries in multispectral datasets with impulsive noise. For instance, in face recognition tasks using tensor representations, it effectively handles occlusions or corruptions by suppressing outlier effects during low-rank approximation. It also extends to classification problems, like digit recognition from noisy tensorized inputs, where the robust factors preserve essential structural information. In performance evaluations, L1-HOSVD demonstrates superior robustness compared to standard HOSVD, achieving lower mean normalized squared error (MNSE) in tasks with 10-20% corruption, for example MNSE below 0.1 versus over 2 for HOSVD when the outlier noise variance exceeds the signal level. In benchmarks on datasets like MNIST or AT&T faces with heavy-tailed noise (e.g., a 20% corruption rate), it yields accuracies around 89% versus 76% for the standard method, highlighting its effectiveness in outlier-prone scenarios without sacrificing performance on clean data. This variant aligns with broader L1-norm Tucker decompositions but emphasizes the HOSVD-style initialization for computational tractability.
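
The exact mode-wise L1-principal-component subproblems are combinatorial and are solved with specialized algorithms in the literature; purely as a rough illustration of the mode-wise robust idea (not the published L1-HOSVD algorithm), the following NumPy sketch replaces each L1 subproblem with an iteratively reweighted least-squares surrogate that down-weights fibers with large residuals (an L2,1-style heuristic):

import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding

def tmul(X, M, n):
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)  # mode-n product

def robust_mode_basis(Xn, R, n_iter=25, eps=1e-8):
    # IRLS surrogate: re-estimate an R-dimensional basis while down-weighting
    # columns (fibers) whose residual norm is large.
    U = np.linalg.svd(Xn, full_matrices=False)[0][:, :R]  # ordinary L2 initialization
    for _ in range(n_iter):
        resid = Xn - U @ (U.T @ Xn)
        w = 1.0 / np.maximum(np.linalg.norm(resid, axis=0), eps)
        U = np.linalg.svd(Xn * np.sqrt(w), full_matrices=False)[0][:, :R]
    return U

def robust_hosvd(X, ranks):
    # Mode-wise robust bases, then the usual multilinear projection for the core.
    U = [robust_mode_basis(unfold(X, n), R) for n, R in enumerate(ranks)]
    S = X
    for n, Un in enumerate(U):
        S = tmul(S, Un.T, n)
    return S, U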

Generalized and Quantum HOSVD

The higher-order generalized singular value decomposition (HO-GSVD) extends the classical generalized singular value decomposition to N ≥ 2 matrices with potentially different row dimensions, facilitating comparative analysis by identifying shared generalized singular subspaces across datasets. This decomposition factors a collection \{A_k\}_{k=1}^N of matrices as A_k = U_k \Sigma_k V^T, with a common right factor V shared across the datasets, to reveal common structural components, such as dominant modes in multi-omics or multi-sensor data. Originally proposed for full-rank cases, extensions handle rank-deficient matrices by modifying the subspace extraction to preserve essential ratios of generalized singular values. In the 2020s, quantum algorithms for higher-order singular value decomposition (Q-HOSVD) have emerged, utilizing quantum singular value decomposition techniques to offer potential exponential speedups over classical methods for decomposing large-scale tensors. These approaches prepare quantum states representing the tensor and apply quantum circuits to extract the core tensor (containing the tensor singular values) and the orthogonal factor matrices, enabling efficient processing of high-dimensional data intractable on classical hardware. Such quantum variants are particularly promising for applications in quantum machine learning where tensor representations model complex correlations. Other recent extensions include adaptive HOSVD frameworks tailored for tensor completion and imputation tasks, which dynamically adjust truncation thresholds during decomposition to reconstruct missing entries in spatial-temporal datasets, such as monitoring data for digital twins. Additionally, reduced HOSVD variants have been integrated into tensor network formats such as the tensor train and tensor ring, enabling scalable low-rank approximations by sequentially applying SVDs along tensor modes while maintaining the network topology for ultra-high-dimensional problems. These developments relate to non-orthogonal tensor decompositions like CANDECOMP/PARAFAC by allowing hybrid representations that trade orthogonality for sparsity in generalized subspaces, though such connections remain underexplored. A persistent challenge across these extensions is scaling to ultra-high dimensions, where exponential growth in storage and computation demands innovative randomized or hierarchical strategies.
