Multidimensional system
A multidimensional system is a mathematical framework in systems theory that models dynamic processes or signals as functions of multiple independent variables, such as time and space, in contrast to one-dimensional systems governed by a single variable like time alone.[1] These systems are prevalent in disciplines including signal processing, where they describe data structures like two-dimensional images or three-dimensional video, and control theory, where they handle spatially distributed or repetitive processes.[2][3] Key aspects of multidimensional systems include their representation using partial differential or difference equations, with linear shift-invariant systems characterized by multidimensional convolutions and Fourier transforms for analysis and filtering.[2] For instance, a two-dimensional signal x[n_1, n_2] can be processed via the convolution y[n_1, n_2] = \sum_{m_1} \sum_{m_2} x[m_1, m_2] h[n_1 - m_1, n_2 - m_2]; when the filter is separable, the operation count drops from roughly N^4 to 2N^3 for an N \times N signal and filter.[2]
In control applications, stability analysis is more involved because of the multi-variable structure, often requiring tools such as Gröbner bases or behavioral approaches for proper controller design.[1] The importance of multidimensional systems stems from their role in modern technologies, such as image reconstruction, array signal processing, and networked control systems, where handling high-dimensional data is essential for applications in healthcare, 3D imaging, and real-time multimedia processing.[1][4] Challenges include ensuring stability in higher dimensions and leveraging parallel computing architectures like VLSI for efficient implementation.[4] Ongoing research focuses on innovative synthesis techniques, such as factorization of lossless matrices, to advance filter design and system performance.[4]
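The gain from separability can be illustrated with a short numerical sketch (a minimal example assuming NumPy; the kernel h, built as the outer product of two 1D kernels h1 and h2, and all sizes are illustrative choices, not taken from the cited sources). Filtering rows and then columns with the 1D factors reproduces the full 2D convolution, and for an M x M kernel the separable route needs about 2M multiplications per output sample instead of M^2.

```python
import numpy as np

def conv2d_direct(x, h):
    """Direct 2D convolution y[n1,n2] = sum_m x[m1,m2] h[n1-m1, n2-m2] (full support)."""
    N1, N2 = x.shape
    M1, M2 = h.shape
    y = np.zeros((N1 + M1 - 1, N2 + M2 - 1))
    for m1 in range(N1):
        for m2 in range(N2):
            y[m1:m1 + M1, m2:m2 + M2] += x[m1, m2] * h
    return y

def conv2d_separable(x, h1, h2):
    """Separable filtering: 1D convolution along each row, then along each column."""
    rows = np.apply_along_axis(lambda r: np.convolve(r, h2), 1, x)
    return np.apply_along_axis(lambda c: np.convolve(c, h1), 0, rows)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
h1 = np.array([1.0, 2.0, 1.0]) / 4     # 1D kernel applied along columns
h2 = np.array([1.0, 2.0, 1.0]) / 4     # 1D kernel applied along rows
h = np.outer(h1, h2)                   # rank-1 (separable) 2D kernel
print(np.allclose(conv2d_direct(x, h), conv2d_separable(x, h1, h2)))  # True
```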
Fundamentals
Definition and Scope
A multidimensional system, or m-D system with m > 1, refers to a system whose inputs and outputs are functions of m independent variables, extending beyond the single-variable case of traditional one-dimensional systems. These variables typically represent spatial coordinates, time, or combinations thereof; for example, a two-dimensional (2D) discrete signal for an image might be denoted as x(i, j), where i and j are integer indices corresponding to row and column positions. In a three-dimensional (3D) case, such as video data, the signal could be x(i, j, k), incorporating two spatial dimensions and a temporal one.[5]
The scope of multidimensional systems includes both continuous and discrete formulations, with signals defined over domains like \mathbb{R}^m for continuous cases or \mathbb{Z}^m for discrete lattices. The theory predominantly emphasizes linear shift-invariant (LSI) systems in the discrete domain or linear time-invariant (LTI) systems in the continuous domain, where the output is obtained via multidimensional convolution with an impulse response. Nonlinear multidimensional systems exist as extensions but receive less theoretical attention due to their increased complexity.[6]
Distinctive characteristics of m-D systems arise from their higher-dimensional structure. The shift operators along different dimensions commute, yet they induce only a partial order for causality, unlike the total order in one dimension, which complicates recursive implementations. Additionally, the lack of an analogue to the fundamental theorem of algebra prevents straightforward factorization of multivariate polynomials into linear factors, resulting in specialized challenges for stability testing (e.g., ensuring no zeros in the unit polydisk) and realization procedures.[7]
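As a simple illustration of the factorization obstacle (an example chosen here, not taken from the cited sources), the bivariate polynomial p(z_1, z_2) = 2 z_1 z_2 - z_1 - z_2 admits no factorization into degree-one factors: writing p = (a z_1 + b z_2 + c)(d z_1 + e z_2 + f) forces ad = 0 and be = 0 to suppress the z_1^2 and z_2^2 terms, after which the remaining coefficient equations become inconsistent. Its zero set must therefore be treated as a surface in \mathbb{C}^2 rather than as a finite collection of roots, which is what makes multidimensional stability and realization questions qualitatively harder than their 1D counterparts.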
Historical Development
The study of multidimensional systems began in the 1960s as an extension of one-dimensional signal processing techniques to accommodate signals varying in multiple dimensions, particularly for image filtering applications. Early contributions focused on adapting recursive filtering methods to two-dimensional data, with Thomas S. Huang's work on digital picture processing marking a foundational step in handling spatial signals computationally.[8] This period saw the transition from analog to digital approaches, driven by advances in computing that enabled practical implementation of 2D filters for tasks like enhancement and restoration.[9]
Key milestones in the 1970s included the formalization of the two-dimensional z-transform, which provided essential tools for analyzing multidimensional signals in the z-domain, as explored in works like those by Huang and collaborators. Concurrently, state-space models were developed to describe the dynamics of multidimensional systems more comprehensively. Roesser's 1975 discrete state-space model generalized one-dimensional frameworks to two dimensions for linear image processing, introducing a local state representation partitioned along spatial axes.[10] Shortly thereafter, Fornasini and Marchesini proposed their 1976 state-space realization theory for two-dimensional filters, offering algebraic criteria for controllability and observability that extended to higher dimensions.
Influential publications in the early 1980s synthesized these developments, with the book Multidimensional Digital Signal Processing by Dan E. Dudgeon and Russell M. Mersereau serving as a comprehensive reference for theory, transforms, and filter design.[11] The 1990s shifted emphasis toward computational efficiency, leveraging emerging digital signal processing hardware to implement multidimensional algorithms in real-time applications like video processing. In the post-2000 era, multidimensional systems have integrated with array processing techniques and sparse representations, enhancing efficiency in handling high-dimensional data such as sensor arrays and volumetric images. For instance, shearlet systems have emerged as powerful tools for sparse approximations in multidimensional domains, improving compression and analysis in signal processing tasks. More recently, from the 2010s to 2025, advances have included the incorporation of deep learning techniques for multidimensional signal analysis, such as convolutional neural networks for enhanced image and video processing, and topological signal processing for modeling complex data structures.[12][13]
Applications
In Signal and Image Processing
In signal and image processing, multidimensional systems provide the theoretical foundation for handling signals indexed by multiple independent variables, such as spatial coordinates in images or spatiotemporal dimensions in videos, enabling efficient filtering and analysis of complex data structures. These systems extend one-dimensional (1D) signal processing techniques to higher dimensions, where operations like convolution and filtering must account for interactions across all indices to preserve signal integrity and avoid artifacts. For instance, two-dimensional (2D) finite impulse response (FIR) and infinite impulse response (IIR) filters are widely applied for image enhancement, edge detection, and restoration tasks, with FIR filters offering linear phase characteristics ideal for smoothing noise while preserving edges, and IIR filters providing computational efficiency through recursion for tasks like deblurring degraded images.[14][15]
A key technique in these applications is multidimensional convolution, which computes the output signal as a weighted sum over multiple dimensions, allowing filters to capture local patterns such as textures or gradients in images more effectively than separable 1D operations. In 2D recursive (IIR) filters, quarter-plane causality is enforced to ensure computational feasibility and stability: the filter's support is restricted to the first quadrant of the index plane (non-negative shifts in both directions), so each output sample depends only on samples that have already been computed. Extending to three dimensions, multidimensional systems are applied in video processing for tasks such as motion compensation and artifact reduction, exploiting correlations across spatial and temporal dimensions to achieve efficient compression.[16][17]
Post-2000 advancements have integrated multidimensional systems into deep learning frameworks, particularly convolutional neural networks (CNNs) for computer vision, where convolutional layers perform multidimensional convolutions to extract hierarchical features from images, enabling applications like object recognition with fewer parameters than fully connected networks. Similarly, multidimensional wavelet transforms facilitate multiresolution analysis by decomposing signals into subbands at varying scales and orientations, supporting efficient compression and denoising of images while capturing both low- and high-frequency components. In practical examples, satellite image processing employs 2D multidimensional filters for terrain analysis, enhancing multispectral data to delineate landforms and vegetation patterns for environmental monitoring. In biomedical imaging, 2D and 3D multidimensional models aid MRI reconstruction by iteratively applying filters to under-sampled k-space data, improving resolution and reducing acquisition time without compromising diagnostic accuracy.[18][19][20][21]
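A minimal sketch of a quarter-plane recursion (illustrative only; NumPy is assumed and the coefficients a10 and a01 are arbitrary choices): processing the samples in raster-scan order guarantees that the previously computed outputs y[n1-1, n2] and y[n1, n2-1] are available when y[n1, n2] is formed, which is exactly what the quarter-plane support restriction provides.

```python
import numpy as np

def quarter_plane_iir(x, a10=0.5, a01=0.3):
    """First-quadrant causal 2D IIR filter:
    y[n1, n2] = x[n1, n2] - a10*y[n1-1, n2] - a01*y[n1, n2-1],
    evaluated in raster-scan order so only already-computed outputs are used."""
    N1, N2 = x.shape
    y = np.zeros_like(x, dtype=float)
    for n1 in range(N1):            # row by row
        for n2 in range(N2):        # left to right within a row
            y[n1, n2] = x[n1, n2]
            if n1 > 0:
                y[n1, n2] -= a10 * y[n1 - 1, n2]
            if n2 > 0:
                y[n1, n2] -= a01 * y[n1, n2 - 1]
    return y

# Impulse input: the response spreads only into the first quadrant (n1, n2 >= 0).
x = np.zeros((6, 6)); x[0, 0] = 1.0
print(np.round(quarter_plane_iir(x), 4))
```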
In Control and Other Engineering Fields
In control theory, multidimensional systems are essential for modeling spatially distributed processes where dynamics evolve across multiple spatial dimensions, enabling the design of robust controllers for complex engineering applications. For instance, two-dimensional heat diffusion systems, governed by partial differential equations (PDEs), are approximated using finite difference methods to derive finite-order models suitable for process control, allowing for reduced-order representations that maintain accuracy in temperature regulation tasks.[22] Similarly, boundary control strategies for multidimensional heat equations in parallelepiped domains use Fourier methods and Volterra integral equations to achieve prescribed average temperatures, demonstrating the feasibility of admissible controls via Laplace transforms for industrial heating systems.[23] In radar engineering, multidimensional system models facilitate the control of array antennas, where two-dimensional phased arrays employ mode diversity in few-mode fibers to perform beamforming with true time delays, supporting precise scanning without beam squint in applications like surveillance radars.[24]
Extending to other fields, biomedical imaging leverages multidimensional models in X-ray tomography to characterize three-dimensional particle-discrete datasets, enabling quantitative analysis of size, shape, and density distributions in biological tissues through high-resolution micro-CT reconstructions.[25] In satellite communications, two-dimensional uniform planar arrays integrate hybrid beamforming techniques to generate directive beams, optimizing user scheduling and power allocation in multi-antenna systems.[26] Furthermore, PDE-based simulations model multidimensional fluid dynamics, such as two-dimensional Navier-Stokes equations, using physics-informed neural networks to predict turbulence with high fidelity across varying Reynolds numbers.[27]
Recent advancements in the 2020s highlight multidimensional control in autonomous vehicles, where three-dimensional sensor fusion combines LiDAR, camera, and radar data in bird's-eye-view representations to enhance object detection, achieving mean average precision scores up to 70.2 on benchmarks like nuScenes for real-time path planning and control.[28] In robotics, integrated dynamics models using dual quaternions govern multi-joint arms, with sliding mode controllers ensuring convergence of position and attitude errors under disturbances, applicable to space manipulation tasks.[29] A key challenge in these applications is handling non-separability in multidimensional feedback loops, where coprime factorizations and Nevanlinna-Pick interpolation address stabilization and H∞ control, overcoming complexities absent in one-dimensional systems.[30]
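As a rough sketch of the finite-difference approximation mentioned above (grid size, diffusivity, boundary handling, and step sizes are arbitrary illustrative choices, not taken from the cited work), an explicit update turns the 2D heat PDE into a finite-order difference model that a controller could act on:

```python
import numpy as np

def heat_step(T, alpha=0.1):
    """One explicit finite-difference update of the 2D heat equation
    dT/dt = alpha * (d^2T/dx^2 + d^2T/dy^2) on a uniform grid with dx = dy = dt = 1
    (normalized units); the boundary rows/columns are held fixed (Dirichlet)."""
    Tn = T.copy()
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2] - 4 * T[1:-1, 1:-1]
    )
    return Tn

# Hot spot in the middle of a cold plate; iterate the update a few times.
T = np.zeros((9, 9)); T[4, 4] = 100.0
for _ in range(20):
    T = heat_step(T)
print(np.round(T[3:6, 3:6], 2))   # heat has diffused to the neighboring cells
```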
Mathematical Models
Comparison to One-Dimensional Systems
One-dimensional (1D) systems exhibit sequential causality: the output at any time depends only on current and past inputs because the time axis imposes a total ordering. Multidimensional (m-D) systems, in contrast, lack a global temporal ordering, so causality is only partial and is defined relative to the coordinate system; for instance, a two-dimensional (2D) signal is causal if its support is confined to the quarter-plane where both indices are non-negative, allowing computation in a specific direction but without the unidirectional flow of 1D systems. This partial ordering complicates system behavior, since coordinate shifts or rotations can alter causality properties, unlike the invariant sequential nature of 1D systems.[7][31]
Recursion in 1D systems supports straightforward forward or backward filter implementations, with well-established stability tests like Jury's criterion. In m-D systems, however, recursion demands partitioning the support into recursive and non-recursive regions to mitigate instability, because the absence of a total order prevents simple sequential evaluation; for 2D recursive filters this often involves quarter-plane or fan-shaped frequency partitioning, increasing design complexity and requiring separate stabilization steps not needed in 1D. Seminal work highlighted these challenges by deriving simplified stability conditions for 2D recursive filters, showing that the poles must lie outside the unit bidisk, with no 1D-style single stability circle.[32][33]
The 1D z-transform is a single-variable function whose poles and zeros are isolated points, with regions of convergence (ROC) that are annuli and standard results such as the final value theorem (FVT). Multidimensional z-transforms extend this to multivariate forms, such as the 2D case X(z_1, z_2) = \sum_{n_1=-\infty}^{\infty} \sum_{n_2=-\infty}^{\infty} x(n_1, n_2) z_1^{-n_1} z_2^{-n_2}, where poles and zeros form surfaces in four-dimensional space rather than isolated points; this complicates analysis, there is no direct FVT equivalent, and intersecting pole and zero loci need not cancel as they do in 1D.[34][31]
Computationally, the 1D fast Fourier transform (FFT) achieves O(N \log N) complexity for an N-point signal, enabling efficient frequency-domain processing. For m-D signals, the multidimensional FFT separates into 1D transforms along each dimension, yielding O(d N^d \log N) complexity for a d-dimensional array with N points per dimension; the data volume grows exponentially with dimensionality, so separable approximations are often needed to manage the load. In imaging, for example, a 2D FFT of an N \times N image requires 2N length-N transforms (rows and then columns), roughly doubling the per-sample cost relative to a 1D sequence.[35]
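The row-column decomposition of the multidimensional FFT can be checked directly (a small NumPy sketch; the array size is arbitrary):

```python
import numpy as np

# The multidimensional DFT separates into 1D transforms along each axis:
# a length-N FFT applied to every row and then to every column of an N x N
# array reproduces fft2, at a cost of 2N transforms of length N.
x = np.random.default_rng(1).standard_normal((64, 64))
rows_then_cols = np.fft.fft(np.fft.fft(x, axis=1), axis=0)
print(np.allclose(rows_then_cols, np.fft.fft2(x)))   # True
```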
Linear State-Space Models
Linear state-space models provide a time-domain framework for representing linear shift-invariant (LSI) multidimensional systems, particularly useful for 2D cases where signals are indexed by two discrete variables (i, j). In these models, the state is often described using local state vectors that account for shifts in different directions, such as horizontal and vertical for 2D systems. This approach extends the classical 1D state-space representation by partitioning the state to reflect the multi-directional dependencies inherent in multidimensional signals.[10]
A foundational representation is the Roesser model, introduced for discrete 2D systems in the context of image processing. It employs two local state vectors: x^h(i,j) \in \mathbb{R}^{n_1} for the horizontal direction and x^v(i,j) \in \mathbb{R}^{n_2} for the vertical direction, capturing the propagation along each axis. The model equations are
\begin{bmatrix} x^h(i+1,j) \\ x^v(i,j+1) \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} u(i,j)
y(i,j) = \begin{bmatrix} C_1 & C_2 \end{bmatrix} \begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix} + D u(i,j)
where u(i,j) is the input, y(i,j) is the output, and the matrices have appropriate dimensions. Boundary conditions are specified as x^h(0,j) = f_1(j) and x^v(i,0) = f_2(i) for i,j \geq 0, enabling the handling of initial states on the axes.[10][36]
An alternative formulation is the Fornasini-Marchesini model, which uses a single state vector x(i,j) \in \mathbb{R}^n and combines shifts from both directions in a coupled manner. The second-order version, widely adopted for its structural properties, is defined by
x(i+1,j+1) = A_1 x(i,j+1) + A_2 x(i+1,j) + B u(i,j)
y(i,j) = C x(i,j) + D u(i,j)
with boundary conditions x(i,0) = f_1(i) and x(0,j) = f_2(j). This model realizes a broader class of 2D filters and allows algebraic analysis of controllability and observability.[37][36]
Extensions to n-dimensional (nD) systems generalize these models using tensor notations, where the state is partitioned into n components corresponding to each index direction. For the Roesser-type nD model, the update involves an n-partitioned state vector and a block-structured transition matrix, facilitating analysis in higher dimensions like 3D signal processing. The Fornasini-Marchesini nD variant similarly couples updates across multiple indices, though realizations grow complex with dimensionality. These generalizations maintain the core state-space structure but are primarily conceptual, with practical use limited by computational demands.[38]
State-space models like Roesser and Fornasini-Marchesini offer advantages in handling boundary and initial conditions explicitly, which is crucial for multidimensional systems where signals may have non-zero starts along multiple boundaries. They also enable computability through iterative matrix operations, supporting simulations and control design in applications such as filtering and stability analysis. These representations relate to frequency-domain tools via z-transforms but emphasize time-domain dynamics.[36]
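A minimal simulation sketch of the Roesser update with zero boundary conditions (the matrix values, grid size, and function name are illustrative assumptions, not taken from the cited models):

```python
import numpy as np

def roesser_response(A11, A12, A21, A22, B1, B2, C1, C2, D, u, N1, N2):
    """Iterate a 2D Roesser model over a rectangular grid with zero boundary
    conditions x^h(0, j) = 0 and x^v(i, 0) = 0."""
    n1, n2 = A11.shape[0], A22.shape[0]
    xh = np.zeros((N1 + 1, N2, n1))   # horizontal states x^h(i, j)
    xv = np.zeros((N1, N2 + 1, n2))   # vertical states   x^v(i, j)
    y = np.zeros((N1, N2))
    for i in range(N1):
        for j in range(N2):
            y[i, j] = C1 @ xh[i, j] + C2 @ xv[i, j] + D * u[i, j]
            xh[i + 1, j] = A11 @ xh[i, j] + A12 @ xv[i, j] + B1 * u[i, j]
            xv[i, j + 1] = A21 @ xh[i, j] + A22 @ xv[i, j] + B2 * u[i, j]
    return y

# Scalar horizontal and vertical states (n1 = n2 = 1) with illustrative values.
A11, A12, A21, A22 = np.array([[0.5]]), np.array([[0.2]]), np.array([[0.1]]), np.array([[0.4]])
B1, B2 = np.array([1.0]), np.array([1.0])
C1, C2, D = np.array([1.0]), np.array([1.0]), 0.0
u = np.zeros((5, 5)); u[0, 0] = 1.0            # unit impulse input
print(np.round(roesser_response(A11, A12, A21, A22, B1, B2, C1, C2, D, u, 5, 5), 3))
```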
Multidimensional Transfer Functions
In multidimensional systems, the transfer function provides a frequency-domain representation of the system's input-output relationship, generalizing the one-dimensional z-transform to multiple variables. For a two-dimensional (2D) linear shift-invariant system, the output Y(z_1, z_2) relates to the input X(z_1, z_2) via the transfer function T(z_1, z_2) as Y(z_1, z_2) = T(z_1, z_2) X(z_1, z_2), where the multivariate z-transform is applied.[39][40]
For causal quarter-plane filters, common in 2D digital signal processing, the transfer function takes the rational form
T(z_1, z_2) = \frac{\sum_{p=0}^{M} \sum_{q=0}^{N} b_{p,q} z_1^{-p} z_2^{-q}}{1 + \sum_{p=0}^{M} \sum_{q=0}^{N} a_{p,q} z_1^{-p} z_2^{-q}},
with a_{0,0} = 0, where the numerator carries the moving-average (feedforward) terms and the denominator the autoregressive (feedback) terms, and the coefficients a_{p,q} and b_{p,q} define the filter order. This structure ensures causality by restricting the support to the quarter-plane where both indices are non-negative.[39][41]
The transfer function can also be derived from state-space realizations, such as the Roesser model, which partitions the state into horizontal and vertical components. For this model, and assuming no direct feedthrough term, the transfer function is
T(z_1, z_2) = C \begin{bmatrix} z_1 I_{n_1} - A_{11} & -A_{12} \\ -A_{21} & z_2 I_{n_2} - A_{22} \end{bmatrix}^{-1} \begin{bmatrix} B_1 \\ B_2 \end{bmatrix},
where A_{ij}, B_i, and C are the system matrices and n_1, n_2 are the state dimensions. This expression links time-domain state evolution to frequency-domain behavior.[41][42]
In the n-dimensional (nD) case, the transfer function generalizes to a ratio of multivariate polynomials in z_1, \dots, z_n, often expressed as T(z_1, \dots, z_n) = \frac{N(z_1, \dots, z_n)}{d(z_1, \dots, z_n)}, where N and d are polynomials derived from state-space models like Fornasini-Marchesini. Computationally, separability assumptions, in which the polynomials factor into products of lower-dimensional terms, are frequently imposed to reduce complexity, enabling efficient realization via one-dimensional techniques.[43][40]
Key properties of multidimensional transfer functions include the absence of a simple zero-location test for stability, unlike the one-dimensional Jury test, necessitating analogs of the bounded-real lemma that involve solving linear matrix inequalities or frequency-domain conditions on the unit polydisk. These properties arise from the coupled nature of the multiple variables, which complicates pole-zero analysis.[43][44]
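The state-space formula above can be evaluated numerically at points on the unit bicircle to obtain the frequency response (an illustrative NumPy sketch assuming scalar horizontal and vertical states and no feedthrough; the matrix entries and the helper name roesser_tf are arbitrary):

```python
import numpy as np

def roesser_tf(A, B, C, n1, n2, z1, z2):
    """Evaluate T(z1, z2) = C * (diag(z1*I_{n1}, z2*I_{n2}) - A)^{-1} * B
    for a Roesser model with no direct feedthrough (D = 0)."""
    Z = np.diag(np.concatenate([np.full(n1, z1), np.full(n2, z2)]).astype(complex))
    return (C @ np.linalg.inv(Z - A) @ B).item()

# One state per direction (n1 = n2 = 1) with illustrative entries.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
# Frequency response on the unit bicircle: z_k = e^{j w_k}.
w1, w2 = 0.3, 1.1
print(roesser_tf(A, B, C, 1, 1, np.exp(1j * w1), np.exp(1j * w2)))
```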
Realization and Implementation
Structures for 2D Transfer Functions
The realization of two-dimensional (2D) transfer functions involves converting the rational polynomial expression into a state-space model, followed by transformation into practical hardware or software structures such as direct-form or cascade realizations. A common approach starts with the Roesser state-space model, which partitions the state vector into horizontal and vertical components to model recursive dependencies in both dimensions.[45] From this state-space form, direct realizations can be derived by mapping the system matrices to delay elements and adders, while cascade structures decompose the overall transfer function into interconnected first-order sections for modularity.[46] Partial fraction expansion, a standard simplification technique in one dimension, faces significant challenges in 2D because bivariate polynomials generally do not factor into first-order terms, so approximations or iterative methods are often required to obtain decomposable forms.[47]
Common structures for implementing 2D transfer functions leverage the Roesser model for direct realization, incorporating delay elements that operate independently in the horizontal and vertical directions to handle quarter-plane causality. This model maps efficiently to digital hardware by representing shifts along each axis with unit delays, enabling parallel processing of the state updates.[48] To mitigate computational complexity, especially for higher-order systems, separable approximations are employed, in which the 2D transfer function is approximated as a product of one-dimensional factors in the horizontal and vertical variables, reducing the number of multipliers and delays required.[49] These approximations preserve key frequency-response characteristics while simplifying synthesis for real-time applications.[50]
For infinite impulse response (IIR) structures derived from all-pole transfer functions, feedback loops are configured separately for horizontal and vertical recursions, using the Roesser model's partitioned state transitions to enforce stability and causality without cross-dimensional coupling in the feedback paths.[51] In general cases with both poles and zeros, the structure extends to include feedforward paths that incorporate the numerator terms, often realized through parallel combinations of all-pole sections to handle non-minimum-phase behavior.[52] These IIR configurations are particularly suited to applications requiring sharp frequency selectivity, as the directional feedback minimizes round-off noise compared to fully coupled alternatives.[53]
Computational realizations of 2D transfer functions frequently work in the frequency domain via the two-dimensional discrete Fourier transform (2D DFT): the input signal is transformed, multiplied pointwise by the filter's frequency response (evaluated from the transfer function), and inverse-transformed to yield the output.[54] This approach is efficient for block-based processing, as fast algorithms like the 2D FFT reduce the complexity from O(N^4) to O(N^2 \log N) for an N \times N image.[55] For real-time hardware implementation, field-programmable gate arrays (FPGAs) are widely adopted due to their reconfigurability and parallelism, allowing pipelined execution of delay and multiplication operations across multiple processing elements to handle high-throughput 2D filtering tasks such as image enhancement.[56]
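A compact sketch of the DFT-based realization described above (NumPy assumed; the image and kernel sizes are arbitrary). Zero-padding both arrays to the full linear-convolution size makes the pointwise product in the frequency domain match direct 2D convolution:

```python
import numpy as np

def fft_filter(x, h):
    """Frequency-domain 2D FIR filtering: zero-pad the image and the impulse
    response to the full linear-convolution size, multiply the 2D DFTs
    pointwise, and invert (equivalent to direct 2D convolution)."""
    s1 = x.shape[0] + h.shape[0] - 1
    s2 = x.shape[1] + h.shape[1] - 1
    Y = np.fft.fft2(x, s=(s1, s2)) * np.fft.fft2(h, s=(s1, s2))
    return np.real(np.fft.ifft2(Y))

rng = np.random.default_rng(2)
x = rng.standard_normal((32, 32))
h = np.ones((3, 3)) / 9.0                     # 3x3 moving-average kernel
y = fft_filter(x, h)
print(y.shape)                                # (34, 34): full convolution support
```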
Finite and Infinite Impulse Response Examples
Finite impulse response (FIR) filters in multidimensional systems, particularly two-dimensional (2D) cases, have transfer functions that are polynomials in the inverse variables z_1^{-1} and z_2^{-1}, representing all-zero structures with no poles except at the origin. A general 2D FIR transfer function takes the form T(z_1, z_2) = \sum_{p=0}^{M} \sum_{q=0}^{N} b_{p,q} z_1^{-p} z_2^{-q}, where b_{p,q} are the coefficients of the impulse response h(i,j), which has finite support limited to 0 \leq i \leq M and 0 \leq j \leq N for quarter-plane causality.[57] This non-recursive nature allows a straightforward state-space realization using the Roesser model, where the system matrix A = 0, the input matrix B incorporates identity components for delay lines in the horizontal and vertical directions, and the output is a weighted sum of past inputs stored in the state vector. Such realizations avoid feedback, ensuring inherent stability.[58]
A numerical example of a simple 2D low-pass FIR filter uses a 2x2 coefficient matrix to approximate averaging over neighboring samples. The transfer function is T(z_1, z_2) = \frac{1}{4} (1 + z_1^{-1} + z_2^{-1} + z_1^{-1} z_2^{-1}), with coefficients b_{0,0} = b_{1,0} = b_{0,1} = b_{1,1} = 0.25. The impulse response h(i,j) is nonzero only for (i,j) = (0,0), (1,0), (0,1), (1,1), producing a smoothing effect in image-processing applications. In state-space form via the Roesser model, the horizontal and vertical states capture one-step delays, with A = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, B = \begin{bmatrix} 1 \\ 1 \end{bmatrix} (adjusted for input scaling), and an output matrix C selecting the weighted contributions, resulting in a minimal order of 2.[39][58]
In contrast, infinite impulse response (IIR) filters in 2D systems can have all-pole transfer functions, enabling recursive computation but introducing potential instability. A first-order all-pole example is T(z_1, z_2) = \frac{1}{1 + a_{1,0} z_1^{-1} + a_{0,1} z_2^{-1}}, where the coefficients a_{1,0} and a_{0,1} control attenuation in the horizontal and vertical directions, respectively, and the impulse response h(i,j) extends infinitely in both indices. This can be realized using the first-order Roesser model, partitioning the state into horizontal (x^h) and vertical (x^v) components to handle the coupled delays and incorporating feedback terms that reflect the recursive dependencies; for instance, the off-diagonal elements A_{12} and A_{21} capture the cross-coupling between directions, linking the vertical state to the horizontal update and vice versa.[58]
FIR realizations are stable by design due to the absence of feedback loops and the finite-duration response, though they require higher orders to achieve sharp frequency selectivity, increasing computational demands. IIR structures, while more efficient with lower orders for equivalent performance, risk instability if the poles lie outside the unit polydisk, necessitating careful coefficient selection to satisfy BIBO stability criteria.[57]
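The 2x2 averaging example can be checked numerically (an illustrative NumPy sketch; the helper freq_response is a name introduced here): evaluating T(e^{j\omega_1}, e^{j\omega_2}) confirms unit gain at DC and a null at the highest frequency, and the finite impulse response is simply the coefficient array itself.

```python
import numpy as np

# Coefficients of the 2x2 averaging FIR example:
# T(z1, z2) = 1/4 * (1 + z1^-1 + z2^-1 + z1^-1 z2^-1) = 1/4 * (1 + z1^-1)(1 + z2^-1)
b = 0.25 * np.ones((2, 2))            # b[p, q] multiplies z1^-p z2^-q

def freq_response(b, w1, w2):
    """Evaluate T(e^{j w1}, e^{j w2}) for an FIR coefficient array b."""
    P, Q = b.shape
    p = np.arange(P)[:, None]
    q = np.arange(Q)[None, :]
    return np.sum(b * np.exp(-1j * (w1 * p + w2 * q)))

print(abs(freq_response(b, 0.0, 0.0)))        # 1.0 -> unit gain at DC (smoothing)
print(abs(freq_response(b, np.pi, np.pi)))    # ~0  -> null at the highest frequency
print(b)                                      # finite impulse response = coefficient array
```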
Analysis and Challenges
Stability Criteria
In multidimensional systems, bounded-input bounded-output (BIBO) stability requires that every bounded input produces a bounded output, which for two-dimensional (2D) discrete systems is equivalent to the impulse response h(i,j) being absolutely summable over the quarter-plane, i.e., \sum_{i=0}^\infty \sum_{j=0}^\infty |h(i,j)| < \infty.[59] Unlike one-dimensional systems, where the Jury test provides a straightforward algebraic check, no comparably simple test exists for 2D BIBO stability, necessitating more complex frequency-domain or state-space approaches.[59]
Frequency-domain criteria for stability often require that the denominator polynomial of the transfer function have no zeros inside or on the unit polydisk. For 2D polynomials, extensions of the Schur-Cohn test use functional Schur coefficients derived from slice functions along the unit torus, where stability holds if all such coefficients satisfy |\gamma_k(\omega)| \leq 1 for k = 1 up to the degree and all \omega on the unit circle.[60] In state-space models, bounded-real lemma conditions connect stability analysis with performance bounds: an H_\infty-norm bound \|T\|_\infty < 1 is certified through lossless bounded-real conditions involving positive definite solutions of Lyapunov-like inequalities, adapted to the 2D Roesser or Fornasini-Marchesini models.
For higher-dimensional (nD) systems, stability tests build recursively on lower-dimensional checks, addressing the exponential increase in complexity. For instance, 3D stability can be assessed by treating the system as a family of 2D systems parameterized by the third variable and verifying 2D stability conditions over the unit circle in that variable.[61] The following table summarizes key nD stability tests; a simple numerical screen for the 2D case is sketched after the table.
| Dimension | Test Name | Description | Reference |
|---|---|---|---|
| 2D | Huang Test | Checks that the denominator polynomial has no zeros in the closed unit bidisk (both variables of magnitude at most one), reducing the 2D condition to a 1D test in one variable plus a test with the other variable on the unit circle. | |
| 2D | Bistritz Tabular Form | Constructs a Jury-like table from polynomial coefficients to test stability without boundary zeros. | [62] |
| 3D | Recursive 2D Slicing | Fixes one variable on the unit circle and applies 2D tests recursively to resulting bivariate polynomials. | [61] |
| nD | Anderson-Jury Extension | Generalizes to nD by ensuring no zeros in the unit polydisk via iterative 1D stability on hyperplanes. | [59] |
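As a practical complement to these tests, the necessary condition that the denominator have no zeros on the unit bicircle can be screened numerically. The sketch below (NumPy assumed, using the first-order denominator from the examples above with arbitrary coefficients) performs a grid search; it can flag trouble but is not a conclusive proof of stability.

```python
import numpy as np

def min_denominator_on_torus(a10, a01, K=256):
    """Grid search of |A(e^{j w1}, e^{j w2})| for the first-order denominator
    A(z1, z2) = 1 + a10*z1^{-1} + a01*z2^{-1}. A minimum near zero flags a zero
    on (or close to) the unit bicircle; this screens a necessary condition only."""
    w = 2 * np.pi * np.arange(K) / K
    E1 = np.exp(-1j * w)[:, None]      # e^{-j w1} varies along rows
    E2 = np.exp(-1j * w)[None, :]      # e^{-j w2} varies along columns
    return np.min(np.abs(1 + a10 * E1 + a01 * E2))

# |a10| + |a01| < 1 keeps the impulse response absolutely summable (stable case):
print(min_denominator_on_torus(0.4, 0.3))   # about 0.3, bounded away from zero
# Here |a10| + |a01| >= 1 and the denominator vanishes somewhere on the bicircle:
print(min_denominator_on_torus(0.6, 0.5))   # close to zero
```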