Volterra series

The Volterra series is a mathematical model for representing nonlinear dynamical systems as an infinite sum of multidimensional integrals, analogous to the Taylor series expansion but adapted for functionals in infinite-dimensional spaces. It models the output y(t) of a system in response to an input x(t) through symmetric kernels h_n(\tau_1, \dots, \tau_n) of order n, where the first-order term reduces to the standard linear convolution for n=1, and higher-order terms capture nonlinear interactions with memory effects. Originating from the work of the Italian mathematician Vito Volterra in 1887 on the theory of analytic functionals, the series was later adapted for engineering applications by Norbert Wiener in 1942, who developed orthogonalized variants using Gaussian white-noise inputs to facilitate kernel identification. This development enabled practical use in analyzing systems where linear models fail, such as those exhibiting weak nonlinearities, with convergence guaranteed under conditions similar to those for the Taylor series, provided the input remains within a radius of convergence. Volterra series find extensive applications across disciplines, including electrical engineering for behavioral modeling of power amplifiers and predistortion to mitigate nonlinear distortion, control engineering for nonlinear model predictive control, and aerospace engineering for simulating aeroelastic phenomena like flutter and limit cycle oscillations. Key challenges involve estimating higher-order kernels, whose complexity grows combinatorially with order, often mitigated by sparse representations, basis-function expansions like Laguerre polynomials, or reduced-order approximations such as memory polynomials. Despite these hurdles, the series remains a cornerstone for black-box identification of time-invariant nonlinear systems with finite memory.

Introduction

Definition

The Volterra series provides a mathematical framework for representing the input-output behavior of nonlinear, time-invariant systems through an infinite sum of multidimensional convolutions. Formally, for a system with input x(t) and output y(t), the Volterra series is expressed as y(t) = \sum_{n=1}^\infty \int_{\mathbb{R}^n} h_n(\tau_1, \dots, \tau_n) \prod_{i=1}^n x(t - \tau_i) \, d\tau_1 \dots d\tau_n, where the h_n(\tau_1, \dots, \tau_n) are the Volterra kernels of order n, which are multidimensional functions capturing the system's nonlinear memory effects at different time delays \tau_i. The kernels h_n are typically assumed to be symmetric, meaning h_n(\tau_1, \dots, \tau_n) = h_n(\tau_{\sigma(1)}, \dots, \tau_{\sigma(n)}) for any permutation \sigma of the indices, which ensures uniqueness of the representation and avoids redundancy from input ordering. To enforce this symmetry when starting from potentially asymmetric kernels, a symmetrization operator is applied: \text{sym } h_n(\tau_1, \dots, \tau_n) = \frac{1}{n!} \sum_{\sigma \in S_n} h_n(\tau_{\sigma(1)}, \dots, \tau_{\sigma(n)}), where S_n is the set of all permutations of n elements; this convention simplifies computations and kernel identification. For the first-order term (n=1), the expression reduces to the standard linear convolution integral: y_1(t) = \int_{\mathbb{R}} h_1(\tau_1) x(t - \tau_1) \, d\tau_1, which describes the system's linear response and serves as the foundation upon which higher-order nonlinear terms build. This functional expansion originated in the work of Vito Volterra on nonlinear functionals in 1887, extending linear systems theory to handle nonlinear dynamics.
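
To make the expansion concrete, the following sketch evaluates a second-order truncation numerically by discretizing the convolution integrals on a time grid. It is a minimal illustration only: the exponential kernels, step size, and test input are arbitrary choices, not taken from any particular system.

```python
import numpy as np

# Minimal sketch: evaluate a second-order Volterra truncation by
# discretizing the convolution integrals with step dt.
# The kernels below are arbitrary illustrative choices.
dt = 0.02
tau = np.arange(0, 4, dt)                     # kernel memory support
h1 = np.exp(-tau)                             # first-order kernel
h2 = np.outer(np.exp(-tau), np.exp(-tau))     # separable, symmetric h2

t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 0.5 * t)               # test input

def volterra_output(x, h1, h2, dt):
    """y[n] ~= sum_k h1[k] x[n-k] dt + sum_{k,l} h2[k,l] x[n-k] x[n-l] dt^2."""
    M = len(h1)
    xp = np.concatenate([np.zeros(M), x])     # zero input before t = 0
    y = np.zeros(len(x))
    for n in range(len(x)):
        w = xp[n + M - np.arange(M)]          # window x[n], x[n-1], ...
        y[n] = h1 @ w * dt + w @ h2 @ w * dt**2
    return y

y = volterra_output(x, h1, h2, dt)
print(y[:5])
```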

Motivation

The Volterra series provides a foundational approach for modeling nonlinear systems by generalizing the convolution integral to higher-order terms, effectively serving as a Taylor series expansion for functionals on input signals. This extension allows for the representation of nonlinear input-output mappings in systems where memory effects are present, bridging the gap between linear systems theory and more complex nonlinear behaviors without relying on assumptions about the system's structure. A key advantage of the Volterra series lies in its ability to capture fading memory properties inherent in many physical systems, where the impact of distant past inputs on the current output diminishes, enabling efficient approximation with finite-order terms. It accommodates mild nonlinearities through symmetric multidimensional kernels that do not presuppose specific functional forms, such as polynomials, making it versatile for a broad class of time-invariant systems. Furthermore, in black-box modeling scenarios where underlying physical parameters are unknown, the series facilitates direct identification from input-output data, supporting prediction and design without detailed mechanistic knowledge. Despite these strengths, the Volterra series is an infinite expansion that typically requires truncation to a finite order for practical use, potentially introducing errors in strongly nonlinear regimes. Higher-order terms also lead to significant computational demands due to the exponential growth in kernel dimensions and the need for multidimensional convolutions, limiting its applicability to low-order models in resource-constrained settings.

History

Foundations in functional analysis

The foundations of the Volterra series lie in the early development of functional analysis and the study of integral equations by Vito Volterra (1860–1940), an Italian mathematician whose pioneering work established key concepts for nonlinear systems. In 1896, Volterra introduced integral equations of what became known as the Volterra type, initially focusing on linear forms but soon extending to nonlinear variants that captured dependencies on past values, motivated by problems in mathematical physics such as elasticity and hereditary phenomena. These equations, expressed as f(t) = g(t) + \int_a^t K(t, s, f(s)) \, ds, represented a shift from ordinary differential equations to functionals where the unknown appears inside the integral, laying the groundwork for analyzing nonlinear mappings in infinite-dimensional spaces. Volterra's seminal contributions culminated in his two-volume work, Theory of Functionals and of Integral and Integro-Differential Equations (original Italian edition 1912–1913; English translation 1930), which formalized the calculus of functionals as a generalization of classical analysis to mappings from functions to scalars or functions. In this framework, he developed the notion of successive approximations and series expansions for solving nonlinear functional equations, implicitly introducing expansions that prefigure the Volterra series for representing nonlinear operators. The text emphasized the composition and inversion of functionals, providing tools for dissecting complex nonlinear behaviors into hierarchical terms based on input history. A central concept emerging from Volterra's analysis is the Volterra operator, viewed retrospectively as a compact linear operator on Banach spaces of continuous or integrable functions, such as C[a,b] or L^2, which maps inputs to outputs via kernel convolutions and underpins the expansion of nonlinear functionals as sums of homogeneous operators of increasing degree. This operator-theoretic perspective, rooted in Volterra's functional calculus, enabled the decomposition of nonlinear systems into multilinear components, influencing later abstract treatments in functional analysis. Early theoretical advancements included existence and uniqueness theorems for solutions to Volterra integral equations, established by Volterra through successive approximations and contraction-like arguments, with further refinements by contemporaries such as Michele Picone in the early twentieth century, who extended these results to boundary value problems and integro-differential forms using variational methods. The transition from solving nonlinear Volterra equations to explicit series representations arose naturally in Volterra's approach to functionals, where iterative substitutions yielded power series-like expansions in terms of multiple integrals, capturing memory effects without assuming differentiability. These implicit expansions, detailed in his treatise, provided the mathematical blueprint for the Volterra series as a tool for nonlinear system representation, bridging pure functional analysis with potential applications in physics and engineering, though Volterra's focus remained on theoretical rigor rather than practical computation.

Engineering applications and extensions

The adaptation of Volterra series for engineering applications began prominently in the mid-20th century, building on its mathematical foundations to address practical nonlinear system analysis. In the late 1940s and 1950s, Norbert Wiener extended the Volterra framework by developing an orthogonal expansion, known as the Wiener series, which reorganized the general non-orthogonal Volterra basis into mutually orthogonal terms suitable for white Gaussian noise inputs, facilitating statistical analysis of nonlinear systems in random environments. This orthogonalization highlighted the Volterra series' role as a foundational, more general representation for nonlinear functionals, influencing subsequent tools for filtering and prediction. Key developments in the 1950s and 1960s advanced the practical use of Volterra series for nonlinear system representations. In 1958, M. B. Brilliant provided a rigorous theory for analyzing lumped nonlinear systems using Volterra series, establishing conditions for convergence and applicability to electrical networks, which bridged abstract functional analysis with circuit design. In the early 1970s, E. Bedrosian and S. O. Rice furthered system identification techniques by deriving output autocorrelation functions for Volterra systems under random inputs, enabling practical estimation of kernels in communication and control contexts. Their work demonstrated how Volterra representations could model memory effects in dynamic systems, paving the way for applications beyond static nonlinearities. In the 1970s and 1980s, Volterra series saw expanded use in domains such as communication channels and control systems, where nonlinear distortions required accurate modeling. Researchers applied Volterra models to characterize intermodulation distortion in RF communication channels, improving equalization in analog and early digital systems. Concurrently, extensions to control systems leveraged Volterra series for stability analysis and feedback design in nonlinear processes, such as chemical reactors. The introduction of discrete-time Volterra series during this period accommodated sampled signals, with formulations for finite-memory kernels enabling efficient computation on early computers for applications like echo cancellation in telephony. Recent trends as of 2023 have integrated Volterra series with machine learning techniques for enhanced kernel estimation, particularly in RF modeling. Pruning methods have been applied to reduce Volterra model complexity for nonlinear calibration of data converters, achieving linearity improvements of around 6 dB. Kernel ridge regression has been employed for behavioral modeling and digital predistortion of RF power amplifiers, providing better accuracy than conventional polynomial approaches. Advanced models, such as mixture-of-experts and recurrent architectures, have been developed for power amplifier linearization in wideband systems, offering normalized mean square error improvements of around 3 dB over baselines and up to 50% reduction in runtime complexity. These advancements underscore the Volterra series' enduring relevance in high-impact areas like communications, where integration with machine learning facilitates scalable nonlinear compensation.

Mathematical formulation

Continuous-time Volterra series

The continuous-time Volterra series provides a functional expansion for representing the input-output behavior of nonlinear dynamical systems in the time domain, generalizing the convolution integral used in linear systems theory. For a causal system with input x(t) and output y(t), the series expresses the output as an infinite sum of multidimensional integrals involving symmetric kernel functions h_n(\tau_1, \dots, \tau_n), where each kernel captures the nth-order nonlinear interaction. The full expansion is given by y(t) = \sum_{n=1}^\infty \int_0^\infty \cdots \int_0^\infty h_n(\tau_1, \dots, \tau_n) \prod_{i=1}^n x(t - \tau_i) \, d\tau_1 \cdots d\tau_n, with the integration limits from 0 to \infty enforcing causality, such that h_n(\tau_1, \dots, \tau_n) = 0 if any \tau_i < 0. This form assumes the kernels are measurable functions or distributions that decay sufficiently fast to ensure integrability. The validity of this expansion relies on key assumptions about the underlying system functional. Specifically, the system must be analytic, meaning it admits a power series representation in the input functional space, often in a neighborhood of the zero input. Additionally, the system exhibits fading memory, where the influence of distant past inputs diminishes, which, combined with bounded inputs in appropriate norms (e.g., L^2 or uniform bounds), guarantees convergence of the series within a radius determined by the growth of the kernel norms: \limsup_{n \to \infty} \|h_n\|^{1/n} < \infty. These conditions ensure the series converges absolutely for inputs with sufficient smoothness and bounded energy, preventing divergence in practical applications like amplifier modeling or circuit analysis. The first-order term corresponds to the linear component, reducing to the standard convolution integral: y_1(t) = \int_0^\infty h_1(\tau_1) x(t - \tau_1) \, d\tau_1, which describes the system's response under small-signal approximations. The second-order term introduces quadratic nonlinearities through bilinear interactions: y_2(t) = \int_0^\infty \int_0^\infty h_2(\tau_1, \tau_2) x(t - \tau_1) x(t - \tau_2) \, d\tau_1 \, d\tau_2, capturing effects such as harmonic generation or intermodulation in nonlinear devices. Higher-order terms follow analogously, with increasing complexity in the kernel dimensionality. For frequency-domain analysis, the kernels are transformed using the multidimensional Fourier transform, defined as H_n(j\omega_1, \dots, j\omega_n) = \int_0^\infty \cdots \int_0^\infty h_n(\tau_1, \dots, \tau_n) \exp\left(-j \sum_{i=1}^n \omega_i \tau_i \right) d\tau_1 \cdots d\tau_n, which yields generalized frequency response functions. These allow the output spectrum to be expressed as a sum of products of input spectra weighted by the H_n, facilitating the study of nonlinear distortion and stability in the frequency domain.
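
As a concrete check of the second-order term and its frequency-domain description, the sketch below computes y_2(t) for a sinusoidal input and compares it against the generalized frequency response H_2. It is an illustration under an assumed separable kernel h_2(\tau_1, \tau_2) = e^{-a\tau_1} e^{-a\tau_2}; none of the numbers come from a referenced system.

```python
import numpy as np

# Second-order term for a separable kernel h2(t1,t2) = g(t1) g(t2),
# g(t) = exp(-a t): then y2(t) = [(g * x)(t)]^2 and
# H2(jw1, jw2) = G(jw1) G(jw2) with G(jw) = 1 / (a + jw).
a, w = 1.0, 2 * np.pi * 0.5
dt = 1e-3
tau = np.arange(0, 10, dt)
g = np.exp(-a * tau)

t = np.arange(0, 40, dt)
x = np.cos(w * t)
y2 = (np.convolve(x, g)[:len(t)] * dt) ** 2   # (g * x)^2, zero initial state

# Steady-state theory: the DC level and the 2nd-harmonic amplitude both
# equal |H2(jw, jw)| / 2 = |G(jw)|^2 / 2 for this kernel.
G = 1.0 / (a + 1j * w)
print("predicted DC / 2nd-harmonic amplitude:", abs(G) ** 2 / 2)

# Measure the DC level from the simulated tail (transient discarded).
tail = y2[len(t) // 2:]
print("measured DC:", tail.mean())
```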

Discrete-time Volterra series

The discrete-time Volterra series provides a framework for modeling nonlinear time-invariant systems using sampled signals, making it suitable for digital signal processing and computational implementations. Unlike continuous-time formulations, it replaces integrals with summations over discrete indices, allowing direct application to sequences of input x[k] and output y[k]. This adaptation facilitates analysis of systems where signals are digitized, such as in control engineering and communications. The general form of the discrete-time Volterra series expresses the output as an infinite sum of multidimensional convolutions: y[k] = \sum_{n=1}^{\infty} \sum_{m_1=0}^{\infty} \cdots \sum_{m_n=0}^{\infty} h_n(m_1, \dots, m_n) \prod_{i=1}^n x[k - m_i], where h_n(m_1, \dots, m_n) are the nth-order Volterra kernels, which are symmetric in their arguments for time-invariant systems. In practice, the series is truncated to a finite order N and memory length M (i.e., m_i \leq M) to ensure computational feasibility, as higher-order terms and infinite memory lead to exponential growth in parameters. This finite truncation approximates the full series while capturing dominant nonlinearities in many engineering applications. The discrete-time series relates to its continuous-time counterpart through sampling of the input and output signals, governed by the sampling theorem extended to nonlinear systems. Specifically, the Volterra sampling theorem requires that for an nth-order system, the input bandwidth and kernel frequencies be limited to avoid aliasing in the output; for second-order terms, signals must be bandlimited to less than \pi/2 radians per sample to prevent output frequencies exceeding the Nyquist limit \pi. Aliasing arises because nonlinear interactions can double (or multiply) the frequency content, necessitating oversampling or bandlimiting to preserve accuracy in digital approximations. For frequency-domain analysis, the discrete-time Volterra series generalizes the z-transform to multidimensional forms for the kernels. The nth-order frequency response is given by the multidimensional z-transform H_n(z_1, \dots, z_n), such that the output transform satisfies Y(z_1, \dots, z_n) = H_n(z_1, \dots, z_n) \prod_{j=1}^n X(z_j) for single-input cases, enabling efficient computation of nonlinear distortions via z-domain multiplications. This approach is particularly useful for stability analysis and inverse system design in MIMO configurations. Computationally, low-order truncations (e.g., up to n=3) of the discrete series can be represented in matrix form, where the output vector \mathbf{y} relates to kernel coefficients \mathbf{b} via \mathbf{y} = \boldsymbol{\Phi} \mathbf{b}, with \boldsymbol{\Phi} constructed from input products. This linear-in-parameters structure allows efficient solution using least-squares methods, such as the Moore-Penrose pseudoinverse, reducing the dimensionality from O(M^n) to manageable sizes for real-time applications like power amplifier predistortion.
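
The matrix formulation just described can be sketched directly. The helper below (hypothetical names; memory length, order, and the toy system are all arbitrary choices) stacks delayed samples and their pairwise products into \boldsymbol{\Phi} and recovers the kernel vector with the Moore-Penrose pseudoinverse.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_regressors(x, M, N=2):
    """Columns: delayed samples x[k-m], then products of delayed samples up
    to order N; symmetry is exploited by keeping each unordered lag tuple once."""
    K = len(x)
    xp = np.concatenate([np.zeros(M - 1), x])
    D = np.stack([xp[M - 1 + np.arange(K) - m] for m in range(M)], axis=1)
    cols = [D[:, list(c)].prod(axis=1)
            for n in range(1, N + 1)
            for c in combinations_with_replacement(range(M), n)]
    return np.stack(cols, axis=1)

# Toy second-order system: y[k] = 0.8 x[k] - 0.4 x[k-1] + 0.3 x[k] x[k-1].
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = 0.8 * x
y[1:] += -0.4 * x[:-1] + 0.3 * x[1:] * x[:-1]

Phi = volterra_regressors(x, M=3, N=2)
b = np.linalg.pinv(Phi) @ y       # Moore-Penrose least-squares solution
print(np.round(b, 3))             # nonzero entries recover the true kernels
```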

Theoretical properties

Convergence and existence

The existence of Volterra series representations for nonlinear functionals traces back to Vito Volterra's pioneering work on the theory of functionals in the late 19th century, where he demonstrated that analytic functionals possess unique expansions in terms of polynomials or entire functions, ensuring a formal series representation under suitable regularity conditions. This foundational result guarantees that systems describable by analytic mappings admit a unique Volterra series decomposition, provided the functionals are sufficiently smooth, as extended in subsequent analyses for control systems where the input enters linearly. Convergence criteria for Volterra series are analyzed within the framework of analytic functionals in Banach spaces, where the series acts as a Taylor expansion of operators from L^\infty to L^\infty, with kernels interpreted as bounded measures. The radius of convergence is determined via a majorant series approach, defined as \rho = \left( \limsup_{n \to \infty} \|h_n\|^{1/n} \right)^{-1}, ensuring absolute and uniform convergence for inputs u satisfying \|u\| < \rho. For the series to exist and converge, the kernels must satisfy \limsup_{n \to \infty} \|h_n\|^{1/n} < \infty, with local inverses existing if the first-order kernel is invertible. A key condition ensuring convergence, particularly for time-invariant systems with memory, is the fading memory property, which requires that the kernels decay sufficiently fast: |h_n(\tau_1, \dots, \tau_n)| \leq C / (1 + \sum_{i=1}^n |\tau_i|)^\beta for some constant C > 0 and \beta > n-1. This condition implies that distant past inputs have negligible influence, allowing uniform approximation by finite-order Volterra series over classes of bounded inputs, such as those continuous on \mathbb{R} and bounded in norm by some fixed value. Error bounds for truncating the infinite Volterra series to a finite order k provide quantitative guarantees for accuracy, with the truncation error estimated as \left| \sum_{n=k+1}^\infty \int h_n(\tau_1, \dots, \tau_n) \prod_{i=1}^n u(t - \tau_i) \, d\tau_1 \dots d\tau_n \right| \leq \sum_{n=k+1}^\infty \|h_n\| \|u\|^n = o(\|u\|^k) as k \to \infty for \|u\| < \rho. These bounds highlight the series' utility for practical approximations, where higher-order terms become negligible within the convergence radius.
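
For intuition, a short worked example under an assumed geometric growth of the kernel norms (an illustrative assumption, not a general property) makes the radius and the truncation bound explicit:

```latex
% Suppose \|h_n\| = G L^n for constants G, L > 0. Then
%   \limsup_{n \to \infty} \|h_n\|^{1/n} = L, so \rho = 1/L.
% For \|u\| < 1/L, the tail after order k is a geometric series:
\left| y(t) - \sum_{n=1}^{k} y_n(t) \right|
  \le \sum_{n=k+1}^{\infty} G \,(L\|u\|)^n
  = G\, \frac{(L\|u\|)^{k+1}}{1 - L\|u\|},
% which decays geometrically in k, consistent with the o(\|u\|^k) estimate.
```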

Kernel symmetries and reduction

Volterra kernels possess an inherent symmetry property arising from the nature of the multilinear functionals in the series expansion. Specifically, the nth-order kernel h_n(\tau_1, \dots, \tau_n) satisfies h_n(\tau_1, \dots, \tau_i, \dots, \tau_j, \dots, \tau_n) = h_n(\tau_1, \dots, \tau_j, \dots, \tau_i, \dots, \tau_n) for any transposition of the time indices i and j, and more generally for all permutations in the symmetric group S_n. This symmetry stems from the fact that the input product \prod_{k=1}^n x(t - \tau_k) in the Volterra integral is itself symmetric under index permutations, allowing the kernel to be represented without loss of generality in a symmetrized form. To exploit this property, the symmetrized kernel is defined as \hat{h}_n(\tau_1, \dots, \tau_n) = \frac{1}{n!} \sum_{\pi \in S_n} h_n(\pi(\tau_1, \dots, \tau_n)), where the sum is over all n! permutations \pi of the arguments. This symmetrization ensures that the kernel is invariant under permutations while preserving the original functional mapping. In discrete-time implementations or parametric estimations, the symmetry reduces the number of independent components significantly; for an nth-order kernel discretized over M time lags, the total parameters drop from (M+1)^n (without symmetry) to \binom{M + n}{n} (for the symmetric case), effectively dividing the parameter count by a factor approaching n! for large M and distinct lags. The reduction factor aligns with the order of the symmetry group S_n, which has n! elements, allowing computation over unique multisets rather than all ordered tuples. This symmetry enables the elimination of redundant integrals in the evaluation of the Volterra series. The nth-order term, originally a multiple integral over the full [0, \infty)^n domain, can be rewritten using the symmetrized kernel as n! \int_{\tau_1 \geq \tau_2 \geq \dots \geq \tau_n \geq 0} \hat{h}_n(\tau_1, \dots, \tau_n) \prod_{k=1}^n x(t - \tau_k) \, d\tau_1 \dots d\tau_n, confining the integration to the ordered region \tau_1 \geq \dots \geq \tau_n, which constitutes 1/n! of the full hypercube volume. This reduction avoids overcounting equivalent contributions from permuted arguments, streamlining numerical implementations and analytical derivations. For higher-order kernels, the growth in dimensionality—both the integration domain and the parameter count scale exponentially with n—is substantially mitigated by symmetry exploitation. Without symmetry, the computational burden escalates rapidly with n, but symmetrization bounds the effective complexity, facilitating practical approximations up to moderate orders (e.g., n=3 or 4) in applications like system identification. This efficiency also aids convergence analysis by tightening bounds on symmetric kernel norms, as the symmetrized form can only decrease the gain function and enlarge the radius of convergence compared to unsymmetric representations.
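
The parameter-count reduction is easy to verify numerically. The sketch below (illustrative, with arbitrary sizes; lags run over 0..M-1, so the counts are M^n versus \binom{M+n-1}{n}) symmetrizes a random third-order kernel and compares ordered tuples with unordered multisets.

```python
import numpy as np
from itertools import permutations
from math import comb, factorial

def symmetrize(h):
    """Average a kernel tensor over all permutations of its axes."""
    n = h.ndim
    return sum(np.transpose(h, p) for p in permutations(range(n))) / factorial(n)

M, n = 8, 3                        # lags 0..M-1, third-order kernel
h = np.random.default_rng(1).standard_normal((M,) * n)
h_sym = symmetrize(h)              # invariant under any axis permutation

# Ordered lag tuples vs. unordered multisets of n lags out of M:
print("full parameters:     ", M ** n)               # 512
print("symmetric parameters:", comb(M + n - 1, n))    # 120
# The ratio approaches n! = 6 as M grows and repeated lags become rare.
```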

Kernel estimation methods

Crosscorrelation method

The crosscorrelation method is a statistical approach for estimating the first- and second-order kernels of a Volterra series by leveraging correlations between the system's input and output signals under specific excitation conditions. This technique exploits the properties of white noise inputs to isolate contributions from different orders in the series expansion. A fundamental assumption is that the input signal is Gaussian white noise, which ensures decorrelation between different Volterra terms and enables order separation through higher-order statistics; the input must have zero mean and autocorrelation R_{xx}(\tau) = \sigma^2 \delta(\tau), where \sigma^2 is the variance. Non-Gaussian or colored inputs can introduce cross-term interference, compromising accuracy. The procedure begins with computing time-averaged cross-correlation functions from measured input-output data. The first-order cross-correlation is given by R_{yx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T y(t) x(t - \tau) \, dt, from which the first-order kernel is estimated as h_1(\tau) = \frac{R_{yx}(\tau)}{R_{xx}(0)}, since R_{xx}(0) = \sigma^2. For the second-order kernel, the relevant higher-order correlation is the second-order input cross-correlation with the output, R_{yxx}(\tau_1, \tau_2) = \lim_{T \to \infty} \frac{1}{T} \int_0^T y(t) x(t - \tau_1) x(t - \tau_2) \, dt. Under Gaussian white noise excitation, this simplifies to yield the symmetric second-order kernel via h_2(\tau_1, \tau_2) = R_{yxx}(\tau_1, \tau_2) / (2 \sigma^4), accounting for the symmetry h_2(\tau_1, \tau_2) = h_2(\tau_2, \tau_1). This method offers simplicity and low computational cost for low-order kernels, facilitating straightforward implementation with standard signal processing tools. However, its efficacy diminishes for higher orders due to the exponential growth in correlation dimensionality and sensitivity to noise or finite data lengths, often requiring approximations or truncation.
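
A minimal simulation of the procedure follows (illustrative; the true kernels, record length, and random seed are arbitrary). Subtracting the output mean before correlating also corrects the diagonal of the second-order estimate, where the white-noise autocorrelation would otherwise leak in.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
x = rng.normal(0, sigma, 200_000)              # Gaussian white-noise probe

# Ground-truth system (arbitrary choice), M = 4 lags.
M = 4
h1 = np.array([1.0, 0.5, -0.25, 0.1])
h2 = np.array([[0.20, 0.05, 0.0, 0.0],
               [0.05, -0.10, 0.0, 0.0],
               [0.0,  0.0,  0.0, 0.0],
               [0.0,  0.0,  0.0, 0.0]])        # symmetric

X = np.stack([np.concatenate([np.zeros(m), x[:len(x) - m]]) for m in range(M)])
y = h1 @ X + np.einsum('ij,ik,jk->k', h2, X, X)

yc = y - y.mean()                              # mean removal also fixes the
                                               # diagonal of the h2 estimate
h1_est = np.array([np.mean(yc * X[m]) for m in range(M)]) / sigma**2
h2_est = np.array([[np.mean(yc * X[i] * X[j]) for j in range(M)]
                   for i in range(M)]) / (2 * sigma**4)
print(np.round(h1_est, 2))
print(np.round(h2_est, 2))
```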

Multiple-variance method

The multiple-variance method for estimating Volterra kernels leverages the homogeneity property of nonlinear systems, where scaling the input by a factor A causes the nth-order output contribution to scale by A^n. By exciting the system with signals of varying power levels—typically the same base input scaled to produce input variances \sigma_m^2 for m = 1, \dots, M (with M greater than the expected model order)—the method observes how the output variance or moments scale differently across orders: the linear term proportionally to \sigma, the quadratic to \sigma^2, and higher-order terms to higher powers of \sigma. This power-dependent scaling enables the isolation of kernel contributions without requiring orthogonal inputs, making it suitable for practical identification scenarios. Kernel estimation proceeds by computing output moments (such as variance) from each measurement and fitting them to polynomials in the input scaling parameter \sigma (or equivalently, the gain A). The polynomial coefficients directly relate to the Volterra kernels h_n, which are extracted using a least-squares solution on the assembled data; for instance, the rth-order kernel component is obtained as h_{r,i} = \mathbf{e}_r^T (A A^T)^{-1} A \mathbf{d}_{L r,i}, where A is a Vandermonde-like matrix of scaling powers, \mathbf{d}_{L r,i} collects normalized output moments, and \mathbf{e}_r is a selection vector. This fitting approach minimizes the mean square deviation between measured and modeled outputs, providing robust estimates even when the true system order exceeds the model order. The procedure involves generating M excitation signals at distinct power levels, often using orthogonal periodic sequences or white Gaussian noise as the base input for decorrelation benefits, and recording the corresponding system outputs. These responses are then variance-normalized, and the least-squares fit is applied across all measurements to solve for the kernels simultaneously or sequentially (starting from lower orders at low variances to higher orders at increased levels). To optimize performance, gain factors A_m are selected to balance the contributions of different orders, minimizing estimation error through numerical optimization. This method is particularly effective for memoryless systems or those with short memory lengths, where inter-order interference is limited, and it outperforms correlation-based techniques like the crosscorrelation method when inputs are non-white, as the power-scaling principle reduces sensitivity to input statistics beyond variance. In contrast to the crosscorrelation method, which provides a linear baseline but struggles with nonlinear order separation under colored inputs, the multiple-variance approach uses amplitude variation to disentangle orders more reliably across a range of signal dynamics.
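
The order-separation step can be sketched directly (illustrative; the toy system and the gain levels are arbitrary): record the output of the same base input at several gains and solve a Vandermonde system per time sample to split the response into homogeneous orders.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)                   # base excitation

def system(u):
    """Toy system with linear, quadratic and cubic parts (arbitrary)."""
    u1 = np.concatenate([[0.0], u[:-1]])        # one-sample delay
    return 0.9 * u + 0.3 * u * u1 + 0.1 * u**3

gains = np.array([0.5, 1.0, 1.5])               # M = 3 >= model order
Y = np.stack([system(A * x) for A in gains])    # responses at each level

# y_A(t) = sum_n A^n y_n(t): solve the Vandermonde system for y_1, y_2, y_3.
V = np.vander(gains, 3, increasing=True) * gains[:, None]  # columns A, A^2, A^3
Yn = np.linalg.solve(V, Y)                      # rows: 1st/2nd/3rd-order parts

# Each homogeneous part can now be fitted separately for its kernel;
# here we just verify the split against the known linear term at unit gain.
print(np.allclose(Yn[0], 0.9 * x, atol=1e-8))
```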

Feedforward neural networks

Feedforward neural networks offer a learning-based approach to approximate and estimate Volterra kernels by representing the nonlinear system's input-output mapping through layered architectures. In this method, multilayer perceptrons (MLPs) with polynomial or other nonlinear activation functions are employed to mimic the polynomial structure of Volterra terms, where the kernels are directly mapped to the network's weights and biases. For instance, polynomial perceptrons, a specialized class of three-layer perceptrons using polynomial activations, derive higher-order kernels from the trained weights via explicit formulas, enabling efficient representation of complex nonlinearities. Training these networks involves backpropagation on datasets of input stimuli and corresponding system responses, minimizing the mean squared error to optimize the parameters and yield kernel estimates. This supervised learning process allows the network to adapt to the system's dynamics without requiring prior knowledge of kernel symmetries or orders, with convergence typically achieved in hundreds to thousands of iterations depending on the nonlinearity degree. Applications, such as in radar signal processing for buried object detection, demonstrate that trained MLPs can extract Volterra kernels with over 80% classification accuracy in noisy environments. A primary advantage of this approach is its scalability to high-order Volterra series via additional hidden layers, which compactly model interactions that direct polynomial expansions cannot handle efficiently due to the curse of dimensionality. The architecture inherently enforces kernel symmetries—such as time-invariance and interchangeability—through shared weights in symmetric pathways, reducing parameters and improving generalization. Furthermore, time-delay feedforward networks extend this by incorporating lagged inputs to capture memory effects, allowing kernel extraction with errors below 10% for second- and third-order terms in aerodynamic applications. Hybrid Volterra-neural models enhance this framework by combining Volterra polynomials with neural components for both memory and nonlinearity; for example, Laguerre-Volterra feedforward neural networks use Laguerre filters in the input layer to orthogonalize temporal basis functions, followed by a single hidden layer for nonlinear mapping, significantly reducing model size while maintaining accuracy in high-speed communication link modeling. These hybrids outperform pure neural network models in computational efficiency, with training enabling precise kernel estimates for systems exhibiting both fading memory and strong nonlinear coupling.
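
As a sketch of the weight-to-kernel mapping idea (a simplified illustration, not the specific published architectures; all sizes, rates, and the toy system are arbitrary), a small time-delay network with tanh hidden units is fitted to a Volterra system, and the first-order kernel is then read off from a first-order Taylor expansion of the trained network about zero input, h_1(m) \approx \sum_j w^{(2)}_j (1 - \tanh^2 b_j) W^{(1)}_{jm}.

```python
import numpy as np

rng = np.random.default_rng(4)
M, H = 3, 16                                     # memory length, hidden units

h1_true = np.array([0.8, -0.3, 0.1])             # arbitrary target system
def target(U):                                   # U: (samples, M) delay vectors
    return U @ h1_true + 0.15 * U[:, 0] * U[:, 1]

x = rng.uniform(-0.5, 0.5, 2000)
U = np.stack([np.roll(x, m) for m in range(M)], axis=1)
U[:M] = 0.0                                      # zero the wrapped history
y = target(U)

# Tiny time-delay MLP: yhat = w2 @ tanh(W1 u + b1) + b2, full-batch descent.
W1 = rng.normal(0, 0.5, (H, M)); b1 = np.zeros(H)
w2 = rng.normal(0, 0.5, H);      b2 = 0.0
lr = 0.05
for _ in range(2000):
    A = np.tanh(U @ W1.T + b1)
    e = A @ w2 + b2 - y                          # residuals
    gA = np.outer(e, w2) * (1 - A**2)            # backprop through tanh
    W1 -= lr * gA.T @ U / len(y); b1 -= lr * gA.mean(0)
    w2 -= lr * A.T @ e / len(y);  b2 -= lr * e.mean()

# First-order kernel from a Taylor expansion of the net at zero input;
# recovery is approximate and depends on how well the fit converged.
h1_est = (w2 * (1 - np.tanh(b1)**2)) @ W1
print(np.round(h1_est, 2), "vs", h1_true)
```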

Exact orthogonal algorithm

The exact orthogonal algorithm provides a precise method for estimating Volterra kernels from finite input-output data records of a nonlinear system, extending the principles of Wiener's identification approach to arbitrary input signals. Introduced by Korenberg, Bruder, and McIlroy, the algorithm constructs an orthogonal basis tailored to the specific dataset, enabling direct computation of kernels without reliance on white Gaussian noise assumptions inherent in some traditional techniques. The core of the algorithm involves building orthogonal functions from powers of the input signal, analogous to applying the Gram-Schmidt procedure to tensor products of the input. This begins with the zeroth- and first-order terms, which are already orthogonal in the data inner product space defined by the finite record. Higher-order basis functions are then generated iteratively: each candidate term from the input's p-th power is orthogonalized by subtracting its projections onto all previously constructed basis functions of orders up to p-1, ensuring the new function lies in the subspace orthogonal to lower-order contributions. With the orthogonal basis established, the output signal is projected onto each basis function using the data-defined inner product, yielding the corresponding orthogonal expansion coefficients. These coefficients directly correspond to the Volterra kernels in the transformed domain, as the orthogonality diagonalizes the series and eliminates cross-order interference; the original kernels can then be recovered via back-projection if needed. Mathematically, this relies on the projection theorem in the Hilbert space of square-integrable functions over the finite data, guaranteeing exact recovery of the kernels for noise-free data under ideal conditions, even for non-Gaussian or correlated inputs. The algorithm's design ensures minimal interference between kernel orders, providing superior accuracy compared to methods like crosscorrelation for finite datasets. Kernel symmetries can aid this process by reducing the dimensionality of the basis prior to orthogonalization. Computationally, the procedure scales polynomially with the Volterra series order but exponentially with the input dimension or memory length, owing to the rapid increase in the number of tensor product terms (e.g., binomial coefficients for symmetric kernels); it remains feasible in practice for truncations up to third order, beyond which sparsity or approximations are typically required.
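
The sketch below illustrates the data-driven orthogonalization and projection steps for a second-order model on a colored input; a QR factorization stands in for the explicit Gram-Schmidt recursion of the published algorithm, and everything else is an arbitrary toy setup.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(5)
# Colored (non-white) input: the method does not require whiteness.
e = rng.standard_normal(3000)
x = np.convolve(e, [1.0, 0.7, 0.2], mode='same')

M = 3
xp = np.concatenate([np.zeros(M - 1), x])
D = np.stack([xp[M - 1 + np.arange(len(x)) - m] for m in range(M)], axis=1)

# Candidate functions: constant, lags, unordered lag products (order <= 2).
cols = [np.ones(len(x))] + [D[:, m] for m in range(M)] + \
       [D[:, i] * D[:, j] for i, j in combinations_with_replacement(range(M), 2)]
Phi = np.stack(cols, axis=1)

y = 0.5 * D[:, 0] - 0.2 * D[:, 2] + 0.1 * D[:, 0] * D[:, 1]   # toy system

# QR gives an orthogonal basis Q in the data inner product; projecting y
# onto Q yields the orthogonal expansion coefficients g, and back-
# substitution through R recovers the kernels in the original basis.
Q, R = np.linalg.qr(Phi)
g = Q.T @ y
h = np.linalg.solve(R, g)
print(np.round(h, 3))
```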

Linear regression

In the context of discrete-time Volterra series, linear regression techniques provide a straightforward approach to estimate the kernels by reformulating the nonlinear model as a linear-in-the-parameters problem through vectorization. The n-th order term of the output is expressed as y_n(t) = \sum_{k_1=0}^{M-1} \cdots \sum_{k_n=0}^{M-1} h_n(k_1, \dots, k_n) \prod_{i=1}^n u(t - k_i), where M is the memory length, h_n is the n-th order kernel, and u(t) is the input. This can be vectorized as \mathbf{y}_n = \Phi_n \mathbf{h}_n, where \mathbf{y}_n is the vector of output contributions at time indices, \Phi_n is the regressor matrix whose rows consist of all possible products of n delayed input samples (accounting for symmetries to reduce redundancy), and \mathbf{h}_n is the vectorized kernel coefficients. This representation allows the use of standard linear estimation methods while preserving the multi-dimensional structure of the kernels. Kernel estimation proceeds sequentially by solving ordinary least-squares (OLS) problems for each order. For the first-order kernel, the linear term is estimated as \hat{\mathbf{h}}_1 = (\Phi_1^T \Phi_1)^{-1} \Phi_1^T \mathbf{y}, where \mathbf{y} is the observed output vector. Higher-order kernels are then fitted to the residuals after subtracting the contributions from lower orders: define the residual \mathbf{r}_{n-1} = \mathbf{y} - \sum_{m=1}^{n-1} \Phi_m \hat{\mathbf{h}}_m, and solve \hat{\mathbf{h}}_n = (\Phi_n^T \Phi_n)^{-1} \Phi_n^T \mathbf{r}_{n-1}. This iterative refinement isolates the nonlinear effects order by order, improving accuracy by reducing interference from dominant lower-order dynamics. The process typically truncates at a finite order N, yielding an approximate model \hat{y}(t) = \sum_{n=1}^N \Phi_n(t) \hat{\mathbf{h}}_n. Such sequential OLS is computationally efficient for moderate memory lengths and has been applied in physiological system modeling, where it effectively captures graded nonlinearities. A key challenge in this approach arises from multicollinearity in the regressor \Phi_n, particularly for higher orders, due to correlations among the input products stemming from non-white or persistent excitations. This leads to ill-conditioned normal matrices \Phi_n^T \Phi_n, amplifying noise and causing unstable estimates. To mitigate this, regularization techniques such as ridge regression are employed, modifying the solution to \hat{\mathbf{h}}_n = (\Phi_n^T \Phi_n + \lambda \mathbf{I})^{-1} \Phi_n^T \mathbf{r}_{n-1}, where \lambda > 0 is a tuning parameter that penalizes large kernel values and stabilizes inversion. More advanced formulations incorporate structured priors, like smoothness constraints on the kernels, via a regularization matrix \mathbf{D} in \hat{\mathbf{h}}_n = (\Phi_n^T \Phi_n + \mathbf{D})^{-1} \Phi_n^T \mathbf{r}_{n-1}, drawing from Bayesian interpretations to enforce decaying and low-rank kernel structures. These methods have demonstrated improved estimation robustness in short-data scenarios, such as mechanical systems with transient effects.
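
A compact sketch of the sequential, residual-based estimation with ridge regularization follows (illustrative; the toy system, memory length, and \lambda are arbitrary choices, and the regressor construction mirrors the discrete-time formulation above).

```python
import numpy as np
from itertools import combinations_with_replacement

def regressors(x, M, order):
    """Columns of products of `order` delayed samples (unordered lags)."""
    xp = np.concatenate([np.zeros(M - 1), x])
    D = np.stack([xp[M - 1 + np.arange(len(x)) - m] for m in range(M)], axis=1)
    return np.stack([D[:, list(c)].prod(axis=1)
                     for c in combinations_with_replacement(range(M), order)],
                    axis=1)

def ridge(Phi, r, lam):
    """Ridge solution (Phi^T Phi + lam I)^{-1} Phi^T r."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ r)

rng = np.random.default_rng(6)
x = rng.standard_normal(3000)
M = 3
P1, P2 = regressors(x, M, 1), regressors(x, M, 2)
y = P1 @ [0.9, -0.4, 0.1] + P2 @ [0.2, 0, 0, -0.1, 0, 0] \
    + 0.01 * rng.standard_normal(len(x))        # small measurement noise

h1 = ridge(P1, y, lam=1e-3)                     # first-order fit
h2 = ridge(P2, y - P1 @ h1, lam=1e-3)           # fit the residual for order 2
print(np.round(h1, 2), np.round(h2, 2))
```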

Kernel methods

Kernel methods for estimating Volterra series leverage reproducing kernel Hilbert spaces (RKHS) to address the challenges of high-dimensional kernel functions inherent in direct Volterra estimation. In this framework, the Volterra kernels are embedded into an RKHS, where the input-output mapping of the system is represented as an element of the space. The reproducing property of the kernel ensures that point evaluations are continuous linear functionals, allowing the Volterra operator to be approximated without explicitly computing the high-order tensors. This approach is particularly suited for discrete-time Volterra series, where the kernels h_n for order n are functions over multi-dimensional domains. The kernel trick enables implicit computation of high-order interactions by replacing dot products in the feature space with kernel evaluations, thus avoiding the exponential growth in parameters associated with direct tensor estimation. Specifically, the Volterra series output is expressed as a linear combination in the RKHS: \hat{y}(t) = \sum_{j=1}^m c_j K(u_j, u), where K is the kernel function, u_j are input samples, and c_j are coefficients determined during estimation. This representation encodes the nonlinear dependencies through the kernel's feature map, facilitating computations in the primal space for efficiency. For Volterra systems, the functional gradients approximating the kernels h_n are derived from the RKHS inner products, incorporating symmetries such as time-invariance to reduce redundancy. Estimation proceeds via kernel ridge regression, formulated as a regularized least-squares problem in the RKHS: \hat{\alpha} = (K + \gamma I_N)^{-1} y, where K is the Gram matrix with entries K_{ij} = K(u_i, u_j), y is the output vector, N is the number of samples, and \gamma > 0 is the regularization parameter controlling model complexity via the RKHS norm. The predicted output is then \hat{y}(u) = k(u)^T \hat{\alpha}, with k(u) = [K(u_1, u), \dots, K(u_N, u)]^T. This method approximates the Volterra kernels h_n by expanding the kernel to capture higher-order terms, such as through Taylor-like series in the feature space. The approach extends ridge regression principles by nonlinearly embedding the inputs, enabling estimation of infinite-degree series in a finite-dimensional manner. A primary advantage of kernel methods is their ability to mitigate the curse of dimensionality, as the effective dimensionality is governed by the kernel's hyperparameters rather than the order n of the Volterra series, allowing identification of high-order systems with moderate data. Additionally, the Bayesian interpretation of kernel ridge regression links it to Gaussian processes, providing uncertainty quantification through the posterior variance, which is valuable for assessing model reliability in applications like predictive control. Specific kernels tailored to Volterra structures include multi-dimensional radial basis function (RBF) kernels for smooth approximations, defined as K(\mathbf{u}, \mathbf{v}) = \exp\left( -\frac{\|\mathbf{u} - \mathbf{v}\|^2}{2\sigma^2} \right) in the multi-index domain, and polynomial kernels K(\mathbf{u}, \mathbf{v}) = (\mathbf{u}^T \mathbf{v} + c)^d to encode monomials up to degree d. To exploit Volterra symmetries, multiplicative kernels (MPK) are used, constructed as products of univariate kernels with decay factors, such as the exponentially decaying MPK (SED-MPK), which incorporates prior knowledge on kernel attenuation: K(\mathbf{u}, \mathbf{v}) = \prod_{k=1}^p \exp(-\lambda_k |u_k - v_k|) \cdot \left(\sum_{k=1}^p u_k v_k + c\right)^{r}. These kernels ensure symmetry and time-invariance, enhancing accuracy.
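
A minimal kernel ridge regression sketch follows (illustrative; a degree-2 polynomial kernel over delay-embedded inputs implicitly spans all Volterra monomials up to order 2, and every numerical choice here is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(800)
M = 4
xp = np.concatenate([np.zeros(M - 1), x])
U = np.stack([xp[M - 1 + np.arange(len(x)) - m] for m in range(M)], axis=1)
y = 0.7 * U[:, 0] - 0.2 * U[:, 2] + 0.25 * U[:, 0] * U[:, 1] - 0.1 * U[:, 1]**2

def poly_kernel(A, B, c=1.0, d=2):
    """K(u, v) = (u.v + c)^d implicitly encodes all monomials up to degree d."""
    return (A @ B.T + c) ** d

Ntr = 600
K = poly_kernel(U[:Ntr], U[:Ntr])                          # Gram matrix
alpha = np.linalg.solve(K + 1e-3 * np.eye(Ntr), y[:Ntr])   # ridge in the RKHS

yhat = poly_kernel(U[Ntr:], U[:Ntr]) @ alpha               # prediction
print("test RMSE:", np.sqrt(np.mean((yhat - y[Ntr:])**2)))
```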

Differential sampling

Differential sampling is a technique for estimating Volterra kernels by approximating the partial derivatives of the system output with respect to input perturbations at specific time delays, leveraging finite-difference methods to isolate nonlinear contributions. The principle relies on the fact that the nth-order Volterra kernel h_n(\tau_1, \dots, \tau_n) can be approximated as the nth-order finite difference of the output y(t) with respect to scaled input variations \Delta x_i at lags \tau_i, given by h_n(\tau_1, \dots, \tau_n) \approx \frac{\Delta y(t)}{\Delta x_1 \cdots \Delta x_n}, evaluated at particular points where the perturbations are small. This approach stems from the functional expansion underlying the Volterra series, where kernels represent the coefficients of the multivariable Taylor expansion of the system's response. In practice, the method involves injecting controlled perturbations into the input signal, such as Dirac delta-like pulses or sinusoidal probes, at targeted lags \tau_1, \dots, \tau_n to elicit measurable output changes \Delta y(t). These probes are applied sequentially or in combinations across multiple input ensembles, with responses averaged over repeated trials to mitigate random variations and improve accuracy. For instance, in auditory neuroscience applications, double-pulse stimuli (e.g., clicks separated by varying intervals) serve as probes to capture second-order interactions up to 200 ms, with spike train outputs smoothed and averaged over 100 or more repetitions. The finite differences are then computed by subtracting unperturbed responses from perturbed ones, yielding estimates at discrete points that can be interpolated for continuous forms. This procedure assumes the system is causal and fading-memory, ensuring convergence of the Volterra expansion. One key advantage of differential sampling is its direct probing capability, which is particularly effective for identifying sparse kernels in systems where nonlinear terms are localized or isolated, avoiding the need for broad-band excitation. It proves valuable in experimental setups, such as physiological recordings, where inputs can be precisely controlled to reveal higher-order dynamics not accessible via linear methods. However, the method is highly sensitive to noise in the output measurements, necessitating extensive averaging that can prolong experiments and increase data requirements. Additionally, it demands fully controllable inputs, limiting its applicability to black-box or observational systems where perturbations cannot be imposed. Validation often involves cross-checking with correlation-based techniques to confirm kernel symmetry and amplitude.
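
The pulse-probing logic reduces to ordinary finite differences. The noise-free sketch below (arbitrary known system, hypothetical helper names) recovers h_1 from single pulses and an off-diagonal h_2 value from a pulse pair; real experiments would average many repetitions at each probe.

```python
import numpy as np

M = 4
h1 = np.array([0.6, 0.3, -0.1, 0.05])
h2 = 0.1 * np.eye(M); h2[0, 1] = h2[1, 0] = 0.08   # symmetric ground truth

def system(x):
    """Black box under test (here: an explicit second-order Volterra system)."""
    X = np.stack([np.concatenate([np.zeros(m), x[:len(x) - m]])
                  for m in range(M)])
    return h1 @ X + np.einsum('ij,ik,jk->k', h2, X, X)

eps, T = 1e-3, 16
y0 = system(np.zeros(T))

def pulse(*lags):
    x = np.zeros(T)
    for m in lags:
        x[T - 1 - m] += eps          # place pulses so lag m affects y[T-1]
    return system(x)[-1]

# First order: h1(m) ~ [y_pulse - y0] / eps  (quadratic term is O(eps^2)).
h1_est = np.array([(pulse(m) - y0[-1]) / eps for m in range(M)])

# Second order, off-diagonal: second mixed difference / (2 eps^2);
# the factor 2 reflects the symmetric-kernel convention.
i, j = 0, 1
d2 = pulse(i, j) - pulse(i) - pulse(j) + y0[-1]
print(np.round(h1_est, 3), np.round(d2 / (2 * eps**2), 3))
```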

Applications

Nonlinear system identification

The Volterra series provides a powerful framework for identifying unknown nonlinear dynamical systems from input-output data by representing the system's response as a functional expansion in terms of multidimensional convolutions. The process typically begins with truncating the infinite series to a finite order P, often selected based on prior knowledge of the system's nonlinearity or through iterative testing, to approximate the system's behavior while ensuring computational feasibility. Kernels are then estimated using techniques such as least-squares optimization or kernel-based methods applied to measured data, capturing both memory effects and nonlinear interactions. Validation involves simulating the identified model with held-out inputs and assessing the discrepancy between predicted and actual outputs, often through error metrics to confirm the model's predictive accuracy. In chemical process engineering, Volterra series have been applied to identify reactors exhibiting nonlinear dynamics, such as processes where input variations like temperature or feed rate lead to asymmetric output responses. For instance, second-order Volterra models were identified for a methyl methacrylate polymerization reactor using tailored input sequences that minimize operational disruptions while emphasizing key nonlinear parameters, achieving accurate representation of the reactor's input-output relations. In mechanical systems with memory effects, such as hysteresis, Volterra series facilitate identification of structures like magneto-elastic beams or components modeled by the Bouc-Wen equation. These models capture path-dependent behaviors in vibration responses, as demonstrated in generalized function estimations for seismic nonlinearities, where truncated series approximate the hysteretic behavior and stiffness variations effectively. Model performance is evaluated using metrics like mean squared error (MSE) to quantify the average prediction discrepancy, with typical reductions in MSE by factors of 2-5 reported in validated cases compared to linear approximations. Cross-validation techniques, such as k-fold partitioning of the dataset, are employed to select the optimal truncation order P and prevent bias, ensuring the model's generalization across diverse input conditions. Key challenges in Volterra-based identification include overfitting, particularly in high-dimensional systems where the combinatorial growth in kernel parameters (the curse of dimensionality) leads to excessive model complexity and poor generalization. To mitigate this, regularization strategies like ridge regression are integrated, balancing fit and model complexity. Additionally, hybrid approaches combining Volterra series with physics-based constraints—such as incorporating conservation laws or structural priors into the estimation—address unmodeled dynamics and enhance robustness, as seen in frameworks that simultaneously optimize parametric physical models and data-driven corrections, yielding up to 50% error reductions in benchmarks. Kernel estimation methods like those described in the preceding section provide foundational tools for these workflows.
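
A minimal order-selection loop by held-out validation follows (illustrative; it reuses the regressor construction from the discrete-time formulation, and the toy system, noise level, and data split are arbitrary).

```python
import numpy as np
from itertools import combinations_with_replacement

def phi(x, M, P):
    """Regressor matrix with all unordered lag products up to order P."""
    xp = np.concatenate([np.zeros(M - 1), x])
    D = np.stack([xp[M - 1 + np.arange(len(x)) - m] for m in range(M)], axis=1)
    return np.stack([D[:, list(c)].prod(axis=1)
                     for n in range(1, P + 1)
                     for c in combinations_with_replacement(range(M), n)],
                    axis=1)

rng = np.random.default_rng(8)
x = rng.standard_normal(4000)
x1 = np.concatenate([[0.0], x[:-1]])
y = 0.8 * x - 0.3 * x1 + 0.2 * x * x1 + 0.05 * rng.standard_normal(len(x))

split = 3000
for P in (1, 2, 3):                      # candidate truncation orders
    Phi = phi(x, M=3, P=P)
    h = np.linalg.lstsq(Phi[:split], y[:split], rcond=None)[0]
    mse = np.mean((Phi[split:] @ h - y[split:]) ** 2)
    print(f"order {P}: held-out MSE = {mse:.4f}")
# Expect a large drop from P=1 to P=2 and little further gain at P=3.
```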

Control engineering

Volterra series are utilized in control engineering for nonlinear model predictive control (NMPC), where they provide a functional representation of plant dynamics to predict future states and optimize inputs under constraints. By truncating to low orders, these models enable efficient computation of nonlinear predictions, extending linear MPC frameworks to handle systems with significant nonlinearities, such as chemical processes or mechanical actuators. For example, second-order Volterra models have been employed in NMPC designs, incorporating nonlinear corrections to linear controllers for improved tracking and disturbance rejection. Robust variants address model uncertainties through min-max optimization or tube-based methods, ensuring stability in the presence of bounded disturbances. As of 2025, Volterra-based NMPC continues to be applied in real-time scenarios, including aerospace systems, demonstrating enhanced performance over linear approximations in simulations and experiments.

Aerospace engineering

In aerospace engineering, Volterra series model unsteady aerodynamics and aeroelastic phenomena, such as flutter and limit cycle oscillations (LCOs), by capturing nonlinear fluid-structure interactions from input-output data like angle of attack and lift responses. Reduced-order models using sparse Volterra kernels approximate high-fidelity simulations, enabling faster prediction of stability boundaries and LCO behaviors in aircraft wings or turbomachinery blades. For instance, Volterra series have been applied to simulate post-stall dynamics and LCOs in simplified aircraft models, achieving accurate replication of nonlinear responses observed in wind tunnel tests. These models facilitate control design to suppress flutter and support certification processes by quantifying nonlinearity effects. Recent advancements as of 2025 include parametric Volterra series integrated with machine learning for transonic flutter prediction across design spaces, reducing computational costs while maintaining fidelity to computational fluid dynamics results.

Signal processing and communications

In signal processing and communications, Volterra series are widely employed to model and mitigate nonlinear distortions in RF power amplifiers, where second- and third-order kernels capture intermodulation and harmonic distortion effects that degrade signal quality. These kernels enable behavioral modeling of amplifier nonlinearities, allowing for the prediction of distortion products under varying input conditions, such as multi-tone excitations, with normalized mean square error (NMSE) improvements in the range of 0.36×10⁻⁴ to 12×10⁻⁴ depending on saturation levels. Volterra series also facilitate equalization in nonlinear communication channels, including fiber optic and wireless systems, by inverting channel distortions through predistorters that compensate for effects like Kerr nonlinearity in optical fibers or fading-induced impairments in wireless links. In coherent dual-polarization optical systems, Volterra nonlinear equalizers (VNLEs) model these distortions with adjustable memory depth and polynomial order, achieving up to 0.35 dB optical signal-to-noise ratio (OSNR) gain at the forward error correction (FEC) limit compared to deep learning alternatives while reducing complexity by 65%. For optical wireless channels, Volterra-aided neural network architectures extend this compensation to visible light communications, enhancing signal recovery in multipath environments. In 5G systems, Volterra-based digital predistorters (DPDs) are integrated to linearize power amplifiers operating near saturation, supporting high-efficiency transmission across wide bandwidths. Volterra DPDs that account for carrier frequency dependencies outperform memoryless variants by achieving error vector magnitude (EVM) values as low as -30 dB with minimal memory depth (e.g., M_pred = 2), reducing nonlinear distortion in multi-frequency scenarios without excessive computational overhead. Audio processing applications leverage Volterra series to emulate harmonic generation in nonlinear devices, such as guitar distortion pedals, by deriving kernels from sine sweeps to model low-order distortions accurately. Up to fifth-order kernels enable real-time emulation that replicates harmonic content, improving emulation fidelity over linear methods for systems exhibiting weak nonlinearities. Performance gains from Volterra-based inverse modeling are evident in bit error rate (BER) reductions; for instance, in polarization-multiplexed fiber transmission, frequency-domain Volterra equalizers increase optimum launch power by 1–3 dB over linear methods, yielding BER improvements of 63–85% in single-channel setups and 33% in wavelength-division multiplexing (WDM) configurations at 100 Gbps rates.

Wiener series

The Wiener series provides an orthogonalized representation of nonlinear systems, extending the Volterra series by reorganizing its homogeneous polynomial terms into a set of uncorrelated functionals. Introduced by Norbert Wiener in his work on nonlinear processes driven by random inputs, the series expresses the system output as an infinite sum of G-functionals: y(t) = \sum_{n=0}^{\infty} G_n[x(t)], where each G_n is a functional of degree n constructed from Volterra operators up to order n, ensuring orthogonality with respect to Gaussian white noise inputs. These G-functionals incorporate both leading kernels k^{(p)} and derived lower-order terms to subtract lower-order contributions, guaranteeing that the outputs G_m and G_n for m \neq n have zero cross-correlation when the input is a stationary Gaussian process. The relation between the Volterra and Wiener series lies in their shared foundation on multidimensional convolutions, but the Wiener series achieves orthogonality by applying a Gram-Schmidt-like procedure to the Volterra operators, transforming homogeneous polynomials into non-homogeneous ones that span the same function space. Wiener kernels are derived from Volterra kernels using the Lee-Schetzen cross-correlation method, which computes the nth-order leading kernel via cross-correlation: k^{(n)}(\sigma_1, \dots, \sigma_n) = \frac{1}{n! A^n} E\left[ y(t) \prod_{i=1}^n x(t - \sigma_i) \right], where A is the input variance and the expectation is over Gaussian inputs, enabling direct estimation without interference from lower orders. This derivation holds specifically for zero-mean Gaussian inputs, under which the Wiener series converges in the mean-square sense for square-integrable outputs. A key advantage of the Wiener series is the diagonality of its kernel matrix due to orthogonality, which simplifies parameter estimation through independent cross-correlation or least-squares methods, making it particularly suitable for nonlinear filtering in systems with stochastic inputs. Unlike the Volterra series, which may require solving coupled equations for kernel estimation, the orthogonal approach allows sequential estimation starting from the lowest order, reducing computational burden in applications like black-box system modeling. Conversion between Volterra and Wiener representations is facilitated by algebraic algorithms: to obtain Wiener kernels from Volterra kernels, apply recursive orthogonalization to project out lower-order terms; conversely, Volterra kernels can be recovered by summing the appropriate combinations of Wiener G-functionals of equal total degree, often using reproducing kernel Hilbert space formulations for efficient computation. These transformations preserve the system's input-output mapping while adapting the basis to the input statistics, enhancing applicability in diverse nonlinear scenarios.
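
The first few G-functionals make the orthogonalization explicit (standard forms for zero-mean Gaussian white noise of power level A, following Wiener and Schetzen):

```latex
G_0[x(t)] = k^{(0)},
\qquad
G_1[x(t)] = \int_{0}^{\infty} k^{(1)}(\sigma_1)\, x(t-\sigma_1)\, d\sigma_1,
\qquad
G_2[x(t)] = \int_{0}^{\infty}\!\!\int_{0}^{\infty} k^{(2)}(\sigma_1,\sigma_2)\,
            x(t-\sigma_1)\, x(t-\sigma_2)\, d\sigma_1\, d\sigma_2
          \;-\; A \int_{0}^{\infty} k^{(2)}(\sigma,\sigma)\, d\sigma .
```

The constant subtracted in G_2 is exactly the mean of the quadratic term under white noise of power A, which forces E[G_2] = 0 and hence orthogonality to G_0.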

Higher-order spectral analysis

Higher-order spectral analysis extends the Volterra series framework into the frequency domain, where polyspectra serve as multidimensional Fourier transforms of higher-order cumulants, capturing nonlinear dependencies beyond second-order statistics. These polyspectra generalize the power spectrum to reveal phase relations and interactions among multiple frequency components in nonlinear signals. In the Volterra series framework, the frequency-domain kernels H_n(\omega_1, \dots, \omega_n) are obtained as the multidimensional Fourier transforms of the corresponding time-domain Volterra kernels h_n(\tau_1, \dots, \tau_n), providing a direct link between time-domain nonlinearity and spectral characteristics. A key relation arises in third-order analysis, where the bispectrum B(\omega_1, \omega_2), defined as the two-dimensional Fourier transform of the third-order cumulant, connects to the third-order Volterra kernel for detecting nonlinearities: B(\omega_1, \omega_2) = H_3(\omega_1, \omega_2, -\omega_1 - \omega_2). This expression, derived under the assumption of a convergent Volterra series, allows the bispectrum to isolate contributions from cubic terms in the response, facilitating the detection of frequency-specific nonlinear distortions. In applications, higher-order spectral analysis via Volterra kernels detects phase coupling in nonlinear signals, such as quadratic interactions where distinct frequencies generate sum or difference harmonics, which the bispectrum quantifies through bicoherence measures. Cumulant-based identification further exploits these polyspectra to estimate Volterra kernels from measured input-output data, proving robust for systems exhibiting non-minimum phase behavior or under non-Gaussian inputs. These methods offer distinct advantages over conventional power spectral analysis, as polyspectra inherently suppress symmetric Gaussian noise—since higher-order cumulants vanish for Gaussian processes—while exposing underlying system nonlinearities and preserving true phase information that power spectra obscure. This noise immunity and sensitivity to non-Gaussianity make higher-order spectral tools particularly valuable for analyzing complex, real-world nonlinear dynamics.
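
A direct, segment-averaged bispectrum estimate can be sketched as follows (illustrative; a quadratically phase-coupled test signal produces a bispectral peak at the coupled frequency pair, and all parameters are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(9)
fs, Nseg, nseg = 256, 256, 200
f1, f2 = 32.0, 48.0                 # coupled pair; f1 + f2 is also present
t = np.arange(Nseg) / fs

B = np.zeros((Nseg, Nseg), dtype=complex)
for _ in range(nseg):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    s = (np.cos(2 * np.pi * f1 * t + p1) + np.cos(2 * np.pi * f2 * t + p2)
         + 0.5 * np.cos(2 * np.pi * (f1 + f2) * t + p1 + p2)  # phase-coupled
         + 0.5 * rng.standard_normal(Nseg))
    X = np.fft.fft(s - s.mean())
    # Direct estimator: B(w1, w2) = E[ X(w1) X(w2) X*(w1 + w2) ].
    idx = (np.arange(Nseg)[:, None] + np.arange(Nseg)[None, :]) % Nseg
    B += X[:, None] * X[None, :] * np.conj(X[idx])
B /= nseg

k1, k2 = int(f1 * Nseg / fs), int(f2 * Nseg / fs)
print("bispectrum magnitude at coupled pair:", abs(B[k1, k2]))
print("median background level:", np.median(abs(B)))
```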
