
Linear time-invariant system

A linear time-invariant (LTI) system is a mathematical model of a system that satisfies both linearity—meaning its response to a sum of inputs is the sum of the individual responses, and scaling an input by a constant scales the output by the same constant—and time-invariance, where a time shift in the input produces an identical time shift in the output without altering the system's behavior. These properties make LTI systems a foundational concept in fields such as signal processing, control theory, and circuit analysis, as many physical processes, including mechanical vibrations and electrical circuits, can be approximated by LTI models under small-signal conditions. The linearity property, rooted in the principle of superposition, allows LTI systems to be analyzed using techniques like convolution, where the output y(t) is given by the convolution integral y(t) = \int_{-\infty}^{\infty} u(\tau) h(t - \tau) \, d\tau, with h(t) as the system's impulse response. Time-invariance ensures that the impulse response h(t) remains fixed, enabling efficient frequency-domain representations via the Fourier transform or Laplace transform, where complex exponentials act as eigenfunctions and the system's frequency response H(\omega) scales the input amplitude and phase without changing the frequency. In state-space form, continuous-time LTI systems are described by differential equations \dot{x}(t) = A x(t) + B u(t) and y(t) = C x(t) + D u(t), with constant matrices A, B, C, D, facilitating analysis of stability, controllability, and observability. LTI systems are widely applied in electrical engineering to model and design filters, amplifiers, and controllers, such as in audio processing where low-pass filters remove high-frequency noise while preserving signal integrity, or in control engineering for predicting electromechanical responses. For instance, a spring-mass-damper system follows LTI dynamics m \ddot{x} + c \dot{x} + k x = f(t), allowing prediction of vibrations in structures or vehicles. Their computational tractability supports discrete-time implementations in digital signal processing, underpinning technologies like image enhancement and communication systems.

Overview

Definition

A linear time-invariant (LTI) system is a system that satisfies both the linearity (superposition) and time-invariance properties, enabling a complete characterization of its input-output behavior through simple mathematical operations. Linearity is defined by the superposition principle, which encompasses additivity and homogeneity: additivity requires that the system's response to the sum of two inputs equals the sum of the individual responses, while homogeneity stipulates that scaling an input by a constant factor scales the output by the same factor. These properties assume the system maps input functions to output functions without additional constraints like initial conditions affecting the mapping in a non-superposable way. Time-invariance complements linearity by ensuring the system's behavior does not depend on absolute time: shifting the input in time shifts the output by the same amount, leaving its shape unchanged. Formally, for an input x(t) producing output y(t), a shifted input x(t - \tau) must produce y(t - \tau) for any delay \tau. This holds for both continuous- and discrete-time systems, presupposing familiarity with signal representations as functions of time. The general input-output relationship for any LTI system is given by convolution of the input with the system's impulse response h(t), which fully specifies the system's behavior under these properties. Unlike nonlinear systems, where superposition fails and impulse responses lack predictive utility, or time-varying systems, whose parameters change with time and preclude fixed impulse-response or transfer-function forms, LTI systems allow predictable, time-independent transformations.

Historical context and significance

The mathematical foundations of linear time-invariant (LTI) system theory were established in the late 18th and early 19th centuries by Pierre-Simon Laplace, who developed the Laplace transform around 1809 for solving linear differential equations, and by Joseph Fourier, who introduced the Fourier series in 1822, enabling the decomposition of signals into frequency components crucial for LTI analysis. In the late 19th century, British engineer Oliver Heaviside developed operational calculus as a practical method for solving linear differential equations with constant coefficients, particularly in electromagnetic theory and electrical transmission problems. This approach treated differentiation as an algebraic operator, enabling efficient analysis of systems that exhibit linear responses independent of time shifts, laying early groundwork for modeling dynamic behaviors in physical systems. In the 1940s, American mathematician Norbert Wiener advanced the field through his work on cybernetics, introducing concepts of feedback and statistical signal processing that highlighted the role of linear systems in control and communication processes. Wiener's developments during World War II, including optimal filtering techniques to minimize prediction errors in noisy environments, demonstrated the power of linear models for handling stochastic inputs, influencing control theory and signal processing. His 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine formalized these ideas, bridging biological and engineered systems via linear approximations. The formalization of LTI theory in signal processing occurred post-1950s, as digital computation enabled broader applications, with seminal texts like Alan V. Oppenheim and Ronald W. Schafer's Digital Signal Processing (1975) synthesizing convolution-based representations and frequency-domain methods for analysis and design. This era solidified LTI systems as essential building blocks for modeling physical phenomena, such as electrical circuits through transfer functions, acoustic propagation in rooms, and mechanical vibrations in structures like helicopter rotors. These models facilitate frequency-domain analysis, including Fourier and Laplace transforms, crucial for filter design and system stability assessment. LTI approximations prove effective for many real-world systems because nonlinear behaviors can be linearized around an operating point using small-signal analysis, where perturbations are minor enough to neglect higher-order terms, yielding time-invariant responses. Additionally, when system parameters vary slowly compared to signal dynamics, time-invariance holds as a valid assumption, simplifying complex phenomena into tractable forms. In modern contexts, LTI principles underpin digital filters for applications like audio equalization and communication systems, while inspiring machine-learning architectures such as convolutional neural networks, where convolution operations mimic LTI filtering to extract spatial features from data.

Fundamental properties

Linearity

In linear time-invariant (LTI) systems, the linearity property enables the application of the superposition principle, whereby the system's response to a weighted sum of inputs equals the sum of the responses to each input individually, and scaling an input by a constant factor scales the corresponding output by the same factor. This principle underpins the analysis of complex signals by breaking them down into simpler components whose responses can be computed separately and then combined. Formally, linearity is characterized by two axioms: additivity and homogeneity, expressed using the system operator H, where the output y(t) = H[x(t)] for an input signal x(t). Additivity states that if y_1(t) = H[x_1(t)] and y_2(t) = H[x_2(t)], then H[x_1(t) + x_2(t)] = y_1(t) + y_2(t). Homogeneity requires that for any scalar a, H[a x(t)] = a H[x(t)]. These axioms together imply the full superposition property: for scalars a and b, and inputs x_1(t) and x_2(t), \begin{align} H[a x_1(t) + b x_2(t)] &= H[a x_1(t)] + H[b x_2(t)] \quad \text{(by additivity)} \\ &= a H[x_1(t)] + b H[x_2(t)] \quad \text{(by homogeneity)}. \end{align} This derivation shows how additivity and homogeneity combine to yield superposition. A key implication of linearity is that a zero input produces a zero output: setting a = 0 in the homogeneity axiom gives H[0 \cdot x(t)] = 0 \cdot H[x(t)], or H[0] = 0. Furthermore, linearity facilitates the decomposition of arbitrary input signals into sums of basis functions or components, allowing the overall response to be obtained as the corresponding sum of individual responses, which simplifies computational and analytical approaches in signal processing. From a mathematical perspective, the space of input signals (and outputs) can be viewed as a vector space over the real numbers, with addition and scalar multiplication defined pointwise; under this interpretation, the system operator H acts as a linear transformation that preserves vector addition and scalar multiplication. This framework aligns LTI systems with linear algebra, enabling techniques such as basis expansions for system representation. Linearity is complemented by the time-invariance property, which ensures consistent behavior under time shifts.
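The superposition property can be checked numerically for any candidate LTI operation. The short sketch below (Python with NumPy; the kernel, signals, and scalars are arbitrary illustrations, not taken from the text) applies a convolution operator H to a weighted sum of two inputs and compares the result with the same weighted sum of the individual responses.

```python
import numpy as np

# Hedged sketch: numerically verify superposition for an LTI operation H,
# here convolution with an arbitrary illustrative kernel.
rng = np.random.default_rng(0)
kernel = np.array([0.5, 0.3, 0.2])
H = lambda x: np.convolve(x, kernel)      # a simple LTI operator

x1, x2 = rng.standard_normal(8), rng.standard_normal(8)
a, b = 2.0, -0.7

lhs = H(a * x1 + b * x2)                  # response to the combined input
rhs = a * H(x1) + b * H(x2)               # combination of individual responses
print(np.allclose(lhs, rhs))              # True: superposition holds
```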

Time-invariance

A time-invariant system is characterized by the property that any time shift applied to the input signal produces an identical time shift in the output signal. Formally, if the response of the system \mathcal{H} to an input x(t) is the output y(t) = \mathcal{H}\{x(t)\}, then the response to a shifted input x(t - \tau) is y(t - \tau) for any time shift \tau. This property ensures that the system's behavior does not change over time, independent of when the input is applied. The time-invariance property commutes with linearity, meaning that the system's response to time-shifted linear combinations of inputs equals the linear combination of the time-shifted responses. Specifically, if the system satisfies linearity—where \mathcal{H}\{\alpha x_1(t) + \beta x_2(t)\} = \alpha \mathcal{H}\{x_1(t)\} + \beta \mathcal{H}\{x_2(t)\} for scalars \alpha, \beta—then shifting the inputs preserves this superposition: \mathcal{H}\{\alpha x_1(t - \tau) + \beta x_2(t - \tau)\} = \alpha y_1(t - \tau) + \beta y_2(t - \tau). This compatibility underpins the definition of linear time-invariant (LTI) systems, allowing the two properties to be analyzed independently yet applied jointly. In LTI systems, outputs are determined solely by the relative time differences between inputs and outputs, exhibiting no dependence on absolute time or the specific origin of the time axis. This translation invariance implies that the system's operation is consistent regardless of the temporal reference frame, facilitating predictable behavior across different time scales. Unlike LTI systems, time-varying systems incorporate explicit dependence on absolute time, altering their response based on when the input occurs. For instance, the system defined by y(t) = t x(t) is linear, as it satisfies additivity and homogeneity, but it is not time-invariant: applying a shift \tau to the input yields the response t x(t - \tau), which differs from the shifted output y(t - \tau) = (t - \tau) x(t - \tau). Such systems arise in applications like time-dependent amplifiers or seasonally varying filters, where parameters evolve with time.
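As an illustration of the counterexample above, the following sketch (Python with NumPy; the time grid, test signal, and shift are arbitrary choices) compares the response of y(t) = t x(t) to a delayed input with the delayed version of its original output; for a time-invariant system the two would coincide.

```python
import numpy as np

def delay(sig, k):
    """Delay a sequence by k samples, padding the front with zeros."""
    out = np.zeros_like(sig)
    out[k:] = sig[:-k]
    return out

t = np.arange(0, 10, dtype=float)
x = np.sin(0.5 * t)
tau = 3

response_to_shifted_input = t * delay(x, tau)   # system applied to x(t - tau)
shifted_output = delay(t * x, tau)              # y(t - tau)
print(np.allclose(response_to_shifted_input, shifted_output))  # False: not time-invariant
```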

Continuous-time LTI systems

Impulse response and convolution

The unit impulse in continuous time is the Dirac delta function \delta(t), defined such that \int_{-\infty}^{\infty} \delta(t) \, dt = 1 and \delta(t) = 0 for t \neq 0. This function serves as a fundamental input for characterizing linear time-invariant (LTI) systems. The impulse response h(t) of a continuous-time LTI system is the output produced when the input is \delta(t). It completely determines the system's behavior for any input, leveraging the linearity and time-invariance properties. Any arbitrary continuous-time input signal x(t) can be expressed as an integral of shifted and scaled unit impulses: x(t) = \int_{-\infty}^{\infty} x(\tau) \delta(t - \tau) \, d\tau. Due to the linearity property, which allows superposition of responses, the output y(t) to this input is the integral of the responses to each individual term x(\tau) \delta(t - \tau). Time-invariance ensures that the response to the shifted impulse \delta(t - \tau) is simply the shifted impulse response h(t - \tau). Therefore, the overall output is y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau, or equivalently, y(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau. This expression, known as the convolution integral, provides the time-domain representation of the system's response and fully characterizes continuous-time LTI systems. The support of the impulse response h(t) refers to the set of t where h(t) \neq 0. Systems with impulse responses of finite duration are idealizations, while most physical systems have responses extending indefinitely, though decaying. A continuous-time LTI system is causal if its impulse response satisfies h(t) = 0 for all t < 0, meaning the output at time t depends only on current and past inputs. From a computational perspective, the convolution integral can be evaluated numerically using methods like direct integration or fast Fourier transform (FFT) approximations, though analytical solutions are preferred for design. This continuous convolution integral is the foundation for analyzing systems like analog filters and control loops in the time domain.
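In practice the convolution integral is often approximated by a Riemann sum on a fine time grid. The sketch below (Python with NumPy; the RC time constant, step size, and sinusoidal input are illustrative choices) approximates the response of an RC low-pass filter, whose impulse response h(t) = (1/RC) e^{-t/RC} u(t) appears again in the examples later in this article.

```python
import numpy as np

# Hedged sketch: approximate y(t) = integral of x(tau) h(t - tau) d tau by a
# Riemann sum. np.convolve evaluates the discrete sum; multiplying by dt
# scales it so that it approximates the continuous integral.
RC = 0.1          # illustrative time constant (seconds)
dt = 1e-3         # integration step
t = np.arange(0.0, 1.0, dt)

x = np.sin(2 * np.pi * 5 * t)          # example input signal
h = (1.0 / RC) * np.exp(-t / RC)       # impulse response sampled for t >= 0

y = np.convolve(x, h)[: t.size] * dt   # y(t) ~ sum_k x(k*dt) h(t - k*dt) dt
```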

Eigenfunctions and response to exponentials

In linear time-invariant (LTI) systems, an eigenfunction is a signal that, when input to the system, produces an output that is a scalar multiple of the input signal itself. For continuous-time LTI systems, complex exponential signals of the form e^{st}, where s is a complex number, serve as such eigenfunctions. Specifically, if the input is x(t) = e^{st}, the output is y(t) = H(s) e^{st}, where H(s) is a complex-valued scalar known as the eigenvalue, which depends on s and characterizes the system's response at that complex frequency. To demonstrate this property, consider the output of a continuous-time LTI system given by the convolution integral y(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau, where h(t) is the impulse response. Substituting the exponential input x(t) = e^{st} yields y(t) = \int_{-\infty}^{\infty} h(\tau) e^{s(t - \tau)} \, d\tau = e^{st} \int_{-\infty}^{\infty} h(\tau) e^{-s\tau} \, d\tau, assuming the integral converges. The integral defines H(s) = \int_{-\infty}^{\infty} h(\tau) e^{-s\tau} \, d\tau, so y(t) = H(s) e^{st}, confirming that e^{st} is an eigenfunction with eigenvalue H(s). This holds for s = \sigma + j\omega, where \sigma and \omega are real, provided s lies in the region of convergence of the integral (for a causal, stable system this region includes all s with \sigma \geq 0), linking directly to the s-plane in Laplace analysis. This eigenfunction property is particularly useful because arbitrary input signals can be decomposed as integrals (or sums) of complex exponentials, allowing the system's output to be expressed as the corresponding integral of scaled exponentials. This decomposition underpins frequency-domain techniques, such as the Fourier and Laplace transforms, for analyzing LTI system responses.
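This eigenfunction relation can be checked numerically for a concrete impulse response. The sketch below (Python with NumPy; the RC filter, the value of s, and the evaluation time are illustrative choices) approximates \int h(\tau) e^{s(t-\tau)} \, d\tau by a Riemann sum and compares it with H(s) e^{st}, where H(s) = 1/(RCs + 1) for this system.

```python
import numpy as np

# Hedged sketch: verify that e^{st} is an eigenfunction of an RC low-pass
# filter by approximating the convolution integral with a Riemann sum.
RC, dt = 0.1, 1e-4
tau = np.arange(0.0, 2.0, dt)                # truncated support of h
h = (1.0 / RC) * np.exp(-tau / RC)           # impulse response samples

s = -2.0 + 10.0j                             # inside the region of convergence
t = 1.0                                      # evaluate the output at one instant

y_t = np.sum(h * np.exp(s * (t - tau))) * dt # approximates the convolution at t
H_s = 1.0 / (RC * s + 1.0)                   # analytical eigenvalue
print(np.isclose(y_t, H_s * np.exp(s * t), rtol=1e-2))  # True (up to quadrature error)
```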

Frequency and Laplace domain analysis

The frequency response of a continuous-time linear time-invariant (LTI) system characterizes its steady-state behavior to sinusoidal inputs and is obtained via the Fourier transform. For an input signal x(t) with Fourier transform X(j\omega), the output y(t) has Fourier transform Y(j\omega) = H(j\omega) X(j\omega), where H(j\omega) is the frequency response, defined as the Fourier transform of the impulse response h(t): H(j\omega) = \int_{-\infty}^{\infty} h(t) e^{-j\omega t} \, dt. This multiplicative property in the frequency domain simplifies analysis of system effects on signal spectra, such as amplification or phase shift at different frequencies \omega. The Laplace transform extends this analysis to a broader class of signals, including those that grow exponentially or lack Fourier transforms, by introducing the complex variable s = \sigma + j\omega. The transfer function H(s) is the Laplace transform of h(t): H(s) = \int_{-\infty}^{\infty} h(t) e^{-st} \, dt, and the output transform is Y(s) = H(s) X(s), assuming appropriate regions of convergence (ROCs). The bilateral Laplace transform applies to signals defined over all t, while the unilateral form, \mathcal{L}\{x(t)\} = \int_{0}^{\infty} x(t) e^{-st} \, dt, is used for causal systems with zero initial conditions, facilitating solutions to the differential equations describing LTI dynamics. The region of convergence (ROC), a vertical strip in the s-plane where the integral converges, determines the transform's validity and relates to system stability; for stable causal systems, it includes the imaginary axis \sigma = 0. The inverse Laplace transform recovers the time-domain impulse response via the Bromwich integral: h(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} H(s) e^{st} \, ds, where the integration contour lies in the ROC. For practical computation, especially with rational transfer functions H(s) = \frac{P(s)}{Q(s)} (polynomials P and Q with \deg Q \geq \deg P), partial fraction expansion decomposes H(s) into simpler terms: H(s) = \sum_{k} \frac{A_k}{s - p_k} + \sum_{m} \frac{B_m s + C_m}{s^2 + \alpha_m s + \beta_m}, for distinct poles p_k and quadratic factors, allowing inversion term-by-term using standard tables. This method is essential for finding closed-form expressions for h(t) in physical systems like RLC circuits. Pole-zero diagrams visualize rational H(s) by plotting zeros (roots of P(s)) as open circles and poles (roots of Q(s)) as crosses in the s-plane. The diagram reveals key behaviors: proximity of poles to the imaginary axis indicates lightly damped oscillation or instability risks, while zero locations shape the frequency response, for example by blocking transmission at particular frequencies. For example, in a second-order system H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, poles at -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} determine the oscillation frequency and decay rate of the response. These plots, combined with partial fractions, enable efficient system design and response prediction without full time-domain simulation.
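Partial-fraction expansion of a rational H(s) is routine to automate. The sketch below (Python with SciPy; the natural frequency and damping ratio are illustrative values) expands the second-order transfer function quoted above into first-order terms whose inverse transforms are damped exponentials.

```python
import numpy as np
from scipy.signal import residue

# Hedged sketch: partial-fraction expansion of
# H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2) using scipy.signal.residue,
# which returns residues r_k and poles p_k with H(s) = sum_k r_k / (s - p_k).
wn, zeta = 2 * np.pi, 0.4                    # illustrative parameters
num = [wn**2]
den = [1.0, 2.0 * zeta * wn, wn**2]

r, p, k = residue(num, den)
print("residues:", r)                        # complex-conjugate pair
print("poles:   ", p)                        # -zeta*wn +/- j*wn*sqrt(1 - zeta^2)
# Each term r_k/(s - p_k) inverts to r_k e^{p_k t} u(t); the conjugate pair
# combines into the damped sinusoid that makes up h(t).
```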

Examples

A prominent example of a continuous-time LTI system is the RC low-pass filter, consisting of a resistor R and capacitor C in series, with the output taken across the capacitor. The differential equation is RC \dot{y}(t) + y(t) = x(t), and the transfer function is H(s) = \frac{1}{RC s + 1}, with impulse response h(t) = \frac{1}{RC} e^{-t/RC} u(t), where u(t) is the unit step function. This system attenuates high frequencies with cutoff \omega_c = 1/RC. Another example is the ideal integrator, where y(t) = \int_{-\infty}^t x(\tau) \, d\tau, governed by \dot{y}(t) = x(t), with transfer function H(s) = 1/s and impulse response h(t) = u(t). Integrators accumulate signals and are used in control systems but require compensation for stability. The spring-mass-damper system models mechanical vibrations with equation m \ddot{y}(t) + c \dot{y}(t) + k y(t) = x(t), where m is mass, c damping, k stiffness, and x(t) the applied force. The transfer function is H(s) = \frac{1}{m s^2 + c s + k}, and for underdamped cases (c^2 < 4mk), the impulse response involves damped sinusoids, illustrating oscillatory behavior in structures or vehicles. The ideal differentiator, y(t) = \dot{x}(t), has H(s) = s and impulse response h(t) = \dot{\delta}(t), the derivative of the delta function; ideal differentiators amplify high-frequency noise and are therefore only approximated in practice. These examples demonstrate how LTI models apply to electrical and mechanical systems.
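The RC example can be reproduced numerically with a standard LTI toolbox. The sketch below (Python with SciPy; the resistor and capacitor values are illustrative) builds H(s) = 1/(RCs + 1) and compares SciPy's computed impulse response with the analytical expression h(t) = (1/RC) e^{-t/RC}.

```python
import numpy as np
from scipy.signal import lti, impulse

# Hedged sketch: RC low-pass filter as an LTI object, compared against the
# closed-form impulse response. R and C are illustrative component values.
R, C = 1e3, 1e-6                     # 1 kOhm and 1 uF, so RC = 1 ms
sys = lti([1.0], [R * C, 1.0])       # H(s) = 1 / (RC s + 1)

t = np.linspace(0.0, 5 * R * C, 500)
t, h_num = impulse(sys, T=t)
h_analytic = (1.0 / (R * C)) * np.exp(-t / (R * C))
print(np.allclose(h_num, h_analytic, rtol=1e-2, atol=1.0))  # True
```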

Causality and stability

In continuous-time linear time-invariant (LTI) systems, causality refers to the property that the output at any time t depends only on the input values at the current time and past times, not on future inputs. For such systems, this condition is equivalent to the impulse response h(t) being zero for all negative time, i.e., h(t) = 0 for t < 0. This right-sided nature of the impulse response ensures that the system's response to an input signal x(t) up to time t is fully determined by the convolution integral y(t) = \int_{0}^{\infty} h(\tau) x(t - \tau) \, d\tau, without requiring knowledge of future input values. Stability in continuous-time LTI systems is typically analyzed through bounded-input bounded-output (BIBO) stability and asymptotic stability. A system is BIBO stable if every bounded input signal produces a bounded output signal, which holds if and only if the impulse response is absolutely integrable: \int_{-\infty}^{\infty} |h(t)| \, dt < \infty. This condition guarantees that the convolution integral does not amplify bounded inputs indefinitely, as the total contribution from the impulse response remains finite. For rational transfer functions H(s), asymptotic stability—where the system's zero-input response decays to zero as t \to \infty—requires all poles to lie strictly in the left half of the s-plane, meaning the real parts of all poles satisfy \operatorname{Re}(s) < 0. Asymptotic stability implies BIBO stability for causal LTI systems, but the converse does not always hold, for example when an unstable mode is canceled by a zero and is therefore hidden from the input-output behavior. To assess asymptotic stability without explicitly finding the roots of the characteristic polynomial, the Routh-Hurwitz criterion provides an algebraic procedure. For a polynomial D(s) = \sum_{k=0}^{N} a_k s^k with a_N > 0, the criterion constructs a Routh array and checks that all elements in the first column are positive, ensuring no roots with positive real parts. This method is essential for higher-order systems in control design.
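The Routh array itself is straightforward to construct programmatically. The sketch below (Python with NumPy; the helper functions are hypothetical and assume no zero arises in the first column, so the special cases of the criterion are not handled) builds the array from the coefficients [a_N, ..., a_0] and checks the first-column sign condition.

```python
import numpy as np

def routh_array(coeffs):
    """Routh array for a polynomial given as [a_N, ..., a_0].
    Assumes no zero appears in the first column (special cases not handled)."""
    coeffs = np.asarray(coeffs, dtype=float)
    n = coeffs.size
    cols = (n + 1) // 2
    table = np.zeros((n, cols))
    table[0, : coeffs[0::2].size] = coeffs[0::2]   # a_N, a_{N-2}, ...
    table[1, : coeffs[1::2].size] = coeffs[1::2]   # a_{N-1}, a_{N-3}, ...
    for i in range(2, n):
        for j in range(cols - 1):
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
    return table

def is_hurwitz_stable(coeffs):
    """All first-column entries positive -> no roots in the right half-plane."""
    return bool(np.all(routh_array(coeffs)[:, 0] > 0))

print(is_hurwitz_stable([1, 2, 3, 4]))   # True:  s^3 + 2s^2 + 3s + 4 is stable
print(is_hurwitz_stable([1, 1, 2, 8]))   # False: s^3 + s^2 + 2s + 8 has RHP roots
```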

Discrete-time LTI systems

Derivation from continuous-time systems

Discrete-time linear time-invariant (LTI) systems are typically derived from continuous-time LTI systems through the process of sampling, which converts analog signals into discrete sequences while aiming to preserve the underlying linear and time-invariant properties. The sampling operation defines the discrete-time input signal x[n] = x_c(nT), where x_c(t) is the continuous-time signal and T is the fixed sampling period. Similarly, the discrete-time unit impulse \delta[n] plays the role that the continuous-time Dirac delta function \delta_c(t) plays in the analog domain, with its sifting property carried over to sums rather than integrals. This derivation assumes ideal uniform sampling, but practical considerations require adherence to the Nyquist-Shannon sampling theorem to ensure faithful representation. The Nyquist-Shannon sampling theorem states that a continuous-time bandlimited signal with maximum frequency f_{\max} can be perfectly reconstructed from its samples if the sampling frequency f_s = 1/T satisfies f_s \geq 2 f_{\max}, known as the Nyquist rate. Failure to meet this criterion leads to aliasing, where higher-frequency components fold into the lower-frequency band, potentially distorting the system's frequency response and compromising the preservation of LTI characteristics in the discrete domain. The Nyquist criterion, foundational to this theorem, specifies the minimum sampling rate needed to avoid such spectral overlap. Several discretization methods transform the continuous-time impulse response h_c(t) or transfer function H(s) into their discrete counterparts while maintaining LTI properties. The impulse invariance method sets the discrete impulse response as h[n] = T h_c(nT) for n \geq 0, ensuring that the discrete impulse response matches scaled samples of the continuous one at the sampling instants. This approach is particularly useful for IIR filter design but can introduce aliasing if h_c(t) is not bandlimited. The bilinear transform provides an alternative by conformally mapping the s-plane to the z-plane via the substitution s = \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}, yielding H(z) directly from H(s). This method preserves stability for stable continuous systems and avoids aliasing through nonlinear frequency warping, though it compresses the frequency axis, often requiring prewarping at critical frequencies. Introduced for analyzing sampled systems in terms of the z-transform, it is a standard technique in digital control and filter design. In control systems, the zero-order hold (ZOH) approximation models the hold operation in digital-to-analog conversion, where the control signal is held constant between samples. The ZOH equivalent discrete-time model exactly reproduces the continuous-time output for piecewise-constant inputs, with the discrete transfer function derived as H(z) = (1 - z^{-1}) \mathcal{Z} \left\{ \frac{H(s)}{s} \right\}, where \mathcal{Z} denotes the z-transform. This method is essential for sampled-data control, ensuring accurate simulation of physical plants interfaced with digital controllers.
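Standard signal-processing libraries implement these discretization mappings directly. The sketch below (Python with SciPy; the time constant and sampling period are illustrative) discretizes the RC low-pass H(s) = 1/(RCs + 1) using both the zero-order-hold and bilinear methods via scipy.signal.cont2discrete.

```python
from scipy.signal import cont2discrete

# Hedged sketch: discretize H(s) = 1/(RC s + 1) with sample period T.
RC, T = 1e-3, 1e-4                  # illustrative time constant and sample period
num, den = [1.0], [RC, 1.0]

num_zoh, den_zoh, _ = cont2discrete((num, den), T, method="zoh")
num_bil, den_bil, _ = cont2discrete((num, den), T, method="bilinear")

# For this first-order system the ZOH result should match the closed form
# H(z) = (1 - e^{-T/RC}) / (z - e^{-T/RC}).
print("ZOH:     ", num_zoh, den_zoh)
print("Bilinear:", num_bil, den_bil)
```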

Impulse response and convolution

The unit impulse sequence in discrete time, denoted \delta[n], is defined as \delta[n] = 1 for n = 0 and \delta[n] = 0 otherwise. This sequence serves as a fundamental input for characterizing linear time-invariant (LTI) systems. The impulse response h[n] of a discrete-time LTI system is the output sequence produced when the input is \delta[n]. It completely determines the system's behavior for any input, leveraging the linearity and time-invariance properties. Any arbitrary discrete-time input signal x[n] can be expressed as a linear combination of shifted and scaled unit impulses: x[n] = \sum_{k=-\infty}^{\infty} x[k] \delta[n - k]. Due to the linearity property, which allows superposition of responses, the output y[n] to this input is the sum of the responses to each individual term x[k] \delta[n - k]. Time-invariance ensures that the response to the shifted impulse \delta[n - k] is simply the shifted impulse response h[n - k]. Therefore, the overall output is y[n] = \sum_{k=-\infty}^{\infty} x[k] h[n - k], or equivalently, y[n] = \sum_{k=-\infty}^{\infty} h[k] x[n - k]. This expression, known as the convolution sum, provides the time-domain representation of the system's response and fully characterizes discrete-time LTI systems. The support of the impulse response h[n] refers to the set of n where h[n] \neq 0. Systems with finite support are classified as finite impulse response (FIR) systems, where h[n] is nonzero only over a finite range of n. In contrast, infinite impulse response (IIR) systems have impulse responses with infinite support, extending indefinitely in at least one direction. A discrete-time LTI system is causal if its impulse response satisfies h[n] = 0 for all n < 0, meaning the output at time n depends only on current and past inputs. From a computational perspective, FIR systems are implemented via direct evaluation of the finite sum, requiring a fixed number of multiplications and additions per output sample. IIR systems, however, typically use recursive difference equations, enabling efficient computation despite the infinite support, though they may introduce feedback that affects stability. This discrete convolution sum is analogous to the convolution integral used for continuous-time LTI systems.
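The convolution sum translates almost literally into code. The sketch below (Python with NumPy; the helper function and the short example sequences are illustrative) evaluates y[n] = \sum_k x[k] h[n-k] directly for finite-length sequences and matches the built-in np.convolve.

```python
import numpy as np

def convolve_sum(x, h):
    """Direct evaluation of y[n] = sum_k x[k] h[n-k] for finite sequences."""
    N = len(x) + len(h) - 1
    y = np.zeros(N)
    for n in range(N):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 0.5])
h = np.array([1.0, -1.0])                      # first-difference impulse response
print(convolve_sum(x, h))                      # [ 1.   1.  -1.5 -0.5]
print(np.allclose(convolve_sum(x, h), np.convolve(x, h)))  # True
```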

Eigenfunctions and z-domain analysis

In discrete-time linear time-invariant (LTI) systems, sequences of the form x[n] = z^n, where z is a complex constant, serve as eigenfunctions. The output corresponding to such an input is y[n] = H(z) z^n, with H(z) denoting the system's transfer function evaluated at z. This property arises because the convolution operation defining the system's response multiplies the input by a scalar factor H(z) when the input is an eigenfunction. The z-transform provides a frequency-domain representation for discrete-time signals and systems, enabling analysis of LTI systems through algebraic manipulation. For an input signal x[n] and output y[n], their z-transforms satisfy Y(z) = H(z) X(z), where H(z) is the transfer function, the z-transform of the impulse response. The unilateral z-transform of a sequence y[n] is defined as Y(z) = \sum_{n=0}^{\infty} y[n] z^{-n}, with the bilateral form extending the sum from n = -\infty to \infty; convergence occurs within a region of convergence (ROC) in the complex z-plane, which depends on the signal's properties such as causality or duration. The discrete-time Fourier transform (DTFT) emerges as a special case of the z-transform evaluated on the unit circle in the z-plane, where |z| = 1 or z = e^{j\omega}. Thus, the frequency response of the system is H(e^{j\omega}), which characterizes the system's steady-state response to sinusoidal inputs and relates directly to the magnitude and phase alterations at frequency \omega. This connection allows the z-transform to generalize the DTFT to a broader class of signals than those convergent on the unit circle. The transfer function H(z) is typically a rational function expressed as H(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{k=1}^{N} a_k z^{-k}}, with zeros at the roots of the numerator and poles at the roots of the denominator. These poles and zeros are plotted in the complex z-plane, providing geometric insight into system behavior; for instance, zeros attenuate specific frequency components, while poles amplify them. The ROC must exclude poles to ensure convergence. To recover the time-domain sequence from its z-transform, the inverse z-transform employs a contour integral over a closed path C in the ROC: y[n] = \frac{1}{2\pi j} \oint_C Y(z) z^{n-1} \, dz, where the integral is taken counterclockwise. This formulation leverages complex analysis, in particular the residue theorem, to compute the inverse for rational functions by summing residues at poles inside C.
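Evaluating a rational H(z) on the unit circle gives its frequency response, which is what scipy.signal.freqz computes. The sketch below (Python with SciPy; the filter coefficients are illustrative, not taken from the text) evaluates H(e^{j\omega}) for a simple two-pole IIR filter and checks its poles and DC gain.

```python
import numpy as np
from scipy.signal import freqz

# Hedged sketch: H(z) = 0.2 / (1 - 1.2 z^{-1} + 0.45 z^{-2}) evaluated on the
# unit circle z = e^{j*omega}; the coefficients are illustrative.
b = [0.2]
a = [1.0, -1.2, 0.45]

w, H = freqz(b, a, worN=512)        # H(e^{j*omega}) on a grid of omega in [0, pi)
print("poles:", np.roots(a))        # 0.6 +/- 0.3j, inside the unit circle
print("|H| at DC:", abs(H[0]))      # 0.2 / (1 - 1.2 + 0.45) = 0.8
```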

Examples

A prominent example of a discrete-time LTI system is the finite impulse response (FIR) filter, which computes each output sample as a finite weighted sum of current and past input samples. The difference equation for an FIR filter of length M+1 is given by y[n] = \sum_{k=0}^{M} b_k x[n-k], where b_k are the filter coefficients and the impulse response h[n] is finite in duration, specifically h[n] = b_n for 0 \leq n \leq M and zero otherwise. This structure ensures the system is inherently stable and can implement linear-phase filtering when the coefficients are symmetric. In contrast, the infinite impulse response (IIR) filter incorporates feedback, relying on both past inputs and past outputs to produce the current output, resulting in a potentially infinite-duration impulse response. The general form of the difference equation for an IIR filter is y[n] = \sum_{k=0}^{M} b_k x[n-k] - \sum_{k=1}^{N} a_k y[n-k], where a_k and b_k are coefficients, and the system function in the z-domain is the rational function H(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{k=1}^{N} a_k z^{-k}}. IIR filters achieve sharper frequency responses with fewer coefficients compared to FIR filters but require careful design to ensure stability. A specific instance of an FIR filter is the moving average filter, which smooths a signal by averaging a fixed number of consecutive input samples. For an N-point moving average, the impulse response is h[n] = \frac{1}{N} for 0 \leq n < N and zero elsewhere, yielding the output y[n] = \frac{1}{N} \sum_{k=0}^{N-1} x[n-k]. This filter attenuates high-frequency components, acting as a low-pass filter, and its output can be computed via the convolution sum with the input. The unit delay system represents a basic building block for more complex discrete-time LTI systems, simply shifting the input sequence by one sample. Its difference equation is y[n] = x[n-1], with the z-domain transfer function H(z) = z^{-1}. This system delays the signal by one sample without altering amplitudes, and its impulse response is h[n] = \delta[n-1].
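The moving-average filter is simple to apply with a standard filtering routine. The sketch below (Python with SciPy; the filter length and the noisy test signal are illustrative) implements the N-point moving average as an FIR filter with coefficients b_k = 1/N and no feedback terms.

```python
import numpy as np
from scipy.signal import lfilter

# Hedged sketch: N-point moving-average FIR filter applied to a noisy sinusoid.
N = 5
b = np.ones(N) / N                   # FIR coefficients b_k = 1/N
a = [1.0]                            # no feedback terms, so the filter is FIR

n = np.arange(200)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * n / 50) + 0.3 * rng.standard_normal(n.size)
y = lfilter(b, a, x)                 # y[n] = (1/N) * sum_{k=0}^{N-1} x[n-k]
```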

Causality and stability

In discrete-time linear time-invariant (LTI) systems, causality refers to the property that the output at any time n depends only on the input values at the current time and past times, not on future inputs. For such systems, this condition is equivalent to the impulse response h[n] being zero for all negative time indices, i.e., h[n] = 0 for n < 0. This right-sided nature of the impulse response ensures that the system's response to an input sequence x[n] up to time n is fully determined by the convolution sum y[n] = \sum_{k=0}^{\infty} h[k] x[n-k], without requiring knowledge of future input values. Stability in discrete-time LTI systems is typically analyzed through two key concepts: bounded-input bounded-output (BIBO) stability and asymptotic stability. A system is BIBO stable if every bounded input sequence produces a bounded output sequence, which holds if and only if the impulse response is absolutely summable: \sum_{n=-\infty}^{\infty} |h[n]| < \infty. This condition guarantees that the convolution sum does not amplify bounded inputs indefinitely, as the total contribution from the impulse response remains finite. For rational transfer functions H(z), asymptotic stability—where the system's zero-input response decays to zero as n \to \infty—requires all poles to lie strictly inside the unit circle in the z-plane, meaning the magnitudes of all poles satisfy |z| < 1. Asymptotic stability implies BIBO stability for causal LTI systems, but the converse does not always hold, for example when an unstable pole is canceled by a zero. To assess asymptotic stability without explicitly finding the roots of the characteristic polynomial, the Jury stability test offers an algebraic procedure analogous to the Routh-Hurwitz criterion for continuous-time systems. Developed by E. I. Jury, the test constructs a tabular array from the coefficients of the polynomial D(z) = \sum_{k=0}^{N} a_k z^k and checks a set of inequalities to verify that all roots have magnitude less than one. For a polynomial of degree N, the table is built iteratively by computing determinants of submatrices, and stability is confirmed if the coefficients and table entries satisfy specific sign and magnitude conditions, such as |a_0| < a_N and positivity of certain table entries. This method is particularly useful for higher-order systems where root-finding algorithms are computationally intensive.
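When numerical root finding is acceptable, the pole-magnitude condition can be checked directly instead of building the Jury table. The sketch below (Python with NumPy; the denominator coefficients are illustrative) confirms that all poles of a discrete-time transfer function lie strictly inside the unit circle.

```python
import numpy as np

# Hedged sketch: direct check of asymptotic stability for a discrete-time
# system by computing the pole magnitudes (the Jury table reaches the same
# conclusion without explicit root finding).
a = [1.0, -1.2, 0.45]                       # illustrative denominator of H(z)
poles = np.roots(a)
print(poles)                                # 0.6 +/- 0.3j
print(np.all(np.abs(poles) < 1))            # True -> asymptotically stable
```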

Applications and extensions

In signal processing and control theory

In signal processing, linear time-invariant (LTI) systems form the foundation for designing filters that selectively attenuate or amplify specific frequency components of a signal. Low-pass filters, which allow low-frequency signals to pass while suppressing higher frequencies, are commonly used for noise removal in applications such as audio smoothing or image denoising, as they model the system's response through convolution with an impulse response that preserves desirable signal content. High-pass filters, conversely, eliminate low-frequency noise or offsets, enabling edge detection in signals or DC component removal, and both types leverage the time-invariance property to ensure consistent performance across signal durations. These filters are analyzed in the frequency domain using Fourier transforms, where the system's frequency response directly multiplies the input spectrum to yield the output, facilitating efficient design and implementation. To accelerate the convolution operation central to LTI systems, the fast Fourier transform (FFT) is employed, converting time-domain convolution into frequency-domain multiplication, which reduces computational complexity from O(N²) to O(N log N) for large signals. This technique is pivotal in real-time processing tasks, such as implementing long filters without excessive latency. In control theory, LTI systems are integral to feedback loops, where transfer functions describe the relationship between input commands and output responses, enabling stability analysis via tools like Bode plots. These loops use LTI models to predict system behavior under closed-loop operation, ensuring controlled dynamics in processes like motor speed regulation. Proportional-integral-derivative (PID) controllers approximate LTI systems by combining proportional, integral, and derivative actions into a linear transfer function, providing robust performance for setpoint tracking in industrial automation despite minor nonlinearities in real plants. In audio processing, equalizers function as LTI systems by applying frequency-selective gains to balance spectral content, such as boosting bass or cutting harsh highs in music signals, often realized through cascaded biquad filters that maintain phase coherence. Digital implementations of LTI systems rely on specialized digital signal processing (DSP) chips, which execute convolution and filtering algorithms at high speeds using fixed-point or floating-point arithmetic optimized for multiply-accumulate operations in devices like smartphones or hearing aids. These chips, such as those from Texas Instruments, handle the repetitive arithmetic inherent to LTI transforms with low power consumption, supporting applications from noise cancellation to audio enhancement. Despite their utility, LTI systems face limitations in nonlinear regimes, where real-world phenomena like saturation or hysteresis violate the underlying assumptions, leading to distorted outputs and inaccurate predictions that require nonlinear models or adaptive extensions for robustness. In such cases, LTI approximations hold only near operating points, necessitating compensation strategies to maintain performance under varying conditions.
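The speedup from FFT-based convolution is easy to demonstrate. The sketch below (Python with NumPy/SciPy; the signal and filter lengths are arbitrary) compares direct and FFT-based convolution of a long random signal with an FIR filter; the two outputs agree to numerical precision, while the FFT route scales as O(N log N).

```python
import numpy as np
from scipy.signal import fftconvolve

# Hedged sketch: direct vs. FFT-based convolution of a long signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)     # long input signal
h = rng.standard_normal(512)         # FIR impulse response

y_direct = np.convolve(x, h)         # direct sum: O(N*M) multiply-adds
y_fft = fftconvolve(x, h)            # frequency-domain product: O(N log N)
print(np.allclose(y_direct, y_fft))  # True
```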

State-space representations

State-space representations provide a time-domain framework for modeling linear time-invariant (LTI) systems using first-order differential or difference equations that describe the evolution of internal state variables. For continuous-time LTI systems, the standard form consists of the state equation \dot{x}(t) = A x(t) + B u(t) and the output equation y(t) = C x(t) + D u(t), where x(t) \in \mathbb{R}^n is the state vector, u(t) \in \mathbb{R}^r is the input vector, y(t) \in \mathbb{R}^m is the output vector, and A, B, C, D are constant matrices of appropriate dimensions. In the discrete-time case, the representation is given by the state update x[n+1] = A x[n] + B u[n] and output y[n] = C x[n] + D u[n], where n denotes the discrete time index. This state-space form is equivalent to the transfer function representation obtained via the Laplace transform for continuous-time systems or the z-transform for discrete-time systems. Specifically, the transfer function is H(s) = C (sI - A)^{-1} B + D, which connects the input-output behavior to the state matrices. For discrete-time systems, the analogous form is H(z) = C (zI - A)^{-1} B + D. One key advantage of state-space representations is their natural handling of multi-input multi-output (MIMO) systems, where multiple inputs and outputs are described through matrix coefficients without requiring separate transfer functions for each pair. Additionally, these models facilitate analysis of internal system properties such as controllability and observability. Controllability refers to the ability to drive the state from any initial value to any final value in finite time using the input, while observability means that the initial state can be determined from knowledge of the input and output over a finite interval. The Kalman decomposition theorem partitions the state space into four subspaces—controllable and observable, controllable but unobservable, uncontrollable but observable, and neither—via a similarity transformation, enabling structured analysis of system behavior. In realization theory, state-space models are constructed from transfer functions or impulse responses, with a minimal realization defined as the lowest-order model that is both controllable and observable. All minimal realizations of a given transfer function are related by similarity transformations and share the same order, equal to the degree of the denominator of the transfer function (after canceling common factors) for SISO systems or the rank of the Hankel matrix of Markov parameters for MIMO systems. For non-diagonalizable state matrices A, the Jordan canonical form is used to represent the system, where A is block-diagonal with Jordan blocks corresponding to eigenvalues, allowing computation of solutions via matrix exponentials or powers even when algebraic and geometric multiplicities differ.
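The conversion between state-space and transfer-function forms, and the controllability test, are short computations. The sketch below (Python with NumPy/SciPy; the mass, damping, and stiffness values are illustrative) writes the spring-mass-damper system in state-space form, recovers H(s) = 1/(ms² + cs + k) via C(sI - A)^{-1}B + D, and checks the rank of the controllability matrix [B, AB].

```python
import numpy as np
from scipy.signal import ss2tf

# Hedged sketch: spring-mass-damper m*y'' + c*y' + k*y = u with state
# x = [position, velocity]; parameter values are illustrative.
m, c, k = 1.0, 0.5, 4.0
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)          # H(s) = C (sI - A)^{-1} B + D
print(num, den)                       # approx. [[0, 0, 1]] and [1, 0.5, 4]

ctrb = np.hstack([B, A @ B])          # controllability matrix [B, AB]
print(np.linalg.matrix_rank(ctrb))    # 2 -> the pair (A, B) is controllable
```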
