
Single-input single-output system

A single-input single-output (SISO) system is a dynamic system in control theory and signal processing that features exactly one input signal and one output signal, often modeled as a linear time-invariant (LTI) system governed by differential equations or represented in the frequency domain via a transfer function, a ratio relating the Laplace transform of the output to that of the input under zero initial conditions. These systems serve as the foundational building blocks for understanding more complex multi-input multi-output (MIMO) configurations, enabling analysis of stability, transient response, and steady-state behavior through techniques such as pole-zero analysis and root locus methods. SISO systems are widely applied across engineering disciplines, including process control for regulating variables such as temperature, pressure, or flow in chemical plants and refineries. In mechanical and electrical engineering, they model phenomena like vibration damping in structures or speed regulation in motors, where feedback mechanisms adjust the input to achieve desired outputs. Adaptive control strategies further enhance SISO performance by adjusting controller parameters in real time under uncertain or varying conditions, in applications ranging from automotive systems to computing resource allocation.

Definition and Basics

Definition

A single-input single-output (SISO) system, in the context of control theory and signal processing, is a dynamic system that accepts a single input signal and generates a corresponding single output signal, representing a fundamental building block for modeling dynamic processes. This input-output relationship is commonly denoted in the time domain as u(t) \to y(t), where u(t) is the input and y(t) is the output, or in the Laplace domain as U(s) \to Y(s) for frequency-domain analysis. The foundational concepts underlying SISO systems emerged from early advancements in feedback control during the first half of the twentieth century, evolving from designs for stable amplifiers and servo mechanisms, with key contributions from Harry Nyquist's stability criterion in 1932 and Hendrik Bode's frequency-response methods in the late 1930s and 1940s. These developments formalized the analysis of single-variable systems in control literature, distinguishing them from more complex multi-variable configurations. Standard analysis of SISO systems relies on several key assumptions to enable tractable mathematical treatment: linearity, ensuring superposition of responses; time-invariance, where system behavior remains consistent over shifts in time; causality, meaning outputs depend only on current and past inputs; and determinism, where inputs uniquely determine outputs without randomness. These properties are central to the linear time-invariant (LTI) models prevalent in SISO studies. Visually, a SISO system is often illustrated via a block diagram consisting of a single input arrow entering a central system block, which produces a single output arrow, emphasizing the absence of cross-coupling or multiple pathways. In contrast to multi-input multi-output (MIMO) systems, which extend this framework to handle interactions among multiple signals, SISO configurations provide a simpler foundation for initial analysis and design.
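The linearity and time-invariance assumptions above can be checked numerically. The sketch below uses a made-up discrete impulse response (a decaying exponential, purely illustrative) and verifies superposition and shift-invariance of the resulting SISO map:

```python
import numpy as np

# Hypothetical LTI SISO system given by a truncated impulse response
# h[n] = 0.5**n for n = 0..19 (an illustrative choice, not from the text).
h = 0.5 ** np.arange(20)

def lti_response(u):
    """Output of the LTI system for input u via discrete convolution."""
    return np.convolve(h, u)[: len(u)]

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(50), rng.standard_normal(50)
a, b = 2.0, -3.0

# Linearity: the response to a*u1 + b*u2 equals a*y1 + b*y2 (superposition).
lhs = lti_response(a * u1 + b * u2)
rhs = a * lti_response(u1) + b * lti_response(u2)
assert np.allclose(lhs, rhs)

# Time-invariance: delaying the input by k samples delays the output by k.
k = 5
shifted = np.concatenate([np.zeros(k), u1])[:50]
y_shifted = lti_response(shifted)
y_direct = np.concatenate([np.zeros(k), lti_response(u1)])[:50]
assert np.allclose(y_shifted, y_direct)
```

Any system implemented as a convolution with a fixed kernel passes both checks by construction; a nonlinear or time-varying system would fail them.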

Comparison with Multi-Input Multi-Output Systems

Single-input single-output (SISO) systems are characterized by a single input and a single output, resulting in no cross-interactions between multiple variables, in contrast to multi-input multi-output (MIMO) systems, which involve multiple inputs and outputs with inherent coupling between channels. This fundamental distinction leads to simpler analysis in SISO systems, where scalar transfer functions suffice, whereas MIMO systems require matrix-based representations to account for interactions, increasing design complexity. SISO systems offer advantages in design and implementation, including easier controller tuning and lower computational requirements due to the absence of multivariable coupling, making them suitable for many physical processes such as temperature regulation in HVAC systems. Their scalar nature also facilitates straightforward stability analysis and troubleshooting, often achieving performance comparable to more complex approaches in applications with minimal interactions. However, SISO systems are limited in handling coupled dynamics, where a single input-output pair cannot adequately address interdependencies, such as in flight control where roll and pitch motions are tightly coupled, necessitating MIMO strategies to mitigate cross-coupling effects. In such cases, approximating MIMO systems with decoupled SISO loops may introduce performance degradation unless the system exhibits weak coupling. In practice, SISO approximations simplify MIMO design through techniques like decentralized control, particularly in diagonally dominant systems where off-diagonal elements are small, allowing independent SISO loops via input-output pairing based on the relative gain array (RGA) to minimize interactions. For instance, in vapor compression cycles, diagonally dominant models enable decentralized SISO control that rivals full MIMO performance in transient behavior.
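The input-output pairing idea can be sketched with the standard RGA formula, Lambda = G elementwise-times (G^{-1})^T, applied to a steady-state gain matrix; the 2x2 matrix here is a hypothetical, weakly coupled example, not data from the text:

```python
import numpy as np

def relative_gain_array(G):
    """RGA: elementwise product of G with the transpose of its inverse."""
    return G * np.linalg.inv(G).T

# Hypothetical 2x2 steady-state gain matrix for a weakly coupled process.
G = np.array([[2.0, 0.5],
              [0.4, 1.5]])
rga = relative_gain_array(G)
print(rga)
# Rows and columns of an RGA always sum to 1; diagonal elements near 1
# suggest pairing input i with output i for decentralized SISO loops.
```

For this matrix the diagonal RGA entries are close to 1 (about 1.07), consistent with treating the process as two nearly independent SISO loops.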

Mathematical Representations

Transfer Function Representation

The transfer function provides a frequency-domain description of a linear time-invariant (LTI) single-input single-output (SISO) system, defined as the ratio of the Laplace transform of the output Y(s) to the Laplace transform of the input U(s), assuming zero initial conditions. This model, denoted G(s) = \frac{Y(s)}{U(s)}, captures the system's input-output dynamics without explicit reference to internal states. To derive the transfer function, apply the Laplace transform to the system's governing differential equation, which transforms time-domain derivatives into algebraic operations involving the complex variable s. For a general n-th order SISO LTI system described by \frac{d^n y}{dt^n} + a_{n-1} \frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_1 \frac{dy}{dt} + a_0 y = b_m \frac{d^m u}{dt^m} + b_{m-1} \frac{d^{m-1} u}{dt^{m-1}} + \cdots + b_0 u, taking the Laplace transform yields G(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_0}, where the degrees satisfy m \leq n for physical realizability. A canonical example is the second-order system, often arising in mechanical or electrical oscillators, with transfer function G(s) = \frac{\omega_n^2}{s^2 + 2 \zeta \omega_n s + \omega_n^2}, where \omega_n is the natural frequency and \zeta is the damping ratio. The transfer function is a rational function whose numerator and denominator polynomials define the system's zeros and poles, respectively, key features that determine dynamic behavior in the s-plane. Zeros are the roots of the numerator where G(s) = 0, while poles are the roots of the denominator where G(s) \to \infty. Pole locations govern stability and transient response: poles in the left-half s-plane yield decaying exponentials or damped oscillations, those on the imaginary axis produce sustained oscillations, and right-half plane poles lead to instability with growing responses. Zeros modify the shape of the response without directly affecting stability.
Transfer functions are classified as proper if the degree of the numerator is less than or equal to the degree of the denominator (relative degree \geq 0), ensuring a realizable high-frequency response; strictly proper if the numerator degree is strictly less (relative degree > 0), common in physical systems; or improper otherwise, which may imply non-causal or unrealizable behavior. To find time-domain responses, partial fraction expansion decomposes G(s) into simpler terms for inverse Laplace transformation. For distinct poles, expand as G(s) = \sum \frac{A_i}{s - p_i}, plus polynomial terms if the function is improper, where the residues A_i are computed via the cover-up method or Heaviside expansion. For example, consider G(s) = \frac{1}{(s+1)(s+2)}; the expansion is \frac{1}{s+1} - \frac{1}{s+2}, with inverse transform e^{-t} - e^{-2t} for t \geq 0. This method extends to repeated or complex poles using generalized forms.
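The partial-fraction example above can be reproduced numerically; this sketch assumes SciPy is available and uses its `residue` helper to recover the residues and poles, then reconstructs the impulse response:

```python
import numpy as np
from scipy.signal import residue

# G(s) = 1 / ((s+1)(s+2)) = 1 / (s^2 + 3s + 2)
b, a = [1.0], [1.0, 3.0, 2.0]
r, p, k = residue(b, a)     # residues r_i, poles p_i, direct polynomial terms k

# Reconstruct the impulse response h(t) = sum_i r_i * exp(p_i * t)
t = np.linspace(0, 5, 500)
h = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)).real

# Compare against the closed-form inverse transform e^{-t} - e^{-2t}
assert np.allclose(h, np.exp(-t) - np.exp(-2 * t), atol=1e-6)
```

Summing the residue terms is robust to whichever order the solver returns the poles in, which is why the check is written against the closed-form response rather than against a fixed residue ordering.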

State-Space Representation

The state-space representation provides a vector-based framework for modeling the internal dynamics of a single-input single-output (SISO) linear time-invariant system, capturing the evolution of state variables in response to inputs and their relation to outputs. This approach, popularized in modern control theory by Rudolf E. Kalman, describes the system's behavior through a set of first-order differential equations that explicitly account for the system's memory via the state vector. For an nth-order SISO system, the state-space model is formulated as \dot{x}(t) = A x(t) + B u(t), \quad y(t) = C x(t) + D u(t), where x(t) \in \mathbb{R}^n is the state vector, u(t) \in \mathbb{R} is the scalar input, y(t) \in \mathbb{R} is the scalar output, A \in \mathbb{R}^{n \times n} is the system matrix governing state transitions, B \in \mathbb{R}^{n \times 1} is the input matrix, C \in \mathbb{R}^{1 \times n} is the output matrix, and D \in \mathbb{R} is the direct feedthrough term, which is zero for strictly proper systems where the output depends solely on the states. This representation is particularly advantageous for computer simulation, multivariable extensions, and modern control design techniques like state feedback, as it handles the full multidimensional state dynamics. Key properties of the state-space model include controllability and observability, which determine whether the system's states can be fully manipulated and inferred from inputs and outputs, respectively. A system is controllable if there exists an input that drives the state from any initial state to any desired state in finite time; this is verified by the rank condition on the controllability matrix \mathcal{C} = [B \ AB \ \dots \ A^{n-1}B], which must have full rank n. Similarly, the system is observable if the initial state can be uniquely determined from the input and output over a finite time interval, checked via the observability matrix \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, which must likewise have full rank n.
These concepts, foundational to system decomposition and design, ensure that unobservable or uncontrollable modes do not affect practical implementation. The state-space model can be converted to the transfer function representation, yielding the input-output behavior as G(s) = C (sI - A)^{-1} B + D, where a minimal realization corresponds to a controllable and observable form with no pole-zero cancellations, achieving the lowest possible state dimension for the given transfer function. This equivalence highlights the state-space model's completeness, as it encompasses both external (input-output) and internal (state) descriptions. To facilitate analysis and controller design, state-space models are often transformed into canonical forms that simplify the matrices while preserving the input-output behavior. The controllable canonical form, or phase-variable form, places the matrix A in a companion structure; for a second-order example, A = \begin{bmatrix} 0 & 1 \\ -a_0 & -a_1 \end{bmatrix}, with B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, C = \begin{bmatrix} b_0 & b_1 \end{bmatrix}, and D = 0, where the coefficients a_i and b_i relate directly to the characteristic and numerator polynomials of the transfer function; this form is particularly useful for pole placement and observer design due to its sparse, structured appearance.
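The conversion G(s) = C(sI - A)^{-1}B + D and the controllability rank test can be illustrated for the second-order controllable canonical form above; the coefficient values are arbitrary choices, and SciPy's `ss2tf` is assumed available for the conversion:

```python
import numpy as np
from scipy.signal import ss2tf

# Controllable canonical form for G(s) = (b1*s + b0) / (s^2 + a1*s + a0),
# here with a0 = 2, a1 = 3, b0 = 1, b1 = 0, i.e. G(s) = 1 / (s^2 + 3s + 2).
a0, a1, b0, b1 = 2.0, 3.0, 1.0, 0.0
A = np.array([[0.0, 1.0], [-a0, -a1]])
B = np.array([[0.0], [1.0]])
C = np.array([[b0, b1]])
D = np.array([[0.0]])

# Controllability matrix [B  AB] must have full rank n = 2.
ctrb = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(ctrb) == 2

# Converting back via G(s) = C (sI - A)^{-1} B + D recovers the polynomials.
num, den = ss2tf(A, B, C, D)
assert np.allclose(den, [1.0, a1, a0])
assert np.allclose(num[0], [0.0, b1, b0])
```

The companion structure guarantees controllability for any choice of a_i, which is precisely why this form is convenient for pole placement.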

Analysis Methods

Time-Domain Analysis

Time-domain analysis of single-input single-output (SISO) systems examines the system's output response to inputs as a function of time, providing insights into dynamic behavior for linear time-invariant (LTI) systems. This approach is essential for understanding how systems evolve from initial conditions or external excitations, revealing characteristics such as responsiveness and convergence to equilibrium. For LTI SISO systems described by a transfer function G(s), responses are typically computed by taking the inverse Laplace transform of the input's transform multiplied by G(s). Common test inputs include the unit step (Heaviside function u(t)), unit impulse (\delta(t)), and unit ramp (t u(t)). The step response, which models sudden changes like setpoint adjustments in control systems, is given by y(t) = \mathcal{L}^{-1} \left\{ \frac{G(s)}{s} \right\}, where \mathcal{L}^{-1} denotes the inverse Laplace transform. This response highlights the system's settling to a steady value after an abrupt input. The impulse response, h(t) = \mathcal{L}^{-1} \{ G(s) \}, represents the system's output to a brief impulsive input and serves as the basis for computing responses to arbitrary inputs via convolution: y(t) = \int_{-\infty}^{\infty} h(\tau) u(t - \tau) \, d\tau. The ramp response, y(t) = \mathcal{L}^{-1} \left\{ \frac{G(s)}{s^2} \right\}, simulates linearly increasing inputs, such as constant velocity references, and is useful for assessing tracking performance. The transient response describes the temporary behavior before reaching steady state, particularly pronounced in underdamped second-order systems with transfer function G(s) = \frac{\omega_n^2}{s^2 + 2 \zeta \omega_n s + \omega_n^2}, where \zeta is the damping ratio and \omega_n is the natural frequency.
Key metrics include rise time T_r, the time to go from 10% to 90% of the final value; settling time T_s, the time to stay within a tolerance band (e.g., 2%) of the steady value, approximated as T_s \approx \frac{4}{\zeta \omega_n}; and percent overshoot \%OS, the maximum peak exceeding the steady value, given by \%OS = 100 \, e^{-\zeta \pi / \sqrt{1 - \zeta^2}} for 0 < \zeta < 1. These parameters quantify speed and oscillatory tendencies; for instance, lower \zeta increases overshoot but reduces rise time. Steady-state error e_{ss} measures the discrepancy between desired and actual output as t \to \infty in unity-feedback configurations. For a step input, e_{ss} = \frac{1}{1 + K_p}, where the position constant K_p = \lim_{s \to 0} G(s). For ramp inputs, e_{ss} = \frac{1}{K_v} with velocity constant K_v = \lim_{s \to 0} s G(s); for parabolic (acceleration) inputs, e_{ss} = \frac{1}{K_a}, where K_a = \lim_{s \to 0} s^2 G(s). Type 0 systems (no free integrators) have finite K_p but infinite e_{ss} for ramps; higher types reduce errors for the corresponding inputs. For LTI SISO systems, analytical solutions via inverse Laplace transforms suffice, but numerical simulation extends the analysis to nonlinear cases or validation. Methods like forward-Euler integration approximate solutions to the differential equations: for \dot{x} = f(x, u), y = g(x), update x_{k+1} = x_k + \Delta t \, f(x_k, u_k). Higher-order schemes (e.g., Runge-Kutta) improve accuracy for complex responses, though for LTI systems exact transform methods remain the primary tool.
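As a sketch of the forward-Euler scheme just described, the following simulates the standard second-order step response and compares the measured peak against the closed-form overshoot formula; the parameter values are illustrative:

```python
import numpy as np

# Forward-Euler simulation of y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u for a
# unit step u = 1, then comparison with the closed-form overshoot formula.
zeta, wn = 0.5, 2.0          # illustrative damping ratio and natural frequency
dt, T = 1e-4, 10.0
n = int(T / dt)

x1, x2 = 0.0, 0.0            # x1 = y, x2 = y'
y = np.empty(n)
for k in range(n):
    dx1 = x2
    dx2 = -wn**2 * x1 - 2 * zeta * wn * x2 + wn**2 * 1.0
    x1 += dt * dx1
    x2 += dt * dx2
    y[k] = x1

overshoot = y.max() - 1.0    # the final value is 1 for a unit step
predicted = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
print(overshoot, predicted)  # both close to 0.163 for zeta = 0.5
```

With a small step size the simulated overshoot matches the analytical value to within a fraction of a percent; a larger dt would visibly degrade the agreement, which is why higher-order schemes are preferred for stiff or fast dynamics.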

Frequency-Domain Analysis

Frequency-domain analysis of single-input single-output (SISO) systems examines the steady-state response to sinusoidal inputs of varying frequencies, providing insights into gain and phase characteristics that influence system behavior under periodic forcing. The frequency response function G(j\omega) is obtained by evaluating the transfer function along the imaginary axis, where s = j\omega, resulting in G(j\omega) = |G(j\omega)| \angle \phi(\omega). Here, |G(j\omega)| represents the magnitude of the system's gain at frequency \omega, typically plotted in decibels as 20 \log_{10} |G(j\omega)| for logarithmic scaling, while \phi(\omega) denotes the phase shift in degrees. This representation allows engineers to assess how the system amplifies or attenuates input signals and introduces delays across the frequency spectrum. A primary tool for visualizing the frequency response is the Bode plot, which separates the analysis into magnitude and phase components plotted against a logarithmic frequency axis. The magnitude plot shows decibels versus \log \omega, revealing asymptotic behaviors such as straight-line approximations for low and high frequencies, while the phase plot shows degrees versus \log \omega on a semi-log scale. Corner frequencies, corresponding to the locations of system poles and zeros, mark transitions in the response; for instance, a single real pole introduces a slope of -20 dB per decade in the magnitude plot beyond its corner frequency, indicating attenuation at higher frequencies. These plots facilitate quick sketching and approximation of complex transfer functions by combining contributions from individual factors. The Nyquist plot offers a polar representation of G(j\omega) in the complex plane, tracing the locus as \omega varies from 0 to \infty.
This curve starts at the low-frequency gain on the real axis and spirals or curves toward the origin at high frequencies, depending on the system's order and pole-zero configuration. Encirclement of the critical point -1 on this plot signals potential instability in feedback configurations, providing a graphical means to evaluate proximity to instability boundaries without solving characteristic equations. Complementing these, the Nichols plot overlays magnitude in dB against phase angle on rectangular axes, transforming the polar Nyquist data into a format that highlights stability margins such as gain and phase margins directly from intersections with constant-magnitude and constant-phase contours. This chart is particularly valuable in loop-shaping design for SISO feedback systems, enabling iterative adjustments to achieve desired performance metrics like robustness to parameter variations.
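The Bode-plot behavior described above can be verified numerically for a single real pole; the corner frequency chosen here is an arbitrary example:

```python
import numpy as np

# Frequency response of a single real pole G(s) = 1/(s/wc + 1): check the
# -3 dB corner, the -45 degree phase there, and the -20 dB/decade asymptote.
wc = 10.0                                   # corner frequency (arbitrary choice)

def mag_db(w):
    return 20 * np.log10(np.abs(1.0 / (1j * w / wc + 1.0)))

mag_at_corner = mag_db(wc)                                 # about -3.01 dB
phase_at_corner = np.degrees(np.angle(1.0 / (1.0 + 1j)))   # exactly -45 degrees
slope_per_decade = mag_db(1000 * wc) - mag_db(100 * wc)    # about -20 dB/decade
print(mag_at_corner, phase_at_corner, slope_per_decade)
```

The straight-line Bode approximation (0 dB below the corner, -20 dB/decade above) is exact in the limits and at most about 3 dB off, which occurs precisely at the corner frequency.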

Stability and Performance

Stability Criteria

In single-input single-output (SISO) linear time-invariant (LTI) systems, stability is a fundamental property ensuring that system responses remain controlled under perturbations. Bounded-input bounded-output (BIBO) stability requires that any bounded input produces a bounded output, which for continuous-time systems holds if and only if all poles of the transfer function G(s) lie in the open left-half of the s-plane (i.e., \operatorname{Re}(p) < 0 for every pole p). Asymptotic stability, where the system's response converges to zero from any initial condition, is equivalent to BIBO stability for minimal realizations of continuous-time LTI SISO systems and also demands all poles in the open left-half plane. For discrete-time LTI SISO systems, BIBO stability (and asymptotic stability for minimal realizations) requires all poles to lie strictly inside the unit circle in the z-plane (i.e., |z| < 1 for every pole z). The Routh-Hurwitz criterion provides an algebraic method to check whether all roots of the characteristic equation have negative real parts without explicitly solving for them, applicable to continuous-time systems. Consider the characteristic equation s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 = 0 with all coefficients positive (a necessary but not sufficient condition for stability). The Routh array is constructed row by row, starting with the first two rows taken from the coefficients: \begin{array}{c|cccc} s^n & 1 & a_{n-2} & a_{n-4} & \cdots \\ s^{n-1} & a_{n-1} & a_{n-3} & a_{n-5} & \cdots \\ s^{n-2} & b_1 & b_2 & b_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array} where the elements of the third row are computed as b_k = -\frac{1}{a_{n-1}} \det \begin{vmatrix} 1 & a_{n-2k} \\ a_{n-1} & a_{n-2k-1} \end{vmatrix}, with analogous formulas for lower rows, always built from the two rows directly above. The system is asymptotically stable if and only if all elements in the first column of the completed Routh array have the same sign (typically positive), indicating no sign changes and thus no right-half plane roots.
For example, for the equation s^3 + 2s^2 + 3s + 1 = 0, the Routh array is:

s^3: 1, 3
s^2: 2, 1
s^1: \frac{5}{2}, 0
s^0: 1
With no sign changes in the first column, the system is stable. The root locus technique visualizes the migration of closed-loop poles as a gain parameter K varies from 0 to \infty, aiding stability analysis in feedback systems with open-loop transfer function G(s)H(s) = K \frac{\prod (s - z_i)}{\prod (s - p_j)}. The loci originate at the open-loop poles (K = 0) and terminate at the open-loop zeros or at infinity along asymptotes; there are n - m branches approaching infinity, where n is the number of finite poles and m the number of finite zeros, with asymptote angles \theta_k = \frac{(2k+1)\pi}{n-m} for integer k \geq 0, centered on the centroid \sigma = \frac{\sum p_j - \sum z_i}{n-m}. Stability is determined by whether any locus segment enters the right-half s-plane for the operating range of K; the angle and magnitude conditions of the root locus rules confirm pole locations. For discrete-time SISO systems, the Jury stability test extends the Routh-Hurwitz approach to verify whether all roots of the characteristic polynomial P(z) = a_n z^n + a_{n-1} z^{n-1} + \cdots + a_0 = 0 (with a_n > 0) lie inside the unit circle. The Jury table is formed iteratively: the first two rows are the coefficients [a_n, a_{n-1}, \dots, a_0] and the reversed sequence [a_0, a_1, \dots, a_n]; subsequent rows compute elements via b_k = -\frac{1}{a_0} \det \begin{vmatrix} a_n & a_{n-1-k} \\ a_0 & a_{1+k} \end{vmatrix}, continuing similarly for lower rows using the previous two rows as the new "first" and "second." The necessary and sufficient conditions for stability include P(1) > 0, (-1)^n P(-1) > 0, and |a_0| < a_n, plus all elements in the first column of the table having the same sign (no sign changes). For example, consider the equation z^2 - 0.5z + 0.2 = 0:
Row | Col 1 | Col 2 | Col 3
1   | 1     | -0.5  | 0.2
2   | 0.2   | -0.5  | 1
3   | 2     | -4.8  |
The first column elements are 1, 0.2, and 2 (all positive, no sign changes), and the necessary conditions hold (|0.2| < 1, P(1) = 0.7 > 0, P(-1) = 1.7 > 0), so the system is stable. This table-based check avoids root-finding and directly assesses discrete-time stability.
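Both worked examples can be cross-checked by computing the roots directly, which the tabular tests are designed to avoid but which makes a convenient numerical sanity check:

```python
import numpy as np

# Direct numerical check of the two worked examples: the continuous-time
# characteristic polynomial s^3 + 2s^2 + 3s + 1 (Routh) and the
# discrete-time polynomial z^2 - 0.5z + 0.2 (Jury).
s_roots = np.roots([1.0, 2.0, 3.0, 1.0])
assert np.all(s_roots.real < 0)          # all poles in the open left-half plane

z_roots = np.roots([1.0, -0.5, 0.2])
assert np.all(np.abs(z_roots) < 1.0)     # all poles strictly inside unit circle
print(np.abs(z_roots))                   # both magnitudes = sqrt(0.2), about 0.447
```

For the discrete example the pole magnitudes follow immediately from the polynomial: the product of the roots equals the constant term 0.2, so the complex-conjugate pair has magnitude sqrt(0.2).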

Performance Evaluation

Performance evaluation of single-input single-output (SISO) systems focuses on quantifying the quality of response after confirming stability, emphasizing metrics that capture speed, accuracy, and robustness to disturbances or uncertainties. Key specifications include the bandwidth \omega_b, defined as the frequency where the magnitude of the frequency response drops 3 dB below its low-frequency value, which indicates the range of frequencies the system can effectively track. Peak overshoot M_p, often expressed as a percentage, measures the maximum deviation of the step response beyond the steady-state value, reflecting oscillatory behavior; for second-order systems, it is approximated by M_p = e^{-\zeta \pi / \sqrt{1 - \zeta^2}}, where \zeta is the damping ratio. Settling time T_s, the duration for the response to stay within a specified tolerance band (typically 2%) of the final value, is approximated as T_s \approx 4 / (\zeta \omega_n) for underdamped second-order systems, with \omega_n the natural frequency. These metrics ensure the system responds quickly and accurately to inputs, such as step changes, without excessive oscillation or prolonged transients. In the frequency domain, performance is assessed through gain and phase margins, which provide insights into relative stability and robustness. The gain margin is defined as GM = -20 \log_{10} |G(j\omega_{pc})| (in dB), where \omega_{pc} is the phase crossover frequency at which the phase \phi(\omega_{pc}) = -180^\circ, indicating how much gain increase would lead to instability; larger values (e.g., >6 dB) imply greater tolerance to gain variations. The phase margin is 180^\circ + \phi(\omega_{gc}), where \omega_{gc} is the gain crossover frequency with |G(j\omega_{gc})| = 1 (0 dB), measuring the additional phase lag tolerable before instability; typical desirable values exceed 45° for adequate damping. These margins, derived from Bode plots, correlate with time-domain behavior, with phase margin relating to overshoot and crossover frequency to settling speed. Robustness evaluates sensitivity to parameter variations or model uncertainties, crucial for real-world deployment where models are imperfect.
For structured uncertainties (e.g., parameters varying within known blocks), structured singular value (\mu) analysis quantifies the smallest scaled perturbation that destabilizes the closed loop, providing a bound on robust stability via \mu < 1 / \gamma for an uncertainty bound \gamma; this extends classical margins to handle correlated uncertainties without full MIMO complexity. Trade-offs are inherent: increasing the bandwidth \omega_b reduces rise time (approximately t_r \approx 1.8 / \omega_b for second-order systems) for faster response but heightens noise sensitivity, as high-frequency noise is amplified through the loop, potentially degrading accuracy. Thus, design balances these objectives via controller tuning to meet application-specific requirements.
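Gain and phase margins as defined above can be read directly off a computed frequency response; the open-loop transfer function below is an assumed example chosen for illustration, not one from the text:

```python
import numpy as np

# Gain and phase margins of the illustrative open-loop transfer function
# L(s) = 1 / (s (s+1) (s+2)), computed from a dense frequency grid.
w = np.logspace(-2, 2, 200001)
L = 1.0 / (1j * w * (1j * w + 1.0) * (1j * w + 2.0))
mag_db = 20 * np.log10(np.abs(L))
phase_deg = np.degrees(np.unwrap(np.angle(L)))   # continuous phase in degrees

# Phase crossover (phase = -180 deg): gain margin is -|L| in dB there.
i_pc = np.argmin(np.abs(phase_deg + 180.0))
gain_margin_db = -mag_db[i_pc]

# Gain crossover (|L| = 0 dB): phase margin is 180 deg plus the phase there.
i_gc = np.argmin(np.abs(mag_db))
phase_margin_deg = 180.0 + phase_deg[i_gc]

print(gain_margin_db, phase_margin_deg)   # approx 15.6 dB and 53.4 deg
```

For this example the phase crossover falls at omega = sqrt(2) rad/s, where |L| = 1/6, so the gain margin is 20 log10(6), about 15.6 dB, matching the grid-based estimate.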

Applications and Examples

Control System Applications

Single-input single-output (SISO) systems are fundamental in feedback control applications, where a single control input regulates a single output to achieve desired performance in dynamic processes. In these setups, the controller processes the error between the reference and measured output to generate the input signal, enabling precise regulation despite disturbances or model uncertainties. This closed-loop structure is widely used in industrial automation, automotive systems, and robotics to maintain stability and meet specifications like settling time and overshoot. A prominent example of SISO feedback control is the proportional-integral-derivative (PID) controller, which combines three terms to minimize tracking error. The proportional term is given by K_p e(t), where K_p is the proportional gain and e(t) is the error signal; the integral term is K_i \int_0^t e(\tau) \, d\tau, with K_i as the integral gain to eliminate steady-state error; and the derivative term is K_d \frac{de(t)}{dt}, where K_d anticipates future errors to damp oscillations. The overall control law is u(t) = K_p e(t) + K_i \int_0^t e(\tau) \, d\tau + K_d \frac{de(t)}{dt}. This structure improves transient response and robustness in SISO plants modeled via transfer functions. Tuning PID parameters is critical for optimal performance, with the Ziegler-Nichols method providing a systematic closed-loop approach based on sustained oscillations. In this oscillation method, the proportional gain is increased until the system oscillates continuously at ultimate gain K_u and period P_u; then, PID gains are set as K_p = 0.6 K_u, K_i = 2 K_p / P_u, and K_d = K_p P_u / 8. This heuristic yields quarter-amplitude damping for many processes, though modifications may be needed for overdamped systems. The method, derived from empirical tests on pneumatic controllers, remains a benchmark for initial tuning in SISO applications. 
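A minimal discrete-time sketch of the PID law above, regulating a hypothetical first-order plant y' = u - y; the plant model, gains, and step size are illustrative assumptions rather than Ziegler-Nichols results:

```python
# Discrete-time PID loop: u = Kp*e + Ki*integral(e) + Kd*de/dt, with the
# integral and derivative terms approximated by a running sum and a
# finite difference, applied to the assumed plant y' = u - y.
Kp, Ki, Kd = 2.0, 1.0, 0.1   # illustrative gains (not from any tuning rule)
dt, T = 1e-3, 20.0
n = int(T / dt)
r = 1.0                      # step reference

y, integral, e_prev = 0.0, 0.0, r
for _ in range(n):
    e = r - y
    integral += e * dt
    derivative = (e - e_prev) / dt
    u = Kp * e + Ki * integral + Kd * derivative
    e_prev = e
    y += dt * (u - y)        # forward-Euler step of the plant

print(abs(r - y))            # integral action drives the error toward zero
```

The proportional term alone would leave a steady-state offset for this plant; running the loop long enough shows the integral term removing it, which is the behavior the control law's K_i term exists to provide.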
Lead-lag compensators enhance SISO system stability and performance by adjusting phase and gain in the frequency domain. A phase-lead compensator has the transfer function G_c(s) = \alpha \frac{\tau s + 1}{\alpha \tau s + 1}, where \alpha < 1 positions the zero closer to the origin than the pole, providing a positive phase shift of up to \sin^{-1} \frac{1 - \alpha}{1 + \alpha} at the geometric mean frequency, which improves phase margin and transient response. Lag compensators, conversely, use \alpha > 1 for a low-frequency gain boost that reduces steady-state error without affecting high-frequency stability. These are cascaded in series with the plant to meet root locus or Bode design criteria. In automotive applications, SISO feedback control regulates vehicle speed via cruise control, where the plant is approximated by the transfer function G(s) = \frac{1}{m s} for a simplified inertial model, with drive force as input and vehicle speed as output. A controller in the loop adjusts the drive force to track a setpoint speed, compensating for disturbances; for instance, integral action corrects steady-state offsets from constant road slopes, while derivative action reduces overshoot. Simulations and implementations show this achieves speed regulation within 5% error under varying loads. DC motor position control exemplifies SISO servo mechanisms, using armature voltage as input to position the shaft output through a feedback loop. The plant relates voltage to shaft angle via mechanical and electrical dynamics, often simplified to second-order form; a feedback controller stabilizes the marginally stable double-integrator behavior, enabling precise tracking for robotic arms. Experimental setups demonstrate settling times under 0.5 seconds with overshoot below 10% using tuned PID parameters. Adaptive control strategies extend SISO feedback by adjusting controller parameters in real time to handle uncertainties or time-varying dynamics.
For example, model-reference adaptive control (MRAC) tunes gains so that the plant output tracks a reference model's response, useful in applications like aircraft wing flutter suppression or robotic manipulators under payload changes. As of 2025, MRAC implementations in automotive active suspension systems demonstrate improved ride quality over varying road conditions. For digital implementation of SISO controllers, continuous-time designs are discretized using the bilinear transform, also known as Tustin's method, which maps the s-plane to the z-plane via s = \frac{2}{T} \frac{z - 1}{z + 1}, or equivalently z = \frac{1 + T s / 2}{1 - T s / 2}, where T is the sampling period. This trapezoidal integration preserves stability for systems with frequencies below the Nyquist limit and keeps frequency warping small for bandwidths up to roughly 0.2 / T Hz. It is preferred over Euler methods for its better approximation in PID and compensator digitization.
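The Tustin substitution can be checked by discretizing a simple first-order lag by hand and comparing frequency responses below the Nyquist limit; the plant and sampling period are illustrative choices:

```python
import numpy as np

# Tustin (bilinear) discretization of G(s) = 1/(s + 1) with sampling period T:
# substituting s = (2/T)(z - 1)/(z + 1) gives
#   H(z) = T (z + 1) / ((T + 2) z + (T - 2)).
T = 0.01     # sampling period (illustrative)

def G(s):
    return 1.0 / (s + 1.0)

def H(z):
    return T * (z + 1.0) / ((T + 2.0) * z + (T - 2.0))

# The discrete response H(e^{jwT}) should closely match G(jw) for
# frequencies well below the Nyquist limit pi/T.
for w in [0.1, 1.0, 10.0]:
    z = np.exp(1j * w * T)
    assert abs(H(z) - G(1j * w)) < 1e-3
```

The small residual error comes from frequency warping: the discrete filter matches the analog prototype exactly at the warped frequency (2/T) tan(wT/2), which is nearly w when wT is small.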

Signal Processing Applications

In signal processing, single-input single-output (SISO) systems form the basis for linear time-invariant (LTI) filters that selectively modify the frequency spectrum of input signals to enhance or suppress specific components. These filters operate by convolving the input signal with an impulse response, effectively implementing the system's transfer function in either continuous or discrete time. Common analog examples include low-pass, high-pass, and band-pass filters, each designed to pass or attenuate frequency bands according to application needs. A fundamental example is the first-order RC low-pass filter, where a resistor in series with a capacitor, with the output taken across the capacitor, forms a SISO system with transfer function G(s) = \frac{1}{1 + RC s}, attenuating frequencies above the cutoff f_c = \frac{1}{2\pi RC}. High-pass filters, such as an RC configuration with the capacitor in series and the output taken across the resistor, invert this behavior to block low frequencies, while band-pass filters combine elements of both to isolate a narrow range. For applications requiring a maximally flat response, the Butterworth filter, introduced by Stephen Butterworth in 1930, provides an all-pole design that achieves uniform gain up to the cutoff without passband ripples, making it ideal for audio and instrumentation signals. Digital SISO filters extend these concepts to discrete-time signals, categorized as finite impulse response (FIR) or infinite impulse response (IIR) types. FIR filters compute the output as a weighted sum of recent inputs, y[n] = \sum_{k=0}^{M-1} b_k x[n-k], ensuring inherent stability due to the absence of feedback and a finite-duration impulse response. In contrast, IIR filters incorporate past outputs for efficiency, y[n] = \sum_{k=0}^{M-1} b_k x[n-k] - \sum_{k=1}^{N} a_k y[n-k], with transfer function H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M-1} b_k z^{-k}}{1 + \sum_{k=1}^{N} a_k z^{-k}}, relying on poles for sharp frequency selectivity but requiring stability checks. FIR filters are designed via the windowing method, which truncates the ideal impulse response (derived from the inverse Fourier transform of a desired frequency response) and multiplies it by a tapering window (e.g., Hamming or Blackman) to reduce sidelobe artifacts.
IIR filters often use the bilinear transform to map analog prototypes, such as Butterworth low-pass designs, into the z-domain via s = \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}, preserving stability and warping frequencies to match digital specifications. In audio equalization, SISO digital filters form cascaded chains to adjust frequency balance and reduce noise; for instance, IIR equalizers boost or cut specific bands while maintaining low distortion for natural sound reproduction. Similarly, in image processing, 1D convolutions treat row or column pixel sequences as SISO signals, applying FIR-like kernels for tasks such as edge detection along scanlines, effectively filtering one-dimensional projections of images. These applications leverage frequency-domain specifications, such as passband ripple and stopband attenuation, to tailor filter performance without closed-loop dynamics.
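The windowing method just described can be sketched end to end: build a windowed-sinc low-pass kernel, taper it with a Hamming window, and confirm it separates two tones. The sample rate, cutoff, filter length, and test tones are all arbitrary choices for illustration:

```python
import numpy as np

# Windowed-sinc FIR low-pass design: truncate the ideal impulse response
# and taper it with a Hamming window, then verify the filter passes a
# 50 Hz tone while attenuating a 300 Hz tone (fs = 1000 Hz, cutoff 100 Hz).
fs, fc, M = 1000.0, 100.0, 101          # sample rate, cutoff, filter length
n = np.arange(M) - (M - 1) / 2
h = 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.hamming(M)
h /= h.sum()                            # normalize DC gain to exactly 1

t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 300 * t)
y = np.convolve(x, h, mode="same")

# Compare spectra over the steady-state middle section (edges trimmed to
# exclude the convolution start-up transient).
xs, ys = x[200:1800], y[200:1800]
X, Y = np.abs(np.fft.rfft(xs)), np.abs(np.fft.rfft(ys))
f = np.fft.rfftfreq(len(xs), 1 / fs)
i50, i300 = np.argmin(np.abs(f - 50)), np.argmin(np.abs(f - 300))
print(Y[i50] / X[i50], Y[i300] / X[i300])  # near 1 in passband, near 0 in stopband
```

The Hamming taper trades a wider transition band for much lower sidelobes than plain truncation, which is why the 300 Hz tone, well inside the stopband, is suppressed by tens of decibels.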

References

  1. [1]
    [PDF] LTI System and Control Theory
    as a single differential equation. In both cases we have one input and one output, so we refer to them as Single-Input, Single-Output (SISO) systems.
  2. [2]
    [PDF] Understanding Poles and Zeros 1 System Poles and Zeros - MIT
    As defined, the transfer function is a rational function in the ... The unforced response of a linear SISO system to a set of initial conditions ...
  3. [3]
    Control of Processes
    Apr 15, 2022 · Two typical forms of process control systems are single input – single output (SISO) and multiple-input – multiple-output (MIMO). SISO is ...
  4. [4]
    [PDF] Lecture 2: Systems Defined by Differential Equations
    A system with one input and one output is single-input, single-output (SISO). A system with more than one input or more than one output is multi-input multi ...
  5. [5]
    [PDF] Adaptive Control of Single-Input, Single-Output Linear Systems
Abstract: A procedure is presented for designing parameter-adaptive control for a single-input, single-output process admitting an essentially ...
  6. [6]
    [PDF] Introduction to Control Theory And Its Application to Computing ...
    The theory discussed so far addresses linear, time-invariant, deterministic (LTI) systems with a single input (e.g., MaxUsers ) and a single output (e.g., RIS).
  7. [7]
    [PDF] April 03, 2023
    Apr 3, 2023 · This example is called a single-input-single-output "SISO" system because the size of control "u" is 1 x 1 and the output "y" is 1 x 1. In ...
  8. [8]
    Brief History of Feedback Control - F.L. Lewis
    Nyquist [1932]. He derived his Nyquist stability criterion based on the polar plot of a complex function. H.W. Bode in 1938 used the magnitude and phase ...
  9. [9]
    [PDF] Linear Systems I Lecture 2 - Solmaz S. Kia
    Lecture covers basic properties of state-space linear systems, LTV/LTI systems, causality, linearity, time invariance, impulse response, and transfer functions.
  10. [10]
    [PDF] Lecture 7: Introduction to Multivariable Control - Lehigh University
SISO: NP+RS ⇒ RP. MIMO: NP+RS ⇏ RP. RP is not achieved by the decoupling controller. Prof. Eugenio Schuster. ME 450 - System Identification and Robust Control.
  11. [11]
    Comparison of SISO and MIMO Control Techniques for a Diagonally ...
    Aug 7, 2025 · Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor ...
  13. [13]
    [PDF] Introduction to Control Systems II
    Philosophy I: avoid MIMO complexity, try MIMO -> SISO. • Decentralized control: every input signal determined by feedback from one output. Pairing problem ...
  14. [14]
    Differential Equations - Transfer Functions - State Space
    The transfer function is the ratio of the Laplace transform of the output to that of the input, both taken with zero initial conditions. It is formed by taking ...
  15. [15]
    [PDF] Transfer Functions - Graduate Degree in Control + Dynamical Systems
The transfer function can be obtained by inspection or by simple algebraic manipulations of the differential equations that describe the systems. Transfer ...
  16. [16]
    The Inverse Laplace Transform by Partial Fraction Expansion
This technique uses Partial Fraction Expansion to split up a complicated fraction into forms that are in the Laplace Transform table.
  17. [17]
    Mathematical Description of Linear Dynamical Systems
Jul 18, 2006 · Mathematical Description of Linear Dynamical Systems. Author: R. E. Kalman. https://doi.org/10.1137/0301010
  18. [18]
    [PDF] On the General Theory of Control Systems
Kalman's paper is interesting. The conception of controllability and observability is very natural and useful, particularly in conjunction with the ...
  20. [20]
    [PDF] analysis of linear state-space systems - F.L. Lewis
    Aug 6, 2008 · Note that the impulse response is given as the inverse Laplace transform of H(s). To compute the step response r(t), one may simply find.
  21. [21]
    Introduction: System Analysis - Control tutorials
    For second-order underdamped systems, the 1% settling time, $T_s$ , 10-90% rise time, $T_r$ , and percent overshoot, $Mp$ , are related to the damping ratio ...
  22. [22]
    Extras: Steady-State Error - Control Tutorials for MATLAB and Simulink
These constants are the position constant (Kp), the velocity constant (Kv), and the acceleration constant (Ka). Knowing the value of these constants, as well as ...
  23. [23]
    [PDF] Lecture 5: Classical Feedback Control - Lehigh University
    Frequency Response [2.1]:. We use the Frequency Response to describe the response of the system to sinusoids of varying frequency. Prof. Eugenio Schuster. ME ...
  24. [24]
    [PDF] 2.161 Signal Processing: Continuous and Discrete
The Bode sketching method provides an effective means of approximating the frequency response of a complex system by combining the responses of simple first ...
  26. [26]
    [PDF] Linear Feedback Control - Analysis and Design with MATLAB
The variable H can be used in drawing the Nyquist plot and the Nichols chart ... SISO. (single input–single output) feedback control systems. It is developed ...
  28. [28]
    2.3 Stability in s-Domain: The Routh-Hurwitz Criterion of Stability
    Routh-Hurwitz Criterion of Stability: The system is stable if and only if all coefficients in the first column of a complete Routh Array are of the same sign.
  30. [30]
    [PDF] A Simplified Stability Criterion for Linear Discrete Systems - DTIC
    This simplified test for linear discrete systems requires only half the Schur-Cohn determinants, and is applied directly in the z-plane.
  32. [32]
    (PDF) PID Control Theory - ResearchGate
    Dec 23, 2015 · PID is a control system to determine the precision of an instrumentation system with the characteristics of feedback on the system.
  33. [33]
    [PDF] PID Control Theory | Semantic Scholar
Feb 29, 2012 · The PID controller can be understood as a controller that takes the present, the past, and the future of the error into consideration, ...
  34. [34]
    [PDF] Optimum Settings for Automatic Controllers - David Di Ruscio
We have seen that the addition of pre-act response gives both of these improvements.
  35. [35]
    [PDF] Design of Lead-Lag compensators for robust control - Unimore
Abstract: In this paper three different methods for the synthesis of lead-lag compensators that meet design specifications.
  36. [36]
    (PDF) Modeling and Design of Cruise Control System with ...
This paper presents a PID controller with feed-forward control. The cruise control system is one of the most enduringly popular and important models.
  37. [37]
    [PDF] DC motor control position
This paper will focus on the modeling and position control of a DC motor with permanent magnets. We first develop the differential equations and the Laplace ...
  38. [38]
    A method of analysing the behaviour of linear systems in terms of ...
    A simple method of analysing the behaviour of linear systems is outlined, in which time functions are represented by the sequences of numbers giving the ...
  39. [39]
    Linear Filter - an overview | ScienceDirect Topics
Linear filters are standard linear (LTI) systems, but they are called filters because their reason for being is to alter the spectral shape of the signal ...
  40. [40]
    [PDF] Figure 1: The RC and RL lowpass filters
    Any filter whose transfer function contains at most one pole and one zero can be classified as a first-order filter.
  41. [41]
    [PDF] On the Theory of Filter Amplifiers - changpuak.ch
October 1930. Experimental Wireless &. On the Theory of Filter Amplifiers. By S. Butterworth, M.Sc. (Admiralty Research Laboratory). The orthodox ...
  42. [42]
    [PDF] FIR Filter Design by Windowing - MIT OpenCourseWare
    In this lecture, we will look at two FIR filter design techniques: by windowing and through the Parks-McClellan Algorithm. FIR Filter Design by Windowing. In ...
  43. [43]
    Lecture 15: Design of IIR Digital Filters, Part 2 - MIT OpenCourseWare
    Topics covered: Digital filter design using the bilinear transformation, frequency warping introduced by the bilinear transformation, algorithmic design ...
  45. [45]
    [PDF] 1 Convolution - CS@Cornell
    Feb 27, 2013 · Convolution is an important operation in signal and image processing. Convolution op- erates on two signals (in 1D) or two images (in 2D): ...
  46. [46]
    Discrete-Time Signal Processing - MIT OpenCourseWare
    This class addresses the representation, analysis, and design of discrete time signals and systems. The major concepts covered include: