Transfer function

In engineering and systems theory, a transfer function is a function of a complex variable that represents the relationship between the input and output of a linear time-invariant (LTI) system, expressed as the ratio of the Laplace transform of the output to the Laplace transform of the input under zero initial conditions. This representation transforms the differential equations governing the system's behavior into algebraic equations in the Laplace domain, facilitating analysis of frequency response, stability, and transient characteristics. For finite-dimensional linear systems, the transfer function takes the form of a rational function G(s) = \frac{b(s)}{a(s)}, where s is the complex frequency variable, a(s) is the characteristic polynomial whose roots are the system's poles, and b(s) is the numerator polynomial defining the zeros. Transfer functions are derived by applying the Laplace transform to the system's governing equations, such as state-space models where G(s) = C(sI - A)^{-1}B + D, with matrices A, B, C, D describing the system dynamics. Key properties include invariance under state-space coordinate transformations and the interpretation of poles as eigenvalues of the system matrix, which determine stability, while zeros influence the transient response and the steady-state gain given by G(0). In practice, transfer functions enable block-diagram representations for cascading systems and are essential for designing controllers, predicting responses to inputs like step or sinusoidal signals, and analyzing phenomena such as overshoot and resonance in applications ranging from electrical circuits and signal processing to mechanical systems. The concept underscores the advantages of frequency-domain analysis over time-domain methods, as it simplifies convolution operations into multiplications and supports tools like Bode plots for assessing gain and phase margins.
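As a minimal sketch of the state-space relation G(s) = C(sI - A)^{-1}B + D, the following Python fragment (assuming NumPy and SciPy are available; the matrices are hypothetical, chosen only for illustration) converts a two-state model to numerator/denominator polynomials and confirms that the transfer function's poles coincide with the eigenvalues of A:

```python
import numpy as np
from scipy import signal

# Hypothetical two-state system (values are illustrative).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# G(s) = C (sI - A)^{-1} B + D as numerator/denominator polynomials.
num, den = signal.ss2tf(A, B, C, D)
print(num, den)               # den = [1, 3, 2] -> poles at -1 and -2
print(np.linalg.eigvals(A))   # eigenvalues of A match the denominator roots
```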

General Concepts

Definition and Scope

The concept of the transfer function originated in the operational calculus developed by Oliver Heaviside in the late 19th century, which provided methods for solving linear differential equations describing electrical circuits and transmission lines. Heaviside's approach emphasized practical engineering analysis in the transform domain, laying the groundwork for later frequency-domain representations of dynamic systems. The specific term "transfer function" was introduced in 1942 by M. F. Gardner and J. L. Barnes in their seminal work on transients in linear systems, where it was defined using Laplace transforms to model input-output relations.

In general, a transfer function is a mathematical function that describes the relationship between the input and output of a dynamic system in the transform domain. For linear time-invariant (LTI) systems, it is expressed as the ratio of the output's transform to the input's transform; in the Laplace domain, this is H(s) = \frac{Y(s)}{X(s)}, where Y(s) and X(s) are the Laplace transforms of the output y(t) and input x(t), respectively, and s is the complex frequency variable. Equivalently, in the frequency domain for steady-state sinusoidal analysis, it takes the form H(j\omega) = \frac{Y(j\omega)}{X(j\omega)}, where \omega denotes angular frequency and j is the imaginary unit; this representation encapsulates both the magnitude scaling (gain) and the phase shift introduced by the system.

The scope of transfer functions is primarily limited to LTI systems, where the principle of superposition applies, enabling the system's response to be characterized solely by this input-output ratio without dependence on initial conditions or time variations. This assumption facilitates straightforward analysis using transforms like the Laplace or Z-transform for continuous- and discrete-time systems, respectively, assuming familiarity with differential equations and integral transforms. While extensions to nonlinear or time-varying systems have been developed (such as describing functions that approximate nonlinear elements with quasi-linear gains), the classical transfer function remains centered on linearity and time invariance for its analytical tractability.

Mathematical Foundations

The mathematical foundations of the transfer function are rooted in transform-domain analysis, particularly through the Laplace transform, which converts time-domain differential equations into algebraic equations in the s-domain for linear time-invariant (LTI) systems. The unilateral Laplace transform of a function f(t) for t \geq 0 is defined as \mathcal{L}\{f(t)\} = F(s) = \int_{0}^{\infty} f(t) e^{-st} \, dt, where s = \sigma + j\omega is a complex variable whose real part \sigma must exceed the abscissa of convergence for the integral to converge, and j = \sqrt{-1}. This integral representation allows the transfer function H(s) of an LTI system to be expressed as the ratio H(s) = Y(s)/X(s), where Y(s) and X(s) are the Laplace transforms of the output y(t) and input x(t), respectively, under zero initial conditions.

For stable systems, where the region of convergence includes the imaginary axis (\sigma = 0), the Fourier transform relates directly to the Laplace transform by evaluating H(s) along this axis, yielding the frequency response H(j\omega). Specifically, the steady-state frequency response is obtained as H(j\omega) = \mathcal{L}\{h(t)\}(s = j\omega), where h(t) is the impulse response, provided the system is stable and the integral converges. This connection enables analysis of sinusoidal inputs and outputs in the frequency domain, as the Fourier transform of the output is Y(j\omega) = H(j\omega) X(j\omega).

The transfer function H(s) is typically represented in pole-zero form as H(s) = K \frac{\prod_{i=1}^{m} (s - z_i)}{\prod_{k=1}^{n} (s - p_k)}, where K is the gain constant, z_i are the zeros (roots of the numerator), and p_k are the poles (roots of the denominator), assuming n \geq m for proper systems. The locations of poles and zeros in the complex s-plane determine the system's dynamic behavior, such as stability (all poles in the left half-plane for continuous-time systems) and transient response characteristics.

A key property is the uniqueness theorem for LTI systems: the transfer function H(s) uniquely determines the zero-state response to any input under zero initial conditions, as the output is given by the inverse Laplace transform of Y(s) = H(s) X(s). This follows from the fact that the impulse response, and thus the convolution integral for arbitrary inputs, is fully specified by H(s). The transfer function arises from applying the Laplace transform to the system's describing differential equations, facilitating algebraic manipulation without delving into time-domain solutions.
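A minimal sketch of the pole-zero form and its frequency response, assuming SciPy is available; the zero, pole pair, and gain are arbitrary illustrative values:

```python
import numpy as np
from scipy import signal

# Illustrative pole-zero model: one zero at -2, poles at -1 +/- 1j, gain K = 3.
sys = signal.ZerosPolesGain([-2.0], [-1 + 1j, -1 - 1j], 3.0)

# Evaluate H(j*omega) along the imaginary axis; this is valid here because
# both poles lie in the left half-plane, so the system is stable.
w = np.logspace(-2, 2, 500)
w, H = signal.freqresp(sys, w)
print(np.abs(H[0]), np.angle(H[0]))  # low-frequency gain approaches H(0) = 3
```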

Linear Time-Invariant Systems

Continuous-Time Systems

In continuous-time linear time-invariant (LTI) systems, the relationship between the input signal x(t) and the output signal y(t) is typically governed by a linear constant-coefficient differential equation of the form \sum_{k=0}^{N} a_k \frac{d^k y(t)}{dt^k} = \sum_{m=0}^{M} b_m \frac{d^m x(t)}{dt^m}, where the coefficients a_k and b_m are real constants with a_N = 1 and N \geq M. This equation encapsulates the system's dynamics, with higher-order derivatives representing inertial or reactive behavior in physical systems like oscillators or electrical networks.

To derive the transfer function, the unilateral Laplace transform is applied to both sides of the differential equation, assuming zero initial conditions for simplicity. The Laplace transform converts differentiation into multiplication by s, yielding the algebraic relation Y(s) = H(s) X(s), where the transfer function is the rational function H(s) = \frac{\sum_{m=0}^{M} b_m s^m}{\sum_{k=0}^{N} a_k s^k}. This H(s) fully characterizes the system's input-output behavior in the s-domain, with poles and zeros determined by the roots of the denominator and numerator polynomials, respectively. The assumption of zero initial conditions isolates the forced response, which is the primary focus of transfer function analysis.

A representative example arises in circuit analysis with a series RLC circuit, where the input voltage v_{in}(t) drives a resistor R, inductor L, and capacitor C in series, and the output is the voltage v_{out}(t) across the capacitor. The governing differential equation is L C \frac{d^2 v_{out}(t)}{dt^2} + R C \frac{d v_{out}(t)}{dt} + v_{out}(t) = v_{in}(t), leading to the transfer function H(s) = \frac{V_{out}(s)}{V_{in}(s)} = \frac{1}{L C s^2 + R C s + 1}. This second-order form illustrates resonant behavior, with natural frequency \omega_0 = 1/\sqrt{LC} and damping ratio \zeta = R/(2\sqrt{L/C}). Such transfer functions are ubiquitous in analog filter design and circuit analysis.

The time-domain impulse response h(t) of the system is obtained as the inverse Laplace transform of H(s), denoted h(t) = \mathcal{L}^{-1}\{H(s)\}, which represents the output when the input is a unit impulse \delta(t). For any input x(t), the output follows from the convolution integral y(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau, providing a direct link between the s-domain transfer function and time-domain behavior. This convolution property underpins the utility of transfer functions in predicting system responses to arbitrary inputs.

For analysis of sinusoidal steady-state responses, the frequency response is evaluated by substituting s = j\omega into the transfer function, yielding H(j\omega), a complex-valued function that specifies the gain |H(j\omega)| and phase shift \angle H(j\omega) applied to a sinusoid of frequency \omega. This approach simplifies the study of periodic inputs, as the steady-state output is a scaled and phase-shifted version of the input sinusoid. Stability of such systems requires all poles of H(s) to reside in the open left half of the complex s-plane.
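A numerical sketch of the series RLC example, assuming SciPy and purely illustrative component values (not tied to any particular circuit), forming H(s) = 1/(LCs^2 + RCs + 1) and computing its step response:

```python
import numpy as np
from scipy import signal

# Series RLC with the output taken across the capacitor (illustrative values).
R, L, C = 1.0, 1.0, 0.25           # ohms, henries, farads
sys = signal.TransferFunction([1.0], [L * C, R * C, 1.0])

w0 = 1 / np.sqrt(L * C)            # natural frequency: 2 rad/s here
zeta = (R / 2) * np.sqrt(C / L)    # damping ratio: 0.25 (underdamped)

t, y = signal.step(sys)            # step response shows overshoot and ringing
print(w0, zeta, y[-1])             # response settles toward the DC gain H(0) = 1
```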

Discrete-Time Systems

In discrete-time linear time-invariant (LTI) systems, the transfer function is defined in the z-domain using the Z-transform, which provides a frequency-domain representation analogous to the Laplace transform for continuous-time systems. The Z-transform of a discrete-time signal x[n], assumed causal for typical system analysis, is given by X(z) = \sum_{n=0}^{\infty} x[n] z^{-n}, where z is a complex variable, and the expression converges within a region of convergence (ROC) in the z-plane that depends on the signal's properties. This transform enables the algebraic manipulation of difference equations to derive system behavior.

For an LTI system described by a linear constant-coefficient difference equation, such as an autoregressive moving average (ARMA) model, the transfer function H(z) is the ratio of the Z-transform of the output Y(z) to that of the input X(z). Specifically, for the difference equation y[n] = \sum_{k=0}^{M} b_k x[n-k] - \sum_{m=1}^{N} a_m y[n-m], taking the Z-transform yields H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{m=1}^{N} a_m z^{-m}}, with the ROC determined by the poles of the denominator, assuming the system is causal and stable. This rational form characterizes the system's frequency response and facilitates design in digital signal processing.

A simple example is a first-order digital filter with the transfer function H(z) = \frac{1 + z^{-1}}{1 - 0.5 z^{-1}}, which represents a low-pass filter with a zero at z = -1 and a pole at z = 0.5. The numerator introduces moving-average smoothing, while the denominator provides feedback for infinite impulse response behavior.

The unit impulse response h[n] of the discrete-time LTI system is the inverse Z-transform of H(z), denoted h[n] = \mathcal{Z}^{-1}\{H(z)\}. The output y[n] for any input x[n] is then the convolution sum y[n] = \sum_{k=-\infty}^{\infty} h[k] x[n-k], which fully characterizes the system's action in the time domain. Discrete-time transfer functions often arise from sampling continuous-time systems, linking the two domains through discretization methods like the bilinear transform.
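The first-order filter above can be checked numerically; a minimal sketch assuming SciPy, with the coefficient lists b and a following the usual difference-equation convention:

```python
import numpy as np
from scipy import signal

# H(z) = (1 + z^-1) / (1 - 0.5 z^-1), the example from the text.
b = [1.0, 1.0]    # numerator coefficients (zero at z = -1)
a = [1.0, -0.5]   # denominator coefficients (pole at z = 0.5)

# Impulse response h[n]: feed a unit impulse through the difference equation.
impulse = np.zeros(8)
impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)
print(h)           # 1.0, 1.5, 0.75, 0.375, ... geometric decay from the pole

# Frequency response H(e^{jw}): gain 4 at w = 0, falling to 0 at w = pi.
w, H = signal.freqz(b, a, worN=8)
print(np.abs(H[0]))   # ~4.0, the DC gain (1+1)/(1-0.5)
```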

Derivation from Differential Equations

The derivation of a transfer function for linear time-invariant (LTI) systems begins with the underlying assumptions of linearity, time-invariance, and zero initial conditions, which ensure that the system's response depends solely on the input and that the transform-domain representation simplifies to a ratio of polynomials. In the continuous-time domain, the transfer function H(s) is obtained by applying the Laplace transform to the system's differential equation. Consider a general linear constant-coefficient differential equation of order n: a_n \frac{d^n y(t)}{dt^n} + a_{n-1} \frac{d^{n-1} y(t)}{dt^{n-1}} + \cdots + a_0 y(t) = b_m \frac{d^m u(t)}{dt^m} + b_{m-1} \frac{d^{m-1} u(t)}{dt^{m-1}} + \cdots + b_0 u(t), where y(t) is the output, u(t) is the input, and the coefficients a_i and b_j are constants with a_n \neq 0. The Laplace transform of the k-th derivative is \mathcal{L}\left\{\frac{d^k y(t)}{dt^k}\right\} = s^k Y(s) - s^{k-1} y(0) - s^{k-2} y'(0) - \cdots - y^{(k-1)}(0), where Y(s) = \mathcal{L}\{y(t)\}. Assuming zero initial conditions, this simplifies to s^k Y(s). Applying the Laplace transform to both sides of the equation yields \left( a_n s^n + a_{n-1} s^{n-1} + \cdots + a_0 \right) Y(s) = \left( b_m s^m + b_{m-1} s^{m-1} + \cdots + b_0 \right) U(s), where U(s) = \mathcal{L}\{u(t)\}. Thus, the transfer function is the rational function H(s) = \frac{Y(s)}{U(s)} = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_0}.

For a concrete illustration, consider the second-order differential equation \frac{d^2 y(t)}{dt^2} + 2 \frac{dy(t)}{dt} + y(t) = u(t). With zero initial conditions, the Laplace transform gives (s^2 + 2s + 1) Y(s) = U(s), so H(s) = \frac{1}{s^2 + 2s + 1}. This rational form can be factored into pole-zero representation for further analysis.

In the discrete-time domain, a parallel derivation uses the Z-transform for difference equations, again assuming zero initial conditions. The Z-transform of a shifted sequence is given by the shift theorem: \mathcal{Z}\{y[n+1]\} = z Y(z) - z y[0], where Y(z) = \mathcal{Z}\{y[n]\} and z is the complex variable; with y[0] = 0, this becomes z Y(z). For a general linear constant-coefficient difference equation of order n, \sum_{k=0}^n a_k y[n-k] = \sum_{k=0}^m b_k u[n-k], applying the Z-transform term by term yields \left( \sum_{k=0}^n a_k z^{-k} \right) Y(z) = \left( \sum_{k=0}^m b_k z^{-k} \right) U(z), leading to the transfer function H(z) = \frac{Y(z)}{U(z)} = \frac{\sum_{k=0}^m b_k z^{-k}}{\sum_{k=0}^n a_k z^{-k}} = \frac{b_0 + b_1 z^{-1} + \cdots + b_m z^{-m}}{a_0 + a_1 z^{-1} + \cdots + a_n z^{-n}}, expressed in terms of negative powers of z for implementation.
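The algebraic step of the continuous-time derivation can be mirrored symbolically; a small sketch assuming SymPy is installed, where the symbols Y and U stand in for the transforms Y(s) and U(s) of the worked second-order example:

```python
import sympy as sp

s, Y, U = sp.symbols('s Y U')

# y'' + 2y' + y = u with zero initial conditions: each d^k/dt^k becomes s^k,
# so the transformed equation is (s^2 + 2s + 1) Y(s) = U(s).
eq = sp.Eq((s**2 + 2*s + 1) * Y, U)

H = sp.solve(eq, Y)[0] / U   # H(s) = Y(s)/U(s)
print(sp.factor(H))          # (s + 1)**(-2): a repeated pole at s = -1
```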

Key Properties and Analysis

The magnitude of the transfer function |H(j\omega)| quantifies the system's gain or attenuation factor for sinusoidal inputs at frequency \omega, determining how much the output is scaled relative to the input at that frequency. For steady-state analysis, the DC gain H(0) represents the low-frequency or constant-input response, obtained by evaluating the transfer function at s = 0, and equals the ratio of the steady-state output to a constant input for stable systems.

Poles of the transfer function H(s), which are the roots of the denominator polynomial, govern the system's natural modes and transient dynamics; poles in the right-half plane (positive real part) indicate instability, as they lead to exponentially growing responses. For discrete-time LTI systems, asymptotic stability requires all poles of H(z) to lie strictly inside the unit circle |z| < 1 in the z-plane. Zeros, the roots of the numerator polynomial, modify the response by introducing attenuation or phase shifts, and their placement can suppress resonances associated with nearby poles. Stability of continuous-time linear time-invariant (LTI) systems is ensured if all poles lie in the left-half plane, a condition verifiable without computing roots via the Routh-Hurwitz criterion, which constructs a Routh array from the characteristic polynomial coefficients and checks for sign changes in the first column; zero sign changes imply stability.

The transient response, such as to a unit step input, is derived by multiplying the transfer function by 1/s and applying the inverse Laplace transform, typically via partial fraction expansion into terms corresponding to each pole. Each pole contributes a mode to the response, with the time constant \tau = 1/|\Re(p_k)| for pole p_k dictating the decay rate of that exponential component; the dominant poles (those closest to the imaginary axis) primarily shape the settling time.

For closed-loop systems, the Nyquist stability criterion assesses stability by plotting the open-loop transfer function G(j\omega) in the complex plane and counting the net number of clockwise encirclements N of the critical point -1 + j0; the closed-loop system is stable if N = -P, where P is the number of open-loop right-half-plane poles (equivalently, the plot makes P counterclockwise encirclements of -1). This graphical method, rooted in the argument principle, provides insight into relative stability through gain and phase margins without direct pole computation.
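A short sketch of these pole-based checks, assuming NumPy/SciPy and an illustrative third-order denominator:

```python
import numpy as np
from scipy import signal

# Hypothetical third-order system; stability read directly off the poles.
num = [2.0]
den = [1.0, 3.0, 3.0, 2.0]         # s^3 + 3s^2 + 3s + 2 = (s+2)(s^2+s+1)

poles = np.roots(den)
print(poles)                        # all real parts negative -> stable
print(all(p.real < 0 for p in poles))

dc_gain = num[-1] / den[-1]         # H(0) = 2/2 = 1 for this example
t, y = signal.step((num, den))      # transient settles toward the DC gain
print(dc_gain, y[-1])
```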

Applications in Signal Processing

Common Transfer Function Families

In signal processing, several canonical families of transfer functions form the foundation for analog filter design, each tailored to balance trade-offs like passband flatness, roll-off steepness, phase linearity, and computational efficiency. These families (Butterworth, Chebyshev, Bessel, and elliptic) provide prototypical magnitude and phase responses that can be transformed into high-pass, band-pass, or band-stop configurations as needed.

The Butterworth family offers a maximally flat magnitude response in the passband, ideal for applications requiring minimal variation in gain across the desired frequency range. For a low-pass prototype, the squared magnitude response is |H(j\omega)|^2 = \frac{1}{1 + \left( \frac{\omega}{\omega_c} \right)^{2N}}, where N is the filter order and \omega_c is the cutoff frequency at which the gain drops to 1/\sqrt{2} (or -3 dB). This formulation ensures no ripples in the passband, with the roll-off rate increasing with higher N, but at the cost of a relatively gradual transition to the stopband. The poles lie on a circle in the left-half s-plane, ensuring stability. This design originated in the work of Stephen Butterworth, who derived it for amplifier filters with uniform response characteristics.

The Chebyshev family achieves steeper roll-off than Butterworth by permitting equiripple behavior in either the passband or stopband, enabling compact higher-order filters with sharper transitions. Type I Chebyshev filters exhibit ripple in the passband (controlled by the ripple factor \epsilon) and a monotonic stopband, with the magnitude response |H(j\omega)| = \frac{1}{\sqrt{1 + \epsilon^2 T_N^2 \left( \frac{\omega}{\omega_c} \right)}}, where T_N(\cdot) denotes the N-th order Chebyshev polynomial of the first kind. Type II (inverse Chebyshev) filters reverse this, featuring a monotonic passband and equiripple stopband attenuation, often using a similar form but with poles and zeros derived from the reciprocal. Pole placement involves inverse hyperbolic functions, such as \sinh^{-1}(1/\epsilon)/N for scaling the real part and \cosh^{-1}(1/\epsilon)/N for the imaginary components in the s-plane locations. These variants provide flexibility for noise-sensitive or attenuation-critical applications, though the passband ripple in Type I can introduce distortion. The family draws from Chebyshev polynomials and was adapted for electrical filters in mid-20th-century network synthesis.

The Bessel family emphasizes linear phase (constant group delay) across the passband to preserve waveform shape and minimize overshoot or ringing in time-domain signals, making it suitable for pulse or transient applications. The transfer function approximates an all-pole low-pass response using reverse Bessel polynomials \theta_N(s), normalized for delay: H(s) = \frac{\theta_N(0)}{\theta_N \left( s / \omega_0 \right)}, where \omega_0 sets the delay scale, and the polynomials satisfy the recurrence \theta_{n+1}(s) = (2n+1) \theta_n(s) + s^2 \theta_{n-1}(s) with initial conditions \theta_0(s) = 1 and \theta_1(s) = s + 1. This yields a gentle roll-off but excellent phase linearity up to near the cutoff, with poles clustered near the origin for low-order filters. The design ensures maximally flat group delay derivatives at DC. It was formalized by W. E. Thomson for delay networks with maximally flat frequency characteristics.
The elliptic family (also known as Cauer or Zolotarev) delivers the sharpest transition band for a given order by allowing equiripple in both passband and stopband, optimizing selectivity with finite zeros in the stopband for superior attenuation near the cutoff. The magnitude squared response involves the Jacobian elliptic sine function \mathrm{sn}(u, k), where the characteristic function is |H(j\omega)|^2 = \frac{1}{1 + \epsilon^2 \mathrm{sn}^2 \left( N \cdot K(k) \cdot \frac{\omega}{\omega_c}, k \right)} with modulus k related to the selectivity factor, complete elliptic integral K(k), and ripple parameter \epsilon. This results in the most efficient use of order for compact filters, though with potential for higher sensitivity to component tolerances. Poles and zeros are derived from elliptic integrals, placing zeros on the jω-axis for stopband notches. The approach stems from elliptic function theory applied to network synthesis by Wilhelm Cauer.
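For comparison across the four families, analog low-pass prototypes can be generated directly; a sketch assuming SciPy's filter-design routines, with illustrative order, cutoff, and ripple/attenuation figures (1 dB passband ripple, 40 dB stopband attenuation):

```python
import numpy as np
from scipy import signal

# Order-4 analog low-pass prototypes, cutoff 1 rad/s, one per family.
filters = {
    'butterworth': signal.butter(4, 1.0, analog=True, output='ba'),
    'chebyshev-I': signal.cheby1(4, 1.0, 1.0, analog=True, output='ba'),
    'elliptic':    signal.ellip(4, 1.0, 40.0, 1.0, analog=True, output='ba'),
    'bessel':      signal.bessel(4, 1.0, analog=True, output='ba'),
}

w = np.logspace(-1, 1, 200)
for name, (b, a) in filters.items():
    w_out, H = signal.freqs(b, a, worN=w)
    # Attenuation at twice the cutoff illustrates the roll-off trade-off
    # (the Bessel prototype uses a phase normalization, so compare loosely).
    idx = np.argmin(np.abs(w_out - 2.0))
    print(f"{name:12s} |H(2j)| = {20 * np.log10(np.abs(H[idx])):.1f} dB")
```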

Filter Design and Realization

Filter design begins with specifying the desired frequency response characteristics, including the passband, where signal amplitudes are preserved within acceptable ripple limits, and the stopband, where attenuation exceeds a specified level. These specifications define the cutoff frequencies, transition bandwidth, and attenuation requirements, guiding the selection of filter order and type to meet performance criteria. Analog prototypes, such as those from the established filter families, serve as the foundation for designing the transfer function H(s).

For digital implementation, the bilinear transform is commonly applied to convert the continuous-time transfer function to a discrete-time equivalent, preserving stability and avoiding aliasing through frequency warping. The transformation substitutes s = \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}, where T is the sampling period, yielding the digital transfer function H(z). This method maps the entire j\omega-axis to the unit circle in the z-plane, ensuring a one-to-one correspondence for the frequency response.

Realization of analog filters often employs topologies like the Sallen-Key circuit for second-order sections, which uses resistors, capacitors, and an operational amplifier to implement the transfer function with low component count and simplicity. In the unity-gain configuration, the transfer function for a low-pass filter is H(s) = \frac{1}{1 + s(R_1 C_1 + R_1 C_2 + R_2 C_2) + s^2 R_1 R_2 C_1 C_2}, where component values are chosen to match the desired poles. For digital filters, infinite impulse response (IIR) designs derived from analog prototypes are realized using recursive difference equations, such as y[n] = \sum_{k=0}^{M} b_k x[n-k] - \sum_{k=1}^{N} a_k y[n-k], while finite impulse response (FIR) filters use non-recursive forms for linear phase properties.

Performance in filter realization is evaluated through metrics like group delay, defined as \tau_g(\omega) = -\frac{d}{d\omega} \arg[H(e^{j\omega})], which measures the frequency-dependent time delay of signal envelopes, and phase distortion, arising from non-linear phase responses that can alter waveform shapes. These effects are inherent to the chosen topology; for instance, IIR filters may exhibit variable group delay in the passband, necessitating equalization in applications like audio processing, whereas FIR realizations can achieve constant group delay at the cost of higher computational demands.
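A minimal sketch of the analog-to-digital step and a group-delay check, assuming SciPy; the sampling rate and cutoff are illustrative:

```python
import numpy as np
from scipy import signal

fs = 1000.0                          # sampling rate in Hz (illustrative)

# Analog second-order Butterworth prototype with a 100 Hz cutoff.
b_s, a_s = signal.butter(2, 2 * np.pi * 100.0, analog=True)

# Bilinear transform s = (2/T)(1 - z^-1)/(1 + z^-1) maps H(s) to H(z).
b_z, a_z = signal.bilinear(b_s, a_s, fs=fs)

# Group delay tau_g = -d(arg H)/d(omega) of the resulting IIR filter.
w, gd = signal.group_delay((b_z, a_z), fs=fs)
print(b_z, a_z)
print(gd[:5])                        # delay in samples at the lowest frequencies
```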

Applications in Control Engineering

Frequency Response and Bode Plots

In control engineering, the frequency response of a linear time-invariant system is characterized by substituting s = j\omega into the transfer function H(s), resulting in H(j\omega), where \omega is the angular frequency in radians per second. The magnitude |H(j\omega)| represents the amplification or attenuation of the input sinusoid, while the phase \angle H(j\omega) indicates the shift in the output waveform relative to the input. This analysis reveals how the system behaves under steady-state sinusoidal excitation, aiding in the design of feedback controllers.

Bode plots offer a convenient graphical tool for visualizing the frequency response, consisting of two curves: the magnitude plot, which graphs 20 \log_{10} |H(j\omega)| in decibels (dB) against \log_{10} \omega on a log-log scale, and the phase plot, which graphs \angle H(j\omega) in degrees against \log_{10} \omega on a semi-log scale. These logarithmic scales compress the frequency range, making it easier to identify dominant dynamics over decades of frequency variation. The plots were introduced by Hendrik Bode to simplify the analysis of feedback amplifiers and have become standard in control system design.

To construct Bode plots, asymptotic approximations are used for hand-sketching, leveraging the transfer function's pole-zero structure. For a factor (j\omega / z + 1) from a zero at -z, the magnitude asymptote is flat (0 dB/decade) below the corner frequency \omega = z and rises at +20 dB/decade above it, with the phase transitioning from 0° to +90°. Conversely, a pole at -p contributes a flat asymptote below \omega = p, a -20 dB/decade slope above, and a phase shift from 0° to -90°. The overall plot sums these contributions, with corner frequencies marking transitions; for complex conjugate poles or zeros, the slope changes by ±40 dB/decade and the phase by ±180°. Near corners, a 3 dB correction refines the magnitude, but asymptotes suffice for initial assessments.

Consider the open-loop transfer function H(s) = \frac{K}{s(s+1)}, a common example in position control systems with an integrator and a pole at -1. Assuming K = 1 for simplicity, the low-frequency magnitude asymptote falls at -20 dB/decade because of the pole at s = 0 (the integrator), crossing 0 dB at \omega = 1 rad/s, then steepening to -40 dB/decade beyond the corner at \omega = 1 rad/s. The phase starts at -90° from the integrator, decreasing toward -180° beyond \omega = 1, illustrating high-frequency roll-off that attenuates disturbances. This -40 dB/decade slope highlights the system's attenuation of high-frequency noise, a key consideration in controller tuning.

Bode plots also enable stability assessment through gain and phase margins in feedback systems. The gain crossover frequency \omega_{gc} is where |H(j\omega_{gc})| = 1 (0 dB), and the phase margin is defined as 180^\circ + \angle H(j\omega_{gc}), measuring how much additional phase lag can be tolerated before the Nyquist encirclement causes instability; margins exceeding 45°-60° typically ensure good relative stability. The phase crossover frequency \omega_{pc} is where \angle H(j\omega_{pc}) = -180^\circ, and the gain margin is 20 \log_{10} (1 / |H(j\omega_{pc})|) in dB, indicating the gain increase permissible before instability; values above 6-10 dB are desirable. These metrics, derived directly from the plots, guide compensator design to meet performance specifications without root locus analysis.
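The example loop H(s) = 1/(s(s+1)) can be evaluated numerically; a sketch assuming SciPy, estimating the gain crossover and phase margin directly from the computed Bode data:

```python
import numpy as np
from scipy import signal

# Open-loop example from the text: H(s) = 1/(s(s+1)) with K = 1.
sys = signal.TransferFunction([1.0], [1.0, 1.0, 0.0])

w = np.logspace(-2, 2, 1000)
w, mag_db, phase_deg = signal.bode(sys, w)

# Gain crossover: first frequency where |H| drops below 0 dB.
i = np.argmax(mag_db < 0.0)
print(f"gain crossover ~ {w[i]:.2f} rad/s")          # ~0.79 rad/s
print(f"phase margin  ~ {180.0 + phase_deg[i]:.1f} deg")  # ~51.8 deg
```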

Stability and Transient Behavior

In control engineering, the transfer function plays a central role in analyzing closed-loop systems, where feedback is incorporated to improve performance. For a unity feedback configuration, the closed-loop transfer function is H_{cl}(s) = \frac{G(s)}{1 + G(s)}, where G(s) is the open-loop transfer function of the forward path. This formulation arises from the standard block diagram reduction, ensuring that the output relates to the input through the feedback loop dynamics.

The transient behavior of such systems is largely determined by the locations of the closed-loop poles in the s-plane, particularly for second-order approximations. The damping ratio \zeta, defined as \zeta = -\frac{\operatorname{Re}(p)}{|p|} for a complex conjugate pole pair p, governs the oscillatory nature and overshoot of the step response; higher \zeta reduces percentage overshoot, with values between 0.4 and 0.8 typically yielding acceptable responses without excessive ringing. Settling time, the duration for the response to stay within 2% of the final value, approximates t_s \approx \frac{4}{\sigma}, where \sigma = -\operatorname{Re}(p) is the magnitude of the pole's real part, highlighting how pole placement influences response speed.

To assess how closed-loop poles vary with system parameters like gain, the root locus method provides a graphical tool. Developed by Walter R. Evans, it plots the trajectories of the roots of the characteristic equation 1 + K G(s) H(s) = 0 as the gain K increases from 0 to \infty. Sketching rules include starting loci at open-loop poles and ending at zeros (or infinity), with segments on the real axis to the left of an odd number of poles plus zeros; for a system with two poles and no finite zeros, asymptotes emanate at angles of ±90° relative to the real axis, aiding quick visualization of stability boundaries.

For parameter-independent stability checks without computing roots, the Routh-Hurwitz criterion constructs an array from the characteristic polynomial coefficients to determine whether all poles lie in the left-half s-plane. For a polynomial a_n s^n + a_{n-1} s^{n-1} + \cdots + a_0 = 0, the array's first row is [a_n, a_{n-2}, \dots] and its second [a_{n-1}, a_{n-3}, \dots], with subsequent rows computed from 2×2 determinants; for example, the third row's first element is b_1 = -\frac{1}{a_{n-1}} \begin{vmatrix} a_n & a_{n-2} \\ a_{n-1} & a_{n-3} \end{vmatrix}. Stability requires no sign changes in the first column and no zero rows (handled by auxiliary polynomials). This tabular method, originating in the work of Edward Routh and Adolf Hurwitz, efficiently identifies unstable roots, the number of sign changes equaling the number of right-half-plane poles. Stability margins, such as gain and phase margins derived from frequency responses, complement these time-domain analyses by quantifying proximity to instability.
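A compact sketch of the Routh array's first column, handling only the regular case (it does not treat a zero first-column entry or an all-zero row, which require epsilon substitution or auxiliary polynomials); the cubic tested is illustrative:

```python
def routh_first_column(coeffs):
    """First column of the Routh array for descending polynomial coefficients.
    A minimal sketch for the regular case only."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    n = len(coeffs) - 1                      # polynomial order
    for _ in range(n - 1):
        top = rows[-2] + [0.0, 0.0]          # pad so indexing stays in range
        bot = rows[-1] + [0.0, 0.0]
        # Each new entry is a 2x2 determinant scaled by the pivot bot[0].
        rows.append([(bot[0] * top[j + 1] - top[0] * bot[j + 1]) / bot[0]
                     for j in range(len(rows[-2]))])
    return [r[0] for r in rows]

col = routh_first_column([1.0, 3.0, 3.0, 2.0])   # s^3 + 3s^2 + 3s + 2
print(col)                         # [1.0, 3.0, 2.333..., 2.0]
print(all(c > 0 for c in col))     # True -> no sign changes -> stable
```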

Applications in Imaging

Optical and Spatial Transfer Functions

In optical imaging systems, the optical transfer function (OTF) characterizes the system's ability to transfer spatial information from the object to the image plane, particularly in the spatial frequency domain. It is defined as the Fourier transform of the point spread function (PSF), which describes the blurring effect of the optical system on a point source. Mathematically, for a one-dimensional case, the OTF is given by \text{OTF}(f) = \int_{-\infty}^{\infty} \text{PSF}(x) e^{-j 2\pi f x} \, dx, where f is the spatial frequency and x is the spatial coordinate. This formulation arises from the linear shift-invariant nature of incoherent optical imaging, where the image intensity is the convolution of the object intensity with the PSF, transforming to a multiplication in the frequency domain by the OTF.

The modulation transfer function (MTF), a key component of the OTF, is the magnitude of the OTF, \text{MTF}(f) = |\text{OTF}(f)|, and quantifies the reduction in contrast (or modulation) of sinusoidal patterns at spatial frequency f as they pass through the system. It ranges from 1 at zero frequency (perfect low-frequency transfer) to 0 at the cutoff frequency, providing a measure of resolution and image quality.

For a diffraction-limited lens with a circular aperture, the MTF exhibits a characteristic shape determined solely by wave optics. The normalized MTF for spatial frequency f up to the cutoff f_c = D / (\lambda f_l) (where D is the aperture diameter, \lambda is the wavelength, and f_l is the focal length) is \text{MTF}(f) = \frac{2}{\pi} \left[ \arccos\left(\frac{f}{f_c}\right) - \frac{f}{f_c} \sqrt{1 - \left(\frac{f}{f_c}\right)^2} \right]. This function decreases monotonically from 1 at f = 0 to 0 at f = f_c, reflecting the fundamental limit imposed by diffraction on contrast transfer.

Optical aberrations, such as defocus, degrade the OTF by introducing phase errors in the pupil function, which alter the autocorrelation that yields the OTF. Defocus specifically shifts the locations of zeros in the OTF, creating regions of zero modulation transfer at lower frequencies than in the diffraction-limited case and reducing overall image contrast. For instance, increasing defocus broadens the PSF and induces phase shifts that manifest as additional zeros in the MTF, limiting the effective bandwidth of the system.
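The diffraction-limited MTF formula can be evaluated directly; a sketch assuming NumPy, with illustrative aperture, wavelength, and focal-length values:

```python
import numpy as np

# Diffraction-limited MTF of a circular aperture under incoherent illumination.
# Illustrative values: D = 25 mm aperture, 550 nm wavelength, 100 mm focal length.
D, lam, fl = 25e-3, 550e-9, 100e-3
f_c = D / (lam * fl)                      # cutoff spatial frequency, cycles/m

f = np.linspace(0.0, f_c, 6)
x = f / f_c
mtf = (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))
print(f_c / 1e3)                          # cutoff in cycles/mm: ~454.5
print(mtf)                                # falls monotonically from 1 to 0
```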

Digital Imaging and Convolution

In digital imaging, transfer functions are employed to model linear filtering operations in the discrete domain, facilitating processing such as blurring, sharpening, and noise reduction. The core operation is 2D convolution, where the output image y[m,n] is computed as y[m,n] = \sum_k \sum_l h[k,l] \, x[m-k, n-l], with x[m,n] as the input image and h[k,l] as the filter kernel or impulse response. This spatial-domain process corresponds to multiplication in the frequency domain via the 2D discrete Fourier transform (DFT): Y(u,v) = H(u,v) \, X(u,v), where H(u,v) is the discrete transfer function representing the filter's frequency response, and u, v denote spatial frequencies. The DFT enables efficient implementation of convolution for large images by avoiding direct summation, which is computationally intensive for kernels larger than small neighborhoods.

A representative example is the transfer function for a blur kernel, such as uniform motion blur along one axis, where the spatial kernel approximates a rectangular function, yielding H(u,v) as a sinc function in the frequency domain: H(u,v) = \mathrm{sinc}(u \cdot L), with L as the blur length in pixels. This attenuates high frequencies, simulating degradation from camera shake or defocus. In contrast, an ideal low-pass filter for smoothing has H(u,v) = \mathrm{rect}(u/W) for some cutoff width W, passing low frequencies while blocking higher ones to reduce noise, though its spatial kernel is a sinc function prone to ringing artifacts. These forms highlight how transfer functions quantify blurring as frequency attenuation, often discretized from continuous optical models like the optical transfer function.

Deconvolution reverses such degradation by estimating the original image from the blurred output. Basic inverse filtering divides the spectra in the frequency domain: the restored spectrum is \hat{X}(u,v) = Y(u,v) / H(u,v), followed by an inverse DFT to obtain the spatial image. However, noise amplification occurs where |H(u,v)| is small (high frequencies), rendering direct inversion unstable. To address this, the Wiener filter introduces regularization, yielding \hat{X}(u,v) = \left[ \frac{H^*(u,v)}{|H(u,v)|^2 + K} \right] Y(u,v), where H^* is the complex conjugate and K is a noise-to-signal ratio parameter that suppresses noise while preserving edges; this minimizes the mean-square error between the estimate and the true image. The approach, extended to 2D imaging from Wiener's original time-series work, balances fidelity and stability in restoration tasks like microscopy or astronomy.

Applications include edge enhancement, where high-pass transfer functions emphasize discontinuities by attenuating low frequencies. A simple high-pass filter is H_{HP}(u,v) = 1 - H_{LP}(u,v), subtracting a low-pass response from unity to boost mid-to-high frequencies, thereby sharpening boundaries without altering overall contrast. This technique is widely used in medical imaging for delineating structures like vessel edges, as it amplifies subtle gradients while the linear model assumes shift-invariance.
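A one-dimensional toy sketch of Wiener deconvolution (2D images follow the same pattern with 2D FFTs), assuming NumPy; the blur length, noise level, and regularization K are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": blur with a length-5 uniform (motion-blur) kernel via FFT.
x = np.zeros(64); x[20:28] = 1.0            # simple box object
h = np.zeros(64); h[:5] = 1.0 / 5.0         # blur kernel, zero-padded
H = np.fft.fft(h)
y = np.real(np.fft.ifft(H * np.fft.fft(x))) # blurred signal (circular conv.)
y += 0.01 * rng.standard_normal(64)         # additive noise

# Wiener deconvolution: conj(H) / (|H|^2 + K), with K a noise-to-signal knob.
K = 1e-2
G = np.conj(H) / (np.abs(H)**2 + K)
x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))
print(np.max(np.abs(x_hat - x)))  # error stays bounded; naive division by H
                                  # would amplify noise where |H| is small
```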

Extensions to Non-Linear Systems

Volterra and Wiener Series

The Volterra series provides a mathematical framework for representing the input-output behavior of nonlinear dynamical systems, extending the concept of linear convolution to higher-order interactions. Introduced by Vito Volterra in the context of functional analysis, it expresses the output y(t) as an infinite sum of multidimensional convolutions involving symmetric kernel functions h_n(\tau_1, \dots, \tau_n), which capture the n-th order nonlinear effects. The general form of the Volterra series is y(t) = \sum_{n=1}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} h_n(\tau_1, \dots, \tau_n) \prod_{i=1}^n x(t - \tau_i) \, d\tau_1 \cdots d\tau_n, where x(t) is the input signal and the kernels h_n are time-invariant for stationary systems. These kernels generalize the impulse response of linear systems, with higher-order terms accounting for phenomena like harmonic generation and intermodulation that linear models cannot capture.

When truncated to the first-order term, the Volterra series reduces to the familiar linear convolution, where the first-order kernel h_1(\tau) corresponds to the impulse response h(t), and its Laplace transform yields the linear transfer function H(s) = \mathcal{L}\{h_1(t)\}. This connection highlights the Volterra series as a natural extension of linear transfer function theory to nonlinear regimes.

The Wiener series builds upon the Volterra framework by incorporating orthogonalization to improve identifiability and computational efficiency, particularly for systems with Gaussian inputs. Developed by Norbert Wiener, it decomposes the nonlinear transformation into a cascade of linear dynamic blocks interspersed with memoryless nonlinearities represented by orthogonal polynomials, such as Hermite polynomials. This structure, often termed G-functionals, ensures that the basis functions are mutually orthogonal under white Gaussian noise excitation, facilitating kernel estimation through cross-correlation techniques.

A practical application arises in modeling amplifier distortion, where a quadratic nonlinearity can be approximated using second-order Volterra kernels to quantify intermodulation products in RF systems. For instance, in a bipolar transistor amplifier, the second-order kernel captures even-order distortions like second-harmonic generation, enabling prediction of output spectra under multitone inputs.
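A truncated, discrete second-order Volterra model can be evaluated directly; a sketch assuming NumPy, with made-up kernels chosen only to exhibit quadratic distortion:

```python
import numpy as np

# y[n] = sum_k h1[k] x[n-k] + sum_k sum_l h2[k,l] x[n-k] x[n-l]
h1 = np.array([1.0, 0.5, 0.25])            # first-order (linear) kernel
h2 = 0.1 * np.outer(h1, h1)                # separable second-order kernel

def volterra2(x, h1, h2):
    M = len(h1)
    xp = np.concatenate([np.zeros(M - 1), x])   # zero-pad for causal memory
    y = np.zeros(len(x))
    for n in range(len(x)):
        window = xp[n:n + M][::-1]              # x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ window + window @ h2 @ window
    return y

x = np.cos(2 * np.pi * 0.05 * np.arange(64))
y = volterra2(x, h1, h2)
# The quadratic term generates a second harmonic and a DC offset, effects
# that no linear transfer-function model can reproduce.
print(y[:4])
```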

Describing Functions and Approximations

The describing function method serves as a quasi-linear approximation technique for examining nonlinear systems under sinusoidal excitation, enabling the prediction of phenomena like limit cycles in feedback configurations. Developed in the mid-20th century, it gained prominence through foundational analyses of nonlinear feedback control systems. By replacing the nonlinearity with an amplitude-dependent gain, the method extends linear frequency-domain tools to nonlinear settings, assuming the system's linear portion sufficiently attenuates higher harmonics.

For a memoryless nonlinearity \phi(e) with sinusoidal input e(t) = A \sin(\omega t), the output y(t) = \phi(e(t)) contains a fundamental harmonic component whose complex gain relative to the input defines the describing function N(A), obtained by retaining only the fundamental term of the output's Fourier series and discarding the higher-order harmonics. This gain N(A) is typically real and amplitude-dependent for static nonlinearities, independent of \omega. The approximation treats the nonlinear system as linearly equivalent, with loop transfer function N(A) G(s), where G(s) represents the linear dynamics.

Consider the saturation nonlinearity, a common actuator limitation where the output follows the input with unit slope up to levels \pm 1 and remains constant thereafter. For input amplitude A \leq 1, N(A) = 1. For A > 1, N(A) = \frac{2}{\pi} \left[ \arcsin\left(\frac{1}{A}\right) + \frac{1}{A} \sqrt{1 - (1/A)^2} \right]. This expression arises from integrating the first sine coefficient over one period, capturing the clipped waveform's fundamental response; as A increases, N(A) decreases toward zero, reflecting the effective gain reduction due to saturation.

Stability and limit cycle prediction employ a Nyquist-like criterion: plot the frequency response G(j\omega) and the locus -1/N(A) as A varies from small to large values. Intersections indicate potential limit cycles at the amplitude A and frequency \omega satisfying 1 + N(A) G(j\omega) = 0. The stability of a predicted limit cycle can be judged by perturbing the amplitude: if increasing A moves the -1/N(A) point out of the region encircled by G(j\omega), the oscillation tends to be stable; otherwise, unstable. This graphical method highlights how nonlinearity can induce sustained oscillations absent in linear approximations.

Despite its utility, the describing function method assumes a pure sinusoidal input to the nonlinearity and discards higher harmonics, rendering it approximate and most accurate for systems where the linear part functions as a low-pass filter that suppresses distortions. It may fail for multi-variable inputs or strong harmonic interactions, contrasting with more precise but computationally intensive approaches like Volterra series expansions.
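The saturation describing function can be tabulated numerically; a sketch assuming NumPy, implementing the expression above:

```python
import numpy as np

def N_saturation(A):
    """Describing function of unit saturation for input amplitude A (a sketch)."""
    A = np.asarray(A, dtype=float)
    inv = np.clip(1.0 / A, 0.0, 1.0)   # 1/A, capped at 1 for A <= 1
    N = (2 / np.pi) * (np.arcsin(inv) + inv * np.sqrt(1 - inv**2))
    return np.where(A <= 1.0, 1.0, N)  # unit gain below the saturation level

A = np.array([0.5, 1.0, 2.0, 5.0, 20.0])
print(N_saturation(A))   # 1.0 at small amplitudes, decaying toward 0 as A grows

# Limit-cycle prediction then intersects -1/N(A) with G(j*w); since N is real
# here, candidates lie where the Nyquist plot of G crosses the negative real axis.
```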
