Minimum phase
In signal processing and control theory, a minimum-phase system is defined as a linear time-invariant (LTI) system that is causal and stable, and whose inverse is also causal and stable.[1] This property distinguishes minimum-phase systems from other LTI systems with the same magnitude frequency response: they exhibit the smallest possible phase lag (or group delay) among all causal, stable systems sharing that magnitude response.[2] For discrete-time systems, the minimum-phase condition is equivalent to all poles and zeros of the system's transfer function H(z) lying strictly inside the unit circle in the z-plane, ensuring that both the system and its inverse 1/H(z) are causal and stable.[1] In continuous-time systems, the analogous condition requires all poles and zeros to reside in the open left half of the s-plane.[3]

Minimum-phase systems also feature impulse responses that are "energy-concentrated" toward the beginning, meaning a larger fraction of the total energy accumulates earlier in time compared to non-minimum-phase counterparts with identical magnitude responses.[2]

The significance of minimum-phase systems lies in their practical advantages, particularly the existence of a stable inverse, which enables applications such as signal equalization, deconvolution, and inverse filtering to recover distorted inputs without instability.[4] In control systems, they offer improved stability margins and faster transient responses, making them preferable for high-gain feedback designs over non-minimum-phase systems that introduce additional phase lag.[5] Additionally, spectral factorization techniques can uniquely recover a minimum-phase transfer function from its magnitude-squared response alone, a property exploited in filter design and system identification.[1]

Fundamentals
Definition via Inverse System
A minimum-phase system is characterized by the property that both the system and its inverse are causal and stable. This definition applies to linear time-invariant (LTI) systems in both continuous and discrete domains, ensuring that the inverse filter can recover the input signal without introducing instability or non-causality. In contrast, non-minimum-phase systems possess zeros outside the region of stability (the right half-plane in continuous time, or outside the unit circle in discrete time), rendering their inverses either unstable or realizable only non-causally.[2][6]

In discrete-time systems, the z-transform provides the framework for deriving this condition. Consider a rational transfer function H(z) = \frac{B(z)}{A(z)}, where the roots of B(z) and A(z) are the zeros and poles, respectively. For H(z) to be causal and stable, all poles (roots of A(z)) must lie inside the unit circle in the z-plane, i.e., |p_k| < 1 for each pole p_k. The inverse system is G(z) = 1/H(z) = \frac{A(z)}{B(z)}, so the poles of G(z) are the zeros of H(z) (roots of B(z)). For G(z) to be causal and stable, these zeros must also satisfy |z_m| < 1 for each zero z_m: the causal region of convergence of G(z) is then the exterior of the outermost pole, which includes the unit circle, so stability is maintained. Thus, a minimum-phase H(z) requires all zeros inside the unit circle, guaranteeing a causal and stable inverse.[6]

For continuous-time systems, the Laplace transform yields an analogous derivation. The transfer function is H(s) = \frac{B(s)}{A(s)}, with causality and stability requiring all poles in the open left half-plane (LHP), i.e., \operatorname{Re}(p_k) < 0. The inverse G(s) = 1/H(s) = \frac{A(s)}{B(s)} has poles at the zeros of H(s). Stability and causality of G(s) demand that these zeros also lie in the open LHP, \operatorname{Re}(z_m) < 0, so that the region of convergence is a right half-plane containing the imaginary axis.
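In the discrete-time case, the condition reduces to a root check on the numerator and denominator polynomials. A minimal sketch of such a check, assuming NumPy is available and using illustrative coefficient values not drawn from the text:

```python
import numpy as np

def is_minimum_phase_discrete(b, a):
    """Check the discrete-time minimum-phase condition for
    H(z) = B(z)/A(z), where b and a list coefficients of
    z^0, z^-1, z^-2, ...  Both the zeros (roots of B) and the
    poles (roots of A) must lie strictly inside the unit circle."""
    zeros = np.roots(b)   # roots of the numerator polynomial
    poles = np.roots(a)   # roots of the denominator polynomial
    return bool(np.all(np.abs(zeros) < 1) and np.all(np.abs(poles) < 1))

# Illustrative example: H(z) = (1 - 0.5 z^-1) / (1 - 0.25 z^-1),
# with a zero at z = 0.5 and a pole at z = 0.25.
b, a = [1.0, -0.5], [1.0, -0.25]
print(is_minimum_phase_discrete(b, a))   # True

# The inverse 1/H(z) simply swaps numerator and denominator, so the
# same check applied to (a, b) confirms the inverse is also
# causal and stable.
print(is_minimum_phase_discrete(a, b))   # True
```

Because inversion only exchanges B(z) and A(z), running the same test with the coefficient lists swapped verifies the inverse system directly.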
Therefore, minimum-phase systems in continuous time have all zeros in the LHP, ensuring the inverse filter 1/H(s) is both causal and stable.[7]

The concept of minimum phase originated in the context of filter design and control theory during the mid-20th century, with H. W. Bode formalizing it in his seminal work on feedback amplifiers. Bode introduced the term to describe systems amenable to stable equalization in network analysis, emphasizing the implications for phase minimization in stable designs.[8]

Discrete-Time Example
A simple example of a minimum-phase system in discrete time is the first-order finite impulse response (FIR) filter with transfer function H(z) = 1 + 0.5 z^{-1}.[9] This system has no poles, as it is FIR, and a single zero found by solving 1 + 0.5 z^{-1} = 0, which yields z = -0.5. Since the magnitude of this zero is |z| = 0.5 < 1, it lies inside the unit circle, confirming that all zeros are inside the unit circle and thus the system is minimum phase.[9][10] As per the definition via the inverse system, the reciprocal G(z) = 1 / H(z) must be stable and causal. Here, G(z) = 1 / (1 + 0.5 z^{-1}) = z / (z + 0.5), which has a pole at z = -0.5 inside the unit circle, ensuring stability for a causal realization.[9][10]

To illustrate the distinction, consider a non-minimum-phase counterpart obtained by flipping the zero outside the unit circle: H'(z) = 1 + 2 z^{-1}. The zero is at z = -2, with |z| = 2 > 1. The inverse G'(z) = 1 / H'(z) = z / (z + 2) has a pole at z = -2 outside the unit circle, rendering it unstable for causal implementation.[9]

The impulse response of the minimum-phase system, h[n] = \delta[n] + 0.5 \delta[n-1], concentrates energy early (at n = 0 and n = 1), with values [1, 0.5]. In contrast, the non-minimum-phase version has h'[n] = \delta[n] + 2 \delta[n-1], with values [1, 2], shifting more energy later despite the finite duration. This front-loading in minimum-phase systems arises from the inward zero placement.[9]

Numerical evaluation of the frequency response for H(z) gives a magnitude |H(e^{j\omega})| = |1 + 0.5 e^{-j\omega}|, which peaks at 1.5 for \omega = 0 and dips to 0.5 at \omega = \pi. The phase \arg(H(e^{j\omega})) starts at 0, is about -0.4636 radians at \omega = \pi/2, and returns to 0 at \omega = \pi, demonstrating the small phase lag characteristic of minimum phase. For the non-minimum-phase H'(z), the magnitude is larger overall (peaking at 3 at \omega = 0), and the phase lag is much greater: about -1.107 radians at \omega = \pi/2, with the unwrapped phase reaching -\pi by \omega = \pi.[9]

System Properties
Causality Requirement
In linear time-invariant (LTI) systems, causality is defined as the property where the output at any given time depends solely on the current input and past inputs, with no dependence on future inputs.[11] This ensures that the system's impulse response h(t) (or h[n] in discrete time) is zero for negative time arguments, i.e., h(t) = 0 for t < 0 or h[n] = 0 for n < 0.[12]

For minimum-phase systems, the placement of zeros plays a critical role in guaranteeing a causal inverse. In discrete-time systems, all zeros lie inside the unit circle (|z| < 1), while in continuous-time systems, all zeros are in the left-half s-plane (\operatorname{Re}(s) < 0). This configuration ensures that the inverse transfer function 1/H(z) or 1/H(s) has its poles (the original system's zeros) positioned such that a causal region of convergence (ROC) can be selected without introducing non-causal components in the time-domain response.[2] Specifically, the causal ROC for the inverse is the exterior of the outermost pole, and since these poles are inside the unit circle (discrete) or in the left-half plane (continuous), the ROC includes the unit circle (or imaginary axis), maintaining both causality and stability.[13]

A mathematical demonstration of this causality in the inverse uses partial fraction expansion of the z-transform for discrete-time systems. Consider a rational transfer function H(z) = \frac{\prod_k (z - z_k)}{\prod_m (z - p_m)} with all |z_k| < 1 and |p_m| < 1. The inverse is H^{-1}(z) = \frac{\prod_m (z - p_m)}{\prod_k (z - z_k)}, with poles at the z_k. Assuming a proper expansion with distinct z_k, the partial fraction form is H^{-1}(z) = \sum_k \frac{A_k}{1 - z_k z^{-1}} for the causal ROC |z| > \max_k |z_k|, which includes the unit circle since every |z_k| < 1; the residues are A_k = \prod_m (z_k - p_m) / \prod_{j \neq k} (z_k - z_j).
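Such an expansion produces causal, geometrically decaying terms, which can be observed by computing the inverse system's impulse response directly. A minimal sketch, assuming NumPy and using the illustrative minimum-phase system H(z) = 1 - 0.5 z^{-1} (not taken from the text), whose inverse is 1/(1 - 0.5 z^{-1}):

```python
import numpy as np

def impulse_response(b, a, n_samples):
    """Impulse response of H(z) = B(z)/A(z) (coefficients of z^0, z^-1, ...)
    obtained by iterating the difference equation
    a[0] y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]."""
    x = np.zeros(n_samples)
    x[0] = 1.0                      # unit impulse
    y = np.zeros(n_samples)
    for n in range(n_samples):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# Inverse of the minimum-phase H(z) = 1 - 0.5 z^-1 is 1/(1 - 0.5 z^-1):
# a single causal geometric term, h_inv[n] = 0.5^n u[n].
h_inv = impulse_response(b=[1.0], a=[1.0, -0.5], n_samples=8)
print(h_inv)   # [1, 0.5, 0.25, 0.125, ...]
```

The recursion starts at n = 0 and decays geometrically, matching the single causal term predicted by the expansion.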
The inverse z-transform yields h^{-1}[n] = \sum_k A_k z_k^n u[n], a sum of causal geometric sequences starting at n = 0, with no terms for n < 0 (no anti-causal, future-dependent components).[13] This absence of negative-time terms confirms the causal nature, as the ROC exterior to all poles ensures right-sided sequences. A similar expansion applies in the Laplace domain for continuous time, where terms like e^{z_k t} u(t) (with \operatorname{Re}(z_k) < 0) are causal.

In contrast, non-minimum-phase systems with at least one zero outside the unit circle (discrete) or in the right-half plane (continuous) have a non-causal inverse. For example, consider the discrete-time system H(z) = 1 - 2z^{-1}, with a zero at z = 2 (|z| = 2 > 1). The inverse H^{-1}(z) = \frac{z}{z - 2} = \frac{1}{1 - 2z^{-1}} has a pole at z = 2. To achieve stability (an ROC including |z| = 1), the anti-causal ROC |z| < 2 must be chosen, yielding h^{-1}[n] = -2^n u[-n-1], an anti-causal sequence extending to n \to -\infty and requiring future inputs for realization.[10] This illustrates how zeros outside the stability region force non-causal filtering in the inverse.

Stability Implications
For linear time-invariant (LTI) systems, bounded-input bounded-output (BIBO) stability requires that every bounded input produces a bounded output. In discrete time, this is achieved when all poles of the transfer function H(z) lie inside the unit circle in the z-plane (|z| < 1), ensuring the impulse response is absolutely summable.[14] In continuous time, BIBO stability holds when all poles of H(s) are in the open left-half s-plane (real part < 0), making the impulse response absolutely integrable.[14]

The minimum-phase property extends this stability by requiring all zeros to also reside in the stable region: inside the unit circle for discrete-time systems or in the left-half plane for continuous-time systems. Given a stable system (poles in the stable region), the minimum-phase condition ensures that the inverse system H_{\text{inv}}(z) = 1/H(z) or H_{\text{inv}}(s) = 1/H(s) has poles precisely at the original system's zeros, which are thus in the stable region, guaranteeing that the inverse is also BIBO stable.[14][2] This follows directly from the pole-zero structure: the denominator of the inverse transfer function consists of the numerator factors of the original H, so the zeros become poles without moving relative to the stability boundary.[14]

The stability margin of the inverse is tied to the locations of these zeros; for instance, in discrete time, the minimum distance from any zero to the unit circle serves as a quantitative indicator, where greater separation enhances robustness against perturbations that could push poles toward instability.[2] In continuous time, analogous margins relate to the minimum distance of the zeros' real parts from the imaginary axis.[14] This property has key practical implications, enabling stable equalization and deconvolution in signal processing, as the inverse can be implemented without amplifying instabilities.
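This inverse-filtering property can be seen numerically. A short sketch assuming NumPy, using illustrative first-order systems (H(z) = 1 - 0.5 z^{-1} as a minimum-phase distortion and H'(z) = 1 - 2 z^{-1} as its non-minimum-phase counterpart, neither drawn from a cited source):

```python
import numpy as np

def filt(b, a, x):
    """Causal Direct Form I filter:
    a[0] y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n >= k)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n >= k)
        y[n] = acc / a[0]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(200)

# Minimum-phase distortion H(z) = 1 - 0.5 z^-1: the causal inverse
# 1/(1 - 0.5 z^-1) is stable and recovers the input.
y = filt([1.0, -0.5], [1.0], x)
x_hat = filt([1.0], [1.0, -0.5], y)
print(np.max(np.abs(x_hat - x)))   # near machine precision

# Non-minimum-phase H'(z) = 1 - 2 z^-1: the causal inverse has a pole
# at z = 2, so the "recovered" signal diverges instead.
y2 = filt([1.0, -2.0], [1.0], x)
bad = filt([1.0], [1.0, -2.0], y2)
print(np.max(np.abs(bad)))         # astronomically large
```

The cascade of the minimum-phase filter and its causal inverse is the identity up to rounding, while the non-minimum-phase inverse amplifies without bound, exactly as the pole locations predict.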
For contrast, all-pass systems are stable but have zeros outside the unit circle (or in the right-half plane), since each zero is the reciprocal conjugate of a pole; their causal inverses are therefore unstable, precluding such operations.[14][2]

Frequency Domain Analysis
Discrete-Time Response
In discrete-time systems, the frequency response of a minimum-phase filter is characterized by its discrete-time Fourier transform (DTFT), H(e^{j\omega}), obtained by evaluating the z-transform H(z) on the unit circle |z| = 1. For such filters, all zeros of H(z) lie inside the unit circle in the z-plane, ensuring causality, stability, and the minimal phase shift for a given magnitude response |H(e^{j\omega})|. This placement of zeros minimizes the phase distortion compared to non-minimum-phase counterparts with equivalent magnitude characteristics.[10] The phase response \phi(\omega) of a minimum-phase filter is uniquely determined by its magnitude response through the Hilbert transform relation in the discrete domain:
\phi(\omega) = -\mathcal{H} \left[ \ln |H(e^{j\omega})| \right],
where \mathcal{H} denotes the discrete Hilbert transform. This relation arises from the analytic properties of the log-frequency response on the unit circle and guarantees that the phase is the negative Hilbert transform of the log-magnitude, enabling direct computation of the phase from magnitude specifications in filter design.[15] A representative example is a second-order infinite impulse response (IIR) Butterworth lowpass filter designed via the bilinear transform with a normalized cutoff frequency of \omega_c = \pi/2. Its transfer function is
H(z) = \frac{1}{2 + \sqrt{2}} \cdot \frac{(1 + z^{-1})^2}{1 + \frac{2 - \sqrt{2}}{2 + \sqrt{2}} z^{-2}},
featuring a double zero at z = -1 (on the unit circle) and a complex-conjugate pole pair inside the unit circle. Strictly speaking, the zeros on the unit circle place this filter at the boundary of the minimum-phase class: its inverse is only marginally stable, though the filter still exhibits the least phase lag attainable for its magnitude response. The phase response at the cutoff frequency is \phi(\pi/2) = -90^\circ, which can be computed by evaluating H(e^{j\omega}) at \omega = \pi/2 and taking the argument, illustrating the smooth phase transition typical of such designs.[16]

In digital filter design for sampling applications, the minimum-phase property facilitates stable reconstruction by ensuring the filter's inverse is causal and stable, allowing effective phase compensation in decimation or interpolation stages to suppress aliasing artifacts without introducing instability or excessive delay. This is particularly beneficial in multirate systems, where precise timing alignment aids in faithful signal recovery from oversampled data.[10]
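The Hilbert-transform relation between log-magnitude and phase can be exercised numerically via the real cepstrum, a standard cepstral "folding" construction. A sketch assuming NumPy, applied to the first-order minimum-phase filter H(z) = 1 + 0.5 z^{-1} from the earlier example:

```python
import numpy as np

# Minimum-phase FIR filter from the earlier example: H(z) = 1 + 0.5 z^-1.
h = np.array([1.0, 0.5])
N = 1024
H = np.fft.fft(h, N)

# Real cepstrum of the log-magnitude (an even sequence for real h).
c = np.fft.ifft(np.log(np.abs(H))).real

# "Folding" the cepstrum doubles the positive-quefrency part; the FFT of
# the folded sequence is the log spectrum of the minimum-phase system,
# so its imaginary part is the phase predicted by the Hilbert-transform
# relation phi = -H[ln|H|].
fold = np.zeros(N)
fold[0] = c[0]
fold[1:N // 2] = 2 * c[1:N // 2]
fold[N // 2] = c[N // 2]
phase_min = np.fft.fft(fold).imag

# Since h is already minimum phase, the reconstructed phase matches the
# directly computed phase of H.
phase_direct = np.unwrap(np.angle(H))
print(np.max(np.abs(phase_min - phase_direct)))   # near zero
```

Had h not been minimum phase, the reconstructed phase would instead be that of the minimum-phase system sharing |H(e^{j\omega})|, which is how magnitude-only specifications are turned into realizable filters.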