Finite impulse response
A finite impulse response (FIR) filter is a type of digital filter characterized by an impulse response of finite duration, meaning the filter's output in response to an impulse input returns to zero after a limited number of samples.[1] This property arises from the filter's structure, which computes the output as a finite convolution sum y[n] = \sum_{k=0}^{M} h[k] \, x[n-k], where h[k] are the fixed impulse response coefficients of length M+1, and no feedback is involved.[2] FIR filters are inherently stable due to the absence of poles in their transfer function H(z) = \sum_{k=0}^{M} h[k] z^{-k}, ensuring that a bounded input always produces a bounded output without risk of divergence.[3] A key advantage is their ability to achieve exactly linear phase when the coefficients are symmetric or antisymmetric, preserving the shape of the input signal with only a constant time delay, which is crucial for applications requiring minimal distortion.[2] They are also straightforward to implement on digital signal processors using direct-form structures such as tapped delay lines, and they exhibit low sensitivity to coefficient quantization errors.[4]

In practice, FIR filters are widely used in digital signal processing for tasks such as low-pass, high-pass, and band-pass filtering to remove noise or extract frequency components from signals in audio processing, image enhancement, communications, and biomedical engineering.[5] Design methods, including windowing, frequency sampling, and optimal approaches such as Parks–McClellan, allow precise control over frequency response characteristics, though longer filter lengths increase computational demands.[3] Compared to infinite impulse response (IIR) filters, FIR designs trade efficiency for guaranteed stability and phase linearity, making them preferable in scenarios where these properties are paramount.[6]

Fundamentals
Definition
A finite impulse response (FIR) filter is a type of digital filter characterized by its feedforward structure, where the output at any time depends solely on the current and a finite number of past input samples, without any feedback from previous outputs.[1][7] This non-recursive nature ensures that the filter's response to an impulse input settles to zero after a limited number of samples, distinguishing it from filters with infinite-duration responses.[8] The impulse response of an FIR filter, denoted h[n], is nonzero only over a finite interval, typically 0 \leq n \leq N-1 for a filter of length N, and zero elsewhere.[9] In a basic block diagram, the input signal x[n] passes through a series of delay elements forming a tapped delay line, where each tap is multiplied by the corresponding coefficient h[k] (for k = 0 to N-1), and the results are summed to produce the output y[n].[1] This structure implements a weighted sum of recent inputs, enabling precise control over the filter's behavior. FIR filters gained prominence in the 1970s alongside advances in digital signal processing, though their conceptual roots trace back to early 20th-century analog filter theory, such as the transversal filters used in early communication systems.[10][11] Unlike infinite impulse response (IIR) filters, which incorporate feedback and can exhibit infinite-duration responses, FIR filters maintain inherent stability due to their finite support.[12]

Comparison to Infinite Impulse Response Filters
Finite impulse response (FIR) filters differ fundamentally from infinite impulse response (IIR) filters in their structure and behavior. IIR filters incorporate feedback, where the output depends recursively on previous outputs, resulting in an impulse response of theoretically infinite duration due to poles in the z-plane away from the origin.[13] This feedback enables IIR filters to achieve sharp frequency responses at lower filter orders than FIR filters, making them computationally efficient for applications requiring high selectivity, such as speech processing, where linear phase is not essential.[14] However, the feedback introduces a risk of instability if any pole lies outside the unit circle in the z-plane, and IIR filters typically exhibit nonlinear phase responses, which can distort signal timing.[15] In contrast, FIR filters are non-recursive, relying solely on current and past inputs without feedback, so all poles lie at the origin of the z-plane, providing inherent stability regardless of the coefficient values.[16] A key advantage of FIR filters is their ability to achieve exact linear phase by designing symmetric or antisymmetric impulse response coefficients, preserving the waveform shape in applications such as audio equalization and image processing.[2] While FIR filters often require higher orders, and thus more computational resources, for comparable sharpness, their stability and phase linearity make them preferable in scenarios demanding precise signal integrity.[6]

| Criterion | FIR Filters | IIR Filters |
|---|---|---|
| Stability | Inherently stable (poles only at z=0).[16] | Potentially unstable if poles are outside the unit circle.[15] |
| Phase Response | Exact linear phase possible with symmetric coefficients.[2] | Generally nonlinear phase, leading to potential distortion.[14] |
| Computational Cost | Higher due to larger number of coefficients and multiplications per output.[16] | Lower for sharp responses, as fewer coefficients suffice.[17] |
| Use Cases | Preferred for linear phase needs, e.g., audio and image processing.[3] | Suited for efficiency in real-time systems like speech and control.[14] |
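The stability contrast in the table can be demonstrated directly. A minimal pure-Python sketch (the coefficients below are illustrative choices, not drawn from any particular design) applies the same bounded input to an FIR filter and to an IIR filter whose pole lies outside the unit circle:

```python
# FIR vs IIR stability: feed the same bounded input to both filters.
# Coefficients below are illustrative only.

def fir(x, h):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k], zero initial conditions."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def iir_first_order(x, a):
    """First-order recursion y[n] = x[n] + a * y[n-1]; a single pole at z = a."""
    y, prev = [], 0.0
    for xn in x:
        prev = xn + a * prev
        y.append(prev)
    return y

x = [1.0] * 50                      # bounded step input, |x[n]| <= 1
h = [0.4, 0.3, 0.2, 0.1]            # any finite set of FIR taps

y_fir = fir(x, h)
y_iir = iir_first_order(x, 1.1)     # pole outside the unit circle

print(max(abs(v) for v in y_fir))   # bounded: never exceeds sum(|h[k]|) = 1.0
print(max(abs(v) for v in y_iir))   # grows without bound as n increases
```

However the FIR taps are chosen, the output magnitude stays below the input bound times \sum_k |h[k]|, while the recursive filter with |a| > 1 diverges.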
Mathematical Formulation
Time-Domain Representation
In the time domain, a finite impulse response (FIR) filter is characterized by its output being a finite weighted sum of the current and past input samples, with no dependence on previous outputs.[18] This non-recursive structure distinguishes FIR filters from infinite impulse response (IIR) filters and ensures a finite duration for the impulse response.[19] The fundamental mathematical model for an FIR filter is the difference equation y[n] = \sum_{k=0}^{M} h[k] \, x[n - k], where y[n] is the output signal at discrete time index n, x[n - k] is the input signal delayed by k samples, h[k] are the filter coefficients for k = 0, 1, \dots, M, M is the filter order, and the filter length is M+1.[18] This equation embodies the convolution sum, which computes each output sample as a linear combination of the most recent M+1 input samples, weighted by the fixed coefficients h[k].[19] The coefficient vector \mathbf{h} = [h[0], h[1], \dots, h[M]] defines the filter's characteristics, such as its frequency selectivity or smoothing behavior, by determining how much each past input contributes to the current output.[18] For instance, equal coefficients implement a simple moving average, while varying coefficients can approximate ideal low-pass or high-pass responses.[19] A practical example is a length-3 FIR filter (order 2), with difference equation y[n] = h[0] x[n] + h[1] x[n-1] + h[2] x[n-2]. This filter processes the current input x[n] along with the two preceding inputs, producing an output that blends them according to the weights h[0], h[1], and h[2].[19]

Z-Transform and Transfer Function
The z-transform provides a powerful frequency-domain representation for analyzing finite impulse response (FIR) filters, converting the time-domain impulse response into a rational function of the complex variable z. For an FIR filter of order M (length M+1), the impulse response h[n] is nonzero only for 0 \leq n \leq M, and its z-transform is H(z) = \sum_{k=0}^{M} h[k] z^{-k}, a polynomial of degree M in z^{-1}.[20] This formulation follows directly from the definition of the z-transform applied to the finite-duration sequence h[n].[21] As a transfer function, H(z) characterizes the FIR filter as an all-zero structure, with all poles located at the origin z=0 (of multiplicity M) and no poles elsewhere in the finite z-plane.[20] The roots of the numerator polynomial, known as the zeros of H(z), determine the filter's frequency-shaping properties, allowing designers to place zeros strategically to attenuate specific frequencies or achieve desired passband characteristics through root-finding techniques.[22] This all-zero nature contrasts with infinite impulse response (IIR) filters and ensures inherent stability, as the region of convergence includes the entire z-plane except possibly z=0.[23] The relationship between the z-domain and the time domain is bidirectional: applying the inverse z-transform to H(z) recovers the original impulse response coefficients h[n], which are simply the coefficients of the polynomial when expressed in powers of z^{-1}.[20] For visualization, the pole-zero plot of a simple FIR filter, such as a two-tap averager with H(z) = 1 + z^{-1}, reveals a single zero at z = -1 and a pole at z = 0 (multiplicity 1), illustrating how the zeros alone dictate the filter's response while the origin pole reflects the finite duration.[23]

Properties
Stability and Linearity
Finite impulse response (FIR) filters possess inherent bounded-input bounded-output (BIBO) stability, meaning that any bounded input sequence produces a bounded output sequence.[24] This property arises because the FIR filter's impulse response h[n] is of finite duration, typically nonzero only for 0 \leq n \leq M-1 for some finite M, so that the output is a finite weighted sum of past and present input samples.[25] To demonstrate BIBO stability formally, consider the output of an FIR filter given by the convolution sum y[n] = \sum_{k=0}^{M-1} h[k] \, x[n-k]. Assume the input is bounded, |x[n]| \leq B < \infty for all n, where B is a positive constant. Then the magnitude of the output satisfies |y[n]| \leq \sum_{k=0}^{M-1} |h[k]| \, |x[n-k]| \leq B \sum_{k=0}^{M-1} |h[k]| = B H, where H = \sum_{k=0}^{M-1} |h[k]| < \infty since there are only finitely many terms and each |h[k]| is finite. Thus |y[n]| \leq B H < \infty for all n, confirming BIBO stability regardless of the specific coefficients, as long as they are finite.[26] This contrasts with infinite impulse response (IIR) filters, which require additional conditions, such as all poles lying inside the unit circle, to ensure stability.[25]

FIR filters are linear time-invariant (LTI) systems, inheriting the linearity property from the convolution operation that defines their input-output relationship. Linearity implies the superposition principle: if inputs x_1[n] and x_2[n] produce outputs y_1[n] and y_2[n], respectively, then the input a x_1[n] + b x_2[n] (for scalars a and b) yields output a y_1[n] + b y_2[n]. This holds because convolution is a linear operation: scaling and adding the inputs before convolving with the fixed impulse response h[n] is equivalent to convolving each input separately and then scaling and adding the results.[27] The time-invariance of FIR filters ensures that a time-shifted input produces a correspondingly time-shifted output: if x[n] yields y[n], then the shifted input x[n - n_0] produces y[n - n_0]. This property stems from the convolution sum shifting uniformly with the input delay, as the impulse response h[k] remains fixed and does not depend on absolute time.[28] Together, these LTI characteristics make FIR filters reliable for digital signal processing applications where predictable, distortion-free responses are essential.[27]

Phase Response and Symmetry
Finite impulse response (FIR) filters can achieve linear phase by imposing symmetry or antisymmetry on their impulse response coefficients, which ensures that all frequency components of the input signal experience the same time delay.[29] Specifically, the linear phase condition is met when the coefficients satisfy h[n] = h[M-1-n] in the symmetric case or h[n] = -h[M-1-n] in the antisymmetric case, where M is the filter length and n = 0, 1, \dots, M-1.[30] This symmetry constrains the filter's frequency response, yielding a phase that is a linear function of frequency. The four types of linear-phase FIR filters arise from the combinations of symmetry and filter-length parity:

| Type | Symmetry | Length Parity | Key Characteristics |
|---|---|---|---|
| I | Symmetric | Odd (M odd) | Suitable for lowpass, highpass, bandpass; no inherent zeros at DC or Nyquist. |
| II | Symmetric | Even (M even) | Suitable for lowpass, bandpass; zero at Nyquist frequency. |
| III | Antisymmetric | Odd (M odd) | Suitable for differentiators, Hilbert transformers; zeros at DC and Nyquist. |
| IV | Antisymmetric | Even (M even) | Suitable for differentiators, Hilbert transformers; zero at DC. |
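The linear-phase property can be checked numerically: for a symmetric (Type I) impulse response of odd length M, multiplying H(e^{j\omega}) by e^{j\omega m} with m = (M-1)/2 removes the linear phase term and leaves a purely real amplitude. A short sketch using only the standard library (the taps below are arbitrary illustrative values):

```python
import cmath

def dtft(h, w):
    """Evaluate H(e^{jw}) = sum_k h[k] * e^{-jwk} at a single frequency w."""
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

h = [1.0, 2.0, 3.0, 2.0, 1.0]   # symmetric taps: Type I, odd length M = 5
m = (len(h) - 1) / 2            # center of symmetry, m = 2

for w in [0.1, 0.5, 1.0, 2.0, 3.0]:
    A = dtft(h, w) * cmath.exp(1j * w * m)   # strip the linear phase e^{-jwm}
    print(abs(A.imag) < 1e-9)                # True: amplitude A(w) is real
```

The same check with an antisymmetric h would leave a purely imaginary A, reflecting the extra 90-degree phase of Types III and IV.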
Frequency Response
Derivation from Impulse Response
The frequency response of a finite impulse response (FIR) filter is derived directly from the discrete-time Fourier transform (DTFT) of its impulse response h[n], which is nonzero only over a finite duration, typically 0 \leq n \leq M-1 for a causal filter of length M. This transform describes how the filter modifies the frequency content of an input signal. The DTFT of the impulse response is H(e^{j\omega}) = \sum_{k=0}^{M-1} h[k] e^{-j \omega k}, where \omega is the normalized angular frequency. This summation yields a complex-valued function H(e^{j\omega}), whose magnitude |H(e^{j\omega})| and phase \arg\{H(e^{j\omega})\} characterize the filter's gain and delay at each frequency. The expression follows from the general DTFT definition applied to the finite-length sequence h[n].

The frequency response can also be obtained from the z-transform of the impulse response, H(z) = \sum_{k=0}^{M-1} h[k] z^{-k}, by evaluating it on the unit circle in the z-plane, where z = e^{j\omega}. Substituting gives H(e^{j\omega}) = H(z)\big|_{z = e^{j\omega}}, linking the time-domain coefficients to the frequency-domain behavior through the polynomial nature of the FIR transfer function. This evaluation shows that the frequency response is periodic with period 2\pi and captures the filter's steady-state response to complex exponentials.

For symmetric FIR filters, which exhibit linear phase and are common in applications requiring minimal distortion, the impulse response satisfies h[k] = h[M-1-k] for k = 0, 1, \dots, M-1. To derive the form with a real-valued amplitude, let m = (M-1)/2 (assuming M odd, Type I symmetry) and shift the origin to the center of the response: H(e^{j\omega}) = e^{-j \omega m} \sum_{k=-m}^{m} h[m + k] e^{-j \omega k}. Due to symmetry, h[m + k] = h[m - k], so the sum becomes \sum_{k=-m}^{m} h[m + k] e^{-j \omega k} = h[m] + \sum_{k=1}^{m} h[m + k] \left( e^{-j \omega k} + e^{j \omega k} \right) = h[m] + 2 \sum_{k=1}^{m} h[m + k] \cos(\omega k). The resulting sum is real-valued, as it involves only cosine terms. Thus H(e^{j\omega}) = e^{-j \omega m} A(\omega), where A(\omega) is real, implying |H(e^{j\omega})| = |A(\omega)| and a linear phase term -\omega m. This structure ensures constant group delay and simplifies design for distortionless filtering.

Geometrically, H(e^{j\omega}) can be interpreted as the vector sum in the complex plane of M phasors, each with magnitude h[k] and phase angle -\omega k. For a fixed \omega, the terms h[k] e^{-j \omega k} are vectors rotating progressively by increments of -\omega radians, weighted by the coefficients h[k]. The resultant vector's length and angle give the magnitude and phase of the frequency response, providing intuition for how tap weights and frequency produce constructive or destructive interference among the phasors.

Magnitude and Phase Characteristics
The magnitude response of an FIR filter, denoted |H(e^{j\omega})|, typically approximates the desired frequency-selective behavior but exhibits characteristic ripples due to the finite truncation of the ideal infinite impulse response. Near discontinuities in the ideal magnitude specification, such as the edges of passbands or stopbands in lowpass filters, the Gibbs phenomenon manifests as oscillatory overshoots and undershoots.[29] These ripples arise from the Fourier series approximation inherent in FIR design methods like windowing, where abrupt truncation in the time domain introduces sidelobes in the frequency domain.

A key advantage of many FIR filters is their ability to achieve a linear phase response through symmetric or antisymmetric impulse response coefficients, ensuring a constant group delay across all frequencies. The group delay, defined as \tau_g(\omega) = -\frac{d}{d\omega} \arg[H(e^{j\omega})], remains fixed at \frac{N-1}{2} samples for Type I and II linear-phase FIR filters of length N, where the phase is \theta(\omega) = -\frac{N-1}{2} \omega. This uniformity prevents phase distortion, preserving the waveform shape of signals passing through the filter, which is particularly beneficial in applications requiring temporal alignment, such as audio processing.[2]

Designing FIR filters involves inherent trade-offs between response accuracy and computational cost. Increasing the filter order N narrows the transition band and reduces the amplitude and extent of Gibbs ripples, but at the cost of higher arithmetic complexity, roughly proportional to N multiplications per output sample.[29] In practice, the ideal lowpass magnitude response, which drops sharply from unity to zero at the cutoff frequency, is approximated by FIR filters with a gradual roll-off in the transition band and visible Gibbs oscillations near the cutoff, in contrast to the brick-wall ideal.[2]

Design Methods
Windowing Method
The windowing method for designing finite impulse response (FIR) filters derives the filter coefficients by truncating an ideal infinite-duration impulse response and applying a finite-length window function to mitigate the effects of abrupt truncation. This approach approximates the frequency response of an ideal filter, such as a lowpass filter, by starting from its known time-domain form and ensuring linear phase through a symmetric impulse response.[31] For an ideal lowpass filter with cutoff frequency \omega_c (in radians per sample) and group delay \alpha = (M-1)/2, which centers the impulse response for a filter of length M, the desired infinite impulse response is h_d[n] = \frac{\sin(\omega_c (n - \alpha))}{\pi (n - \alpha)}, \quad -\infty < n < \infty. This sinc-like function arises from the inverse discrete-time Fourier transform of the ideal brick-wall frequency response. To obtain a finite-length FIR filter, h_d[n] is truncated to 0 \leq n \leq M-1 and multiplied by a window function w[n] of the same length, yielding the filter coefficients h[n] = h_d[n] \, w[n], \quad 0 \leq n \leq M-1. The resulting frequency response is the periodic convolution of the ideal response with the Fourier transform of the window, which introduces ripples in the passband and stopband while smoothing the transition band. Normalization is typically applied so that the passband gain is unity, for example by scaling the coefficients of a lowpass design so that \sum_{n=0}^{M-1} h[n] = 1.[31]

Common window functions trade off sidelobe levels (affecting passband and stopband ripple) against mainlobe width (affecting transition bandwidth). The rectangular window, defined as w[n] = 1 for 0 \leq n \leq M-1, simply truncates the sinc function, producing the narrowest transition band but the highest sidelobes (approximately -13 dB), leading to significant ripples.
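The truncation step can be sketched in a few lines of Python; the length and cutoff below are arbitrary illustrative choices, and plain truncation corresponds to the rectangular window:

```python
import math

def ideal_lowpass(M, wc):
    """Truncated ideal lowpass: h_d[n] = sin(wc*(n - a)) / (pi*(n - a)),
    centered at a = (M-1)/2, kept for 0 <= n <= M-1 (rectangular window)."""
    a = (M - 1) / 2
    h = []
    for n in range(M):
        if n == a:
            h.append(wc / math.pi)     # limit of the sinc at its center
        else:
            h.append(math.sin(wc * (n - a)) / (math.pi * (n - a)))
    return h

h = ideal_lowpass(21, 0.4 * math.pi)   # M = 21 taps, cutoff 0.4*pi (illustrative)
print(all(abs(h[n] - h[20 - n]) < 1e-12 for n in range(21)))  # symmetric taps
print(sum(h))                          # DC gain close to, but not exactly, 1
```

The coefficients come out symmetric about the center (hence linear phase), and the DC gain is only approximately unity, which is why a final normalization step is usually applied.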
The Hamming window, w[n] = 0.54 - 0.46 \cos(2\pi n / (M-1)), reduces sidelobes to about -43 dB by tapering the ends, at the cost of a wider mainlobe and thus a broader transition band than the rectangular window. The Blackman window, w[n] = 0.42 - 0.5 \cos(2\pi n / (M-1)) + 0.08 \cos(4\pi n / (M-1)), further suppresses sidelobes to around -58 dB, providing even lower ripple but requiring a longer filter for the same transition width. These windows are particularly effective in audio and spectral applications where sidelobe attenuation is critical.[31]

The design procedure consists of four main steps: (1) specify the ideal frequency response and compute the corresponding infinite h_d[n]; (2) select the filter length M based on the desired transition bandwidth (approximately 4\pi / M for the rectangular window, scaling with window type); (3) choose and apply a window w[n] to the truncated h_d[n]; and (4) normalize the coefficients for unity passband gain. The method is computationally simple and guarantees stability and linear phase, but it may require iterative adjustment of M and the window type to meet ripple and transition specifications. The Hamming window, for instance, offers a practical balance, reducing sidelobe impact substantially relative to the rectangular window while widening the transition band only moderately compared to more aggressive tapers such as Blackman.[31]

Least Squares Method
The least squares method for designing finite impulse response (FIR) filters minimizes the integrated squared error between the desired frequency response H_d(\omega) and the approximated frequency response H(\omega) over specified frequency bands, formulated as the optimization problem \min \int_{-\pi}^{\pi} |H_d(\omega) - H(\omega)|^2 \, d\omega, where the integral is typically weighted to emphasize passbands, stopbands, or transition regions. This mean square error (MSE) criterion provides a physically motivated measure of approximation quality, as the squared-error integral corresponds to the energy of the deviation in the frequency domain. Unlike the windowing method, which offers only indirect control over ripple due to truncation artifacts, least squares optimization directly targets frequency-domain accuracy.[32][33] In practice, the continuous integral is discretized by evaluating the frequency response at a dense set of points \omega_k, leading to a linear system \mathbf{A} \mathbf{h} \approx \mathbf{d}, where \mathbf{d} contains the desired response samples, \mathbf{h} is the vector of FIR coefficients (exploiting symmetry for linear-phase designs), and \mathbf{A} is the design matrix with entries A_{k,m} = \cos(m \omega_k) for the real-valued amplitude response. The optimal coefficients solve this overdetermined system in the least squares sense via the Moore–Penrose pseudoinverse, \hat{\mathbf{h}} = \mathbf{A}^\dagger \mathbf{d} = (\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T \mathbf{d}, which can be computed efficiently via Cholesky decomposition or QR factorization for large filter orders.
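The discretized system can be solved with an off-the-shelf least squares routine. The sketch below assumes NumPy, and the filter length and band edges are illustrative choices: it fits the amplitude A(\omega) = \sum_m a_m \cos(m \omega) of a linear-phase (Type I) lowpass to samples of a desired response, omitting the transition band from the grid:

```python
import numpy as np

M = 21                                 # filter length (odd, Type I)
alpha = (M - 1) // 2                   # center index, alpha = 10
wc, ws = 0.4 * np.pi, 0.6 * np.pi      # passband / stopband edges (illustrative)

# Dense frequency grid, with the transition band (wc, ws) left unconstrained.
w = np.linspace(0.0, np.pi, 400)
keep = (w <= wc) | (w >= ws)
wk = w[keep]
d = np.where(wk <= wc, 1.0, 0.0)       # desired amplitude samples

# Design matrix A[k, m] = cos(m * w_k) for A(w) = sum_m a_m cos(m*w).
A = np.cos(np.outer(wk, np.arange(alpha + 1)))
a, *_ = np.linalg.lstsq(A, d, rcond=None)

# Recover the symmetric taps: h[alpha] = a[0], h[alpha +/- m] = a[m] / 2.
h = np.zeros(M)
h[alpha] = a[0]
for m in range(1, alpha + 1):
    h[alpha + m] = h[alpha - m] = a[m] / 2

err = np.abs(A @ a - d)
print(err.max())                       # small residual: amplitude tracks d
```

The recovered h is symmetric by construction, so the design is linear phase, and weighting the rows of A and d would implement the band-dependent weight W(\omega).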
This formulation supports arbitrary magnitude specifications by incorporating band-specific weights into the error metric, such as W(\omega) in the objective \int W(\omega) |H_d(\omega) - H(\omega)|^2 \, d\omega.[32] For designs requiring equiripple error characteristics rather than minimal MSE, the Parks–McClellan algorithm, based on the Remez exchange principle, iteratively refines extremal frequencies to balance weighted errors and targets minimax optimality; direct least squares, in contrast, yields non-equiripple responses that are optimal under the L2 norm. The method's advantages lie in its MSE optimality, which often achieves lower average error and better energy concentration than equiripple designs when overall fidelity matters more than peak ripple, and in its versatility for complex or multiband specifications without assuming a uniform error distribution.[33]

Frequency Sampling Method
The frequency sampling method for designing finite impulse response (FIR) filters specifies the desired frequency response at a set of equally spaced discrete frequencies and computes the corresponding impulse response coefficients via the inverse discrete Fourier transform (IDFT). This approach provides a straightforward way to approximate a target frequency response H_d(\omega), such as those characterized by the magnitude and phase properties discussed earlier, by evaluating it at M points \omega_k = 2\pi k / M for k = 0, 1, \dots, M-1, where M is typically equal to the filter length N. The sampled values H(k) are set to match H_d(\omega_k) in passbands and stopbands, while transition-band samples may be interpolated linearly or optimized separately.[34] The impulse response coefficients are then obtained as h[n] = \frac{1}{M} \sum_{k=0}^{M-1} H(k) e^{j 2\pi k n / M}, \quad n = 0, 1, \dots, M-1. This IDFT computation directly yields a finite-length sequence, and the result is a real, linear-phase FIR filter when the samples H(k) are chosen with conjugate symmetry and an appropriate linear phase term. For implementation efficiency, the IDFT can be performed with the fast Fourier transform (FFT) algorithm, making the method computationally attractive.[34][35] Two primary variants achieve linear phase while controlling the approximation quality. In the Type 1 variant, a full DFT of length M = N is used, sampling directly at integer multiples of the fundamental frequency, which supports symmetric impulse responses for even or odd N and ensures linear phase. The Type 2 variant inserts zeros between the specified H(k) samples and pads to a longer DFT length L > M, effectively interpolating the frequency response with finer resolution and reducing sidelobe effects in the time domain, though at the cost of increased filter order.
This zero-insertion technique improves the approximation for filters requiring sharper responses without resampling the original points.[35][36] The method's advantages include its simplicity and direct compatibility with FFT-based tools for both design and evaluation, enabling rapid prototyping of filters with arbitrary frequency specifications. However, it handles narrow transition bands poorly: the implicit sinc interpolation between samples can introduce significant passband ripple or stopband attenuation errors due to sparse sampling in those regions.[34][36]

Implementation and Examples
Computational Structures
Finite impulse response (FIR) filters are commonly implemented using the transversal structure, also referred to as the direct form, which consists of a tapped delay line storing successive input samples and a set of multipliers applying the filter coefficients h[k] to each delayed sample before summing the products to produce the output.[37] This structure directly realizes the convolution sum y[n] = \sum_{k=0}^{M-1} h[k] x[n-k], where M is the filter length, using M-1 delay elements, M multipliers, and M-1 adders.[37] For FIR filters, the direct form I and direct form II realizations are equivalent, as there are no feedback paths; the conceptual separation of numerator and denominator polynomials applies primarily to infinite impulse response (IIR) designs.[37]

The transposed direct form offers an alternative realization obtained by reversing the signal flow and interchanging input and output, placing the adders between the delay elements in a pipelined chain, which is particularly advantageous for hardware implementations.[38] This structure shortens the critical path of the summation, enabling better pipelining and higher throughput in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) by distributing computations across pipeline stages.[38] Optimization techniques in the transposed form can further minimize adder complexity while preserving filter performance, making it suitable for fixed-coefficient filters in resource-constrained environments.[38]

Linear-phase FIR filters, which possess symmetric or antisymmetric impulse responses (h[n] = h[M-1-n] or h[n] = -h[M-1-n]), allow this property to be exploited to roughly halve the number of multiplications.[39] The symmetry enables pairing of coefficients in the convolution sum, so that terms like h[k] x[n-k] + h[M-1-k] x[n-(M-1-k)] are computed as h[k] (x[n-k] + x[n-(M-1-k)]), requiring only one multiplication per pair instead of two, along with additional adders for pre-summing the inputs.[39] For example, a length-7 Type 1 linear-phase filter reduces from 7 to 4 multipliers by decomposing the transfer function into symmetric components.[39]

In terms of computational complexity, the direct and transposed forms require O(M) multiplications and additions per output sample, scaling linearly with the filter length M.[37] For very long filters, fast Fourier transform (FFT)-based implementations using block convolution techniques, such as overlap-add or overlap-save, reduce the cost to approximately O(N \log N) operations per block of N output samples, where the FFT size N is typically chosen larger than M, making them efficient for applications involving extended impulse responses.[40]

Moving Average Example
The moving average filter serves as a fundamental example of a finite impulse response (FIR) filter, where the output is the average of the most recent M input samples. This filter is defined by the difference equation y[n] = \frac{1}{M} \sum_{k=0}^{M-1} x[n-k], with uniform coefficients h[k] = \frac{1}{M} for k = 0, 1, \dots, M-1 and zero otherwise.[41][42] The impulse response of the moving average filter is a rectangular pulse of width M and height \frac{1}{M}, so the coefficients sum to unity and the DC gain is preserved. This finite-duration response directly reflects the FIR property, as the filter's memory is limited to M samples.[41] The frequency response follows from the discrete-time Fourier transform of the impulse response: H(e^{j\omega}) = \frac{1}{M} \frac{\sin(M \omega / 2)}{\sin(\omega / 2)} e^{-j \omega (M-1)/2}, which exhibits lowpass characteristics with a sinc-like magnitude envelope, unity gain at \omega = 0, and the first zero crossing at \omega = 2\pi / M. The linear phase term e^{-j \omega (M-1)/2} introduces a constant group delay of (M-1)/2 samples, preserving waveform shape without distortion.[41][42] In applications such as noise reduction for time series data, the moving average filter attenuates high-frequency noise components while smoothing the signal, reducing the variance of uncorrelated white noise by a factor of 1/M (or the standard deviation by 1/\sqrt{M}). Consider a simple example with M=3 and an input sequence representing a noisy step: x = \{0, 0, 1+\epsilon_1, 1+\epsilon_2, 1+\epsilon_3, 1+\epsilon_4, \dots\}, where the \epsilon_i are small noise terms (here \epsilon_1 = 0.2, \epsilon_2 = -0.1, \epsilon_3 = 0.3, \epsilon_4 = -0.2, so x[2] = 1.2, x[3] = 0.9, x[4] = 1.3, x[5] = 0.8).
The output is computed as follows, assuming zero initial conditions (x[n] = 0 for n < 0):
- For n=0: y[0] = \frac{1}{3} (x[0] + x[-1] + x[-2]) = 0.
- For n=1: y[1] = \frac{1}{3} (x[1] + x[0] + x[-1]) = 0.
- For n=2: y[2] = \frac{1}{3} (x[2] + x[1] + x[0]) = \frac{1.2 + 0 + 0}{3} = 0.4.
- For n=3: y[3] = \frac{1}{3} (x[3] + x[2] + x[1]) = \frac{0.9 + 1.2 + 0}{3} = 0.7.
- For n=4: y[4] = \frac{1}{3} (x[4] + x[3] + x[2]) = \frac{1.3 + 0.9 + 1.2}{3} \approx 1.13.
Subsequent outputs converge toward 1, demonstrating the filter's smoothing effect on the noise while tracking the underlying step transition.[41]
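The worked example above can be reproduced directly with a minimal pure-Python sketch:

```python
def moving_average(x, M):
    """Length-M moving average y[n] = (1/M) * sum_{k=0}^{M-1} x[n-k],
    with zero initial conditions (x[n] = 0 for n < 0)."""
    return [sum(x[n - k] for k in range(M) if n - k >= 0) / M
            for n in range(len(x))]

# Noisy step from the example: x[2..5] = 1 + eps_i
x = [0.0, 0.0, 1.2, 0.9, 1.3, 0.8]
y = moving_average(x, 3)
print([round(v, 2) for v in y])   # [0.0, 0.0, 0.4, 0.7, 1.13, 1.0]
```

The printed values match the hand computation, and extending x with further samples near 1 shows the output settling at the step level.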