Anti-aliasing filter
An anti-aliasing filter is a low-pass analog filter designed to attenuate signal frequencies above the Nyquist frequency—half the sampling rate—prior to analog-to-digital conversion, thereby preventing aliasing, a distortion in which high-frequency components masquerade as lower frequencies in the sampled data.[1][2] This filter ensures compliance with the Nyquist-Shannon sampling theorem, which states that a signal must be sampled at more than twice the rate of its highest frequency component to be reconstructed without loss.[1][3] By restricting the signal's bandwidth, the filter eliminates unwanted high-frequency noise, such as powerline interference or radio signals, that could otherwise corrupt measurements.[2] Aliasing arises when the sampling rate is insufficient relative to the input signal's frequency content, causing spectral folding in which frequencies above the Nyquist limit appear as false lower-frequency signals, potentially leading to erroneous data interpretation in systems such as audio recording or sensor acquisition.[1][3]

In practice, an ideal anti-aliasing filter would sharply cut off all frequencies beyond the Nyquist point while passing lower ones unattenuated, but real-world implementations feature a transition band of gradual roll-off to balance performance and feasibility.[2] The cutoff frequency is typically set at or near the Nyquist frequency, with the sampling rate chosen to exceed twice the highest frequency in the transition band for effective attenuation.[2][3] Common types of anti-aliasing filters include Butterworth filters, which provide the flattest passband response for minimal distortion; Bessel filters, valued for their linear phase characteristics that preserve waveform shape; Chebyshev filters, offering steeper roll-off at the cost of passband ripple; and elliptic filters, which achieve the sharpest transitions but introduce both passband and stopband ripple.[1] Simpler designs, such as single-pole RC filters or higher-order active filters using operational amplifiers, are often employed in cost-sensitive applications.[1][3]

These filters find widespread use in data acquisition systems, audio processing (e.g., at the 44.1 kHz sampling rate of compact discs, covering the roughly 20 kHz range of human hearing), telecommunications, and imaging devices, where accurate signal fidelity is paramount.[2][1] In modern integrated circuits, anti-aliasing functionality is sometimes embedded within analog-to-digital converters to streamline design and reduce external components.[4]
Fundamentals
Aliasing Phenomenon
Aliasing refers to the distortion in sampled signals where frequencies above the Nyquist frequency (half the sampling rate) fold back into the lower-frequency range, appearing as false lower-frequency components or artifacts in the reconstructed signal.[5] This frequency folding arises because sampling replicates the original signal's spectrum at multiples of the sampling frequency f_s, causing overlap and misrepresentation of high-frequency content within the baseband from 0 to f_s/2.[6] The phenomenon was formalized in the context of signal processing by Harry Nyquist in 1928, who analyzed its implications for telegraph transmission and established the critical role of sampling rate in avoiding such distortions.[7] The Nyquist-Shannon sampling theorem later provided the theoretical limit for preventing aliasing by requiring a sampling rate at least twice the signal's highest frequency.[5]

A classic example involves sampling a sine wave of frequency f at a rate f_s < 2f. The resulting samples can be interpreted as a lower-frequency sine wave at the aliased frequency
f_\text{alias} = \left| f - \operatorname{round}\!\left( \frac{f}{f_s} \right) f_s \right|,
where \operatorname{round}(f/f_s) is the integer nearest to f/f_s, which maps f_\text{alias} into [0, f_s/2].[8] In the frequency domain, this manifests as spectral folding around f_s/2, where the spectrum beyond this point mirrors back into the principal range, distorting the original signal's characteristics.[6]

In visual signals, such as digital images, aliasing produces moiré patterns—interference fringes resembling wavy lines or grids—when fine spatial details exceed the sampling resolution of pixels or sensors.[9] In auditory signals, it generates unwanted tones, often perceived as harsh whistles or inharmonic artifacts, as high-frequency audio components masquerade as audible low frequencies during playback.[10]
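The folding relation is easy to verify numerically; a minimal sketch (Python with NumPy; the helper name alias_frequency is illustrative):

```python
import numpy as np

def alias_frequency(f, fs):
    """Apparent frequency of a tone at f when sampled at fs,
    per f_alias = |f - round(f/fs) * fs|, which folds into [0, fs/2]."""
    return abs(f - round(f / fs) * fs)

fs = 1000.0                      # sampling rate in Hz
for f in (100.0, 600.0, 900.0, 1100.0):
    print(f"{f:6.0f} Hz tone sampled at {fs:.0f} Hz appears at "
          f"{alias_frequency(f, fs):.0f} Hz")

# Numerical check: a 900 Hz cosine sampled at 1 kHz yields exactly the
# same samples as a 100 Hz cosine, confirming the fold.
n = np.arange(8)
print(np.allclose(np.cos(2 * np.pi * 900 * n / fs),
                  np.cos(2 * np.pi * 100 * n / fs)))
```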
Nyquist-Shannon Sampling Theorem
The Nyquist-Shannon sampling theorem provides the fundamental limit for converting a continuous-time signal into a discrete-time representation without loss of information. A continuous-time signal is defined as bandlimited with bandwidth B if its Fourier transform X(f) satisfies X(f) = 0 for all |f| > B.[11] The theorem states that such a bandlimited signal can be perfectly reconstructed from its samples taken at a uniform sampling rate f_s > 2B, where 2B is known as the Nyquist rate.[11] The theorem originated from Harry Nyquist's 1928 analysis of telegraph transmission, where he determined that a channel of bandwidth W Hz can support a maximum signaling speed of 2W symbols per second, implying a minimum of 2W samples per second to uniquely represent the signal.[12] Claude Shannon formalized and proved the theorem in 1949, extending it to general communication channels and demonstrating perfect reconstruction via sinc interpolation.[11]

The reconstruction formula expresses the original signal x(t) as an infinite sum over the samples x(n/f_s):
x(t) = \sum_{n=-\infty}^{\infty} x\left(\frac{n}{f_s}\right) \operatorname{sinc}\left(f_s \left(t - \frac{n}{f_s}\right)\right),
where the normalized sinc function is defined as \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.[11] This interpolation ensures that the signal is uniquely determined by its samples when the Nyquist rate is exceeded.

In the frequency domain, uniform sampling at rate f_s causes the spectrum of the discrete-time signal to consist of periodic replications of the original continuous-time spectrum, repeated every f_s Hz.[13] If f_s > 2B, these replicas do not overlap, preserving the baseband spectrum for accurate reconstruction; otherwise, aliasing distorts the signal.[13]
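A short sketch of this interpolation under the stated assumptions (Python/NumPy; np.sinc implements the normalized sinc above, and the finite number of samples introduces a small truncation error relative to the infinite sum):

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild x(t) from uniform samples
    x(n/fs) via x(t) = sum_n x(n/fs) * sinc(fs * (t - n/fs))."""
    n = np.arange(len(samples))
    # Outer difference t - n/fs gives one row per reconstruction instant.
    return np.sinc(fs * (t[:, None] - n[None, :] / fs)) @ samples

fs = 8.0                          # sampling rate, Hz
f0 = 1.0                          # tone well below the Nyquist frequency fs/2
n = np.arange(64)
samples = np.sin(2 * np.pi * f0 * n / fs)

t = np.linspace(2.0, 6.0, 200)    # interior instants, away from edge effects
x_hat = sinc_reconstruct(samples, fs, t)
# Small residual from truncating the infinite sum to 64 samples:
print(np.max(np.abs(x_hat - np.sin(2 * np.pi * f0 * t))))
```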
Purpose of Anti-Aliasing Filters
Anti-aliasing filters are essential components in sampling systems, designed to prevent the aliasing distortion that arises when high-frequency signal components fold back into the lower frequency band during digitization. By bandlimiting the input signal to frequencies below half the sampling rate (the Nyquist frequency, f_s/2), these filters ensure that the sampled representation accurately captures the original signal without introducing spurious low-frequency artifacts.[3][14]

In practice, real-world analog signals are not inherently bandlimited and contain frequency content extending indefinitely, which violates the assumptions of the Nyquist-Shannon sampling theorem and risks aliasing upon sampling. Anti-aliasing filters address this by approximating an ideal low-pass response, attenuating components above f_s/2 to make the signal effectively bandlimited before it reaches the analog-to-digital converter (ADC). The ideal frequency response of such a filter is
H(f) = \begin{cases} 1 & |f| < f_s/2 \\ 0 & |f| \geq f_s/2, \end{cases}
a perfect brick-wall cutoff; practical implementations instead feature a transitional roll-off in gain to achieve feasible attenuation.[15][3]

These filters are typically implemented in the analog domain immediately before the sampler to suppress high frequencies at the source, though in oversampled or multi-rate systems, digital low-pass filters are applied after high-rate sampling to attenuate high frequencies before downsampling. While necessary for enabling accurate signal reconstruction, the filtering process introduces inherent trade-offs, including phase distortion from non-linear phase responses and partial attenuation of frequencies near the cutoff, which can slightly degrade the temporal and spectral fidelity of the preserved signal content. Despite these compromises, the benefits in aliasing suppression far outweigh the drawbacks for reliable digitization.[15][3]
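A minimal demonstration of this purpose, assuming SciPy is available: the same two-tone input is sampled with and without a preceding low-pass filter, and the power at the would-be alias frequency is compared (signal frequencies and the 8th-order Butterworth stand-in for an analog filter are illustrative):

```python
import numpy as np
from scipy import signal

# A "continuous" stand-in: a fine 64 kHz grid; target sampling rate 8 kHz.
fs_fine, fs = 64_000, 8_000
t = np.arange(fs_fine) / fs_fine                 # one second of signal
x = np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*5000*t)   # 5 kHz > fs/2

step = fs_fine // fs
naive = x[::step]                                # sample with no filtering
sos = signal.butter(8, 3600, fs=fs_fine, output='sos')    # cutoff below fs/2
filtered = signal.sosfiltfilt(sos, x)[::step]

for name, y in (("naive", naive), ("filtered", filtered)):
    f, p = signal.periodogram(y, fs=fs)
    k = np.argmin(np.abs(f - 3000))              # the 5 kHz tone folds here
    print(f"{name:9s} power at 3 kHz alias: "
          f"{10*np.log10(p[k]/p.max()):7.1f} dB rel. peak")
# Without the filter, a strong spurious 3 kHz line appears (|5000 - 8000| Hz);
# with it, the folded component drops far below the legitimate 1 kHz peak.
```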
Design Principles
Low-Pass Filter Basics
A low-pass filter is a signal processing component that permits signals with frequencies from direct current (DC) up to a designated cutoff frequency to pass with little or no attenuation, while progressively attenuating frequencies above the cutoff.[16] This behavior confines the output spectrum to lower frequencies, making low-pass filters essential for applications requiring bandlimitation, such as preventing aliasing in sampled systems.[16]

Low-pass filters are categorized into analog and digital types, with analog variants typically employed prior to sampling in anti-aliasing setups due to their continuous-time operation on real-world signals. Analog low-pass filters include passive designs, such as resistor-capacitor (RC) circuits, and active configurations utilizing operational amplifiers (op-amps) for enhanced performance such as higher gain or impedance matching. In contrast, digital low-pass filters encompass finite impulse response (FIR) structures, which are non-recursive and offer linear phase characteristics, and infinite impulse response (IIR) designs, which are recursive and more computationally efficient but potentially nonlinear in phase.[16][17]

The transfer function for a first-order analog low-pass filter, a fundamental building block, is expressed in the s-domain as
H(s) = \frac{1}{1 + \frac{s}{\omega_c}},
where \omega_c denotes the cutoff angular frequency in radians per second.[18] In the frequency domain, substituting s = j\omega yields the response
H(j\omega) = \frac{\omega_c}{j\omega + \omega_c},
with magnitude
|H(j\omega)| = \frac{1}{\sqrt{1 + \left( \frac{\omega}{\omega_c} \right)^2}},
which rolls off at -20 dB per decade beyond \omega_c, and phase shift
\phi(\omega) = -\arctan\left( \frac{\omega}{\omega_c} \right),
shifting from 0° at low frequencies to -90° at high frequencies.[18] In the time domain, the impulse response of a first-order low-pass filter demonstrates exponential decay, given by
h(t) = \omega_c e^{-\omega_c t} u(t),
where u(t) is the unit step function, reflecting the filter's smoothing effect over a time constant \tau = 1/\omega_c. This decay characterizes the filter's memory, with the response lingering briefly after an impulse input before settling.[16]
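These expressions are easy to check numerically; a small sketch evaluating the first-order magnitude and phase at a few frequencies (Python/NumPy; the 1 kHz cutoff is arbitrary):

```python
import numpy as np

fc = 1_000.0                     # cutoff frequency in Hz (illustrative)
wc = 2 * np.pi * fc              # cutoff angular frequency, rad/s
w = 2 * np.pi * np.array([100.0, 1_000.0, 10_000.0, 100_000.0])

mag = 1 / np.sqrt(1 + (w / wc) ** 2)       # |H(jw)| from the formula above
phase = -np.degrees(np.arctan(w / wc))     # phi(w), in degrees

for f_hz, m, p in zip(w / (2 * np.pi), mag, phase):
    print(f"{f_hz:>9.0f} Hz: {20*np.log10(m):7.1f} dB, {p:6.1f} deg")
# Expected: about -3 dB and -45 deg at fc, then -20 dB per decade with
# the phase approaching -90 deg well above fc.
```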
Cutoff Frequency and Transition Band
In anti-aliasing filters, the cutoff frequency is typically selected to be at the Nyquist frequency, f_s / 2, where f_s is the sampling rate, so that the signal is bandlimited and frequencies above this threshold cannot cause aliasing during digitization.[19] This placement aligns the filter's passband edge with the highest frequency that can be accurately represented in the sampled domain, as dictated by the Nyquist-Shannon sampling theorem.[20] However, due to practical filter imperfections, the effective cutoff is often set slightly below f_s / 2 to accommodate the transition band's roll-off characteristics.[21]

The transition band in an anti-aliasing filter refers to the frequency region between the passband edge and the stopband onset, where the filter's attenuation gradually increases from minimal to significant levels, determining the overall sharpness of the frequency response.[22] The width of this band is influenced by the filter type (e.g., Butterworth or Chebyshev) and order, with narrower transitions requiring more complex designs to achieve steeper roll-off without excessive passband distortion.[23] A wider transition band allows simpler, lower-order filters but may permit some higher-frequency components to partially alias if they are not sufficiently attenuated.

To balance aliasing prevention with practical constraints, the minimum sampling rate is often designed using the relation
f_s \geq 2(B + G),
where B is the signal bandwidth of interest and G is a guard band accounting for the transition region's width, ensuring adequate attenuation before the Nyquist frequency.[24] Factors such as sampling rate limitations, available hardware components, and desired attenuation levels influence this selection; for instance, tighter sampling rate budgets leave less room for the guard band and demand sharper filters, while component availability may limit achievable transition sharpness.[25] Achieving a narrower transition band generally demands higher-order filters to meet performance requirements without increasing f_s excessively.[26]

In audio applications, for a standard sampling rate of 44.1 kHz targeting a bandwidth up to 20 kHz (the human hearing limit), the cutoff frequency is set near 22 kHz, with a typical transition band width of 2-5 kHz to provide the necessary guard against aliasing while maintaining audio fidelity.[27] This configuration ensures that ultrasonic frequencies above 22 kHz are sufficiently attenuated, preventing distortion in the audible range.[24]
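The penalty for omitting a guard band can be quantified with a standard filter-order estimate; a sketch using SciPy's buttord with an illustrative CD-style specification (pass 20 kHz with at most 1 dB ripple, 60 dB down by the 22.05 kHz Nyquist frequency):

```python
from scipy import signal
import numpy as np

# Required analog Butterworth order for a very narrow transition band:
# passband edge 20 kHz, stopband edge 22.05 kHz (angular frequencies).
n, wn = signal.buttord(wp=2*np.pi*20e3, ws=2*np.pi*22.05e3,
                       gpass=1, gstop=60, analog=True)
print(n)   # roughly 78 -- impractically high for an analog filter,
           # which is why guard bands and oversampling are used instead
```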
Filter Order and Attenuation
The filter order refers to the number of poles (and, in some designs, corresponding zeros) in the transfer function of the filter, which determines the steepness of the transition from the passband to the stopband.[15] Higher orders produce a sharper roll-off, allowing better suppression of frequencies above the cutoff while preserving the signal of interest. For a Butterworth filter, commonly used in anti-aliasing due to its maximally flat passband response, each additional order increases the roll-off rate by 6 dB per octave, or 20 dB per decade.[1][28] Thus, an nth-order Butterworth filter achieves a roll-off of 20n dB per decade.[15] The magnitude response of an nth-order low-pass Butterworth filter is given by
|H(j\omega)|^2 = \frac{1}{1 + \left( \frac{\omega}{\omega_c} \right)^{2n}},
where \omega is the angular frequency and \omega_c is the cutoff angular frequency.[29] This formulation ensures a -3 dB attenuation at the cutoff frequency and progressively steeper attenuation beyond it, critical for minimizing aliasing by rejecting out-of-band components.[28]

To effectively suppress aliasing artifacts, the stopband must provide sufficient attenuation, typically at least 40-60 dB, to ensure aliased components fall below the system's noise floor and do not degrade signal fidelity.[30] For instance, in 12-bit systems, attenuations around 74 dB are targeted to match the dynamic range, preventing aliases from exceeding 0.1% of the signal energy.[30] This level keeps distortion inaudible or imperceptible in applications such as audio and imaging.[31]

However, increasing the filter order introduces trade-offs, including higher implementation complexity due to more components or computational demands, greater group delay that can distort signal timing, and increased potential for ringing artifacts (the Gibbs phenomenon) near sharp transitions.[32][33] These effects arise because higher-order filters exhibit more oscillatory impulse responses, balancing alias rejection against overall system stability and phase linearity.[32]

In practice, anti-aliasing filters for audio and imaging applications often employ 4th- to 8th-order designs to achieve less than 1% aliasing energy while maintaining feasible complexity.[34] For example, cascading second-order sections can realize an 8th-order Butterworth filter suitable for sampling rates where stopband rejection exceeds 60 dB within a narrow transition band.[15] This order range provides adequate performance without excessive ringing or delay in real-world systems.[35]
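A quick numerical check of these roll-off figures, evaluating the Butterworth magnitude formula at frequencies normalized to the cutoff (Python/NumPy):

```python
import numpy as np

def butterworth_atten_db(f, fc, n):
    """Attenuation of an nth-order Butterworth low-pass at frequency f,
    from |H|^2 = 1 / (1 + (f/fc)^(2n))."""
    return 10 * np.log10(1 + (f / fc) ** (2 * n))

# One octave above cutoff gives roughly 6n dB; one decade gives ~20n dB.
for n in (2, 4, 8):
    print(f"order {n}: {butterworth_atten_db(2.0, 1.0, n):5.1f} dB at 2*fc, "
          f"{butterworth_atten_db(10.0, 1.0, n):6.1f} dB at 10*fc")
```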
Audio Applications
Standard Implementation
In standard audio applications, the anti-aliasing filter is positioned as an analog low-pass filter immediately before the analog-to-digital converter (ADC) in recording equipment to attenuate frequencies above the Nyquist limit and prevent aliasing distortion during digitization.[1] This placement ensures that only the desired audio bandwidth enters the sampling process, preserving signal integrity in professional and consumer recording chains.[36]

The use of anti-aliasing filters in digital audio emerged in the early 1980s alongside the commercialization of pulse-code modulation (PCM) systems, coinciding with the introduction of the Compact Disc (CD) format in 1982.[37] Early consumer-grade processors, such as Sony's PCM-F1 released in 1981, employed simple RC low-pass filters for anti-aliasing, which provided basic attenuation but often suffered from insufficient steepness, leading to audible artifacts in high-frequency content.[38] These rudimentary designs were later upgraded by specialists such as Apogee Electronics to more sophisticated analog filters, marking a transition toward higher-performance implementations in studio environments.[38]

For CD audio with a sampling rate of 44.1 kHz, typical anti-aliasing filters feature a cutoff frequency between 20 and 22 kHz to align with human auditory limits while allowing a narrow transition band up to the Nyquist frequency of 22.05 kHz.[39] Higher-order filters, such as 6th- to 9th-order elliptic or Chebyshev designs, are commonly used to achieve more than 60 dB of stopband attenuation, effectively suppressing ultrasonic noise and intermodulation products while minimizing passband distortion.[40] These filters balance sharp roll-off against manageable phase distortion, ensuring minimal impact on the audible 20 Hz to 20 kHz range.[41]

Such filters significantly reduce intermodulation distortion by blocking frequencies that could fold back into the audio band; for instance, without filtering, a 25 kHz tone sampled at 44.1 kHz would alias to approximately 19 kHz, creating a false low-frequency artifact indistinguishable from legitimate content.[42] In practice, well-implemented higher-order filters can attenuate such a tone by over 60 dB, rendering it inaudible and maintaining transparency in the final recording.[39]

In symmetric digital audio workflows, anti-aliasing filters are often paired with reconstruction filters (also known as anti-imaging filters) following the digital-to-analog converter (DAC) to remove spectral images generated during playback.[39] This integrated approach, using analogous low-pass designs on both ends, ensures bidirectional fidelity in recording and reproduction systems, with early 1980s processors like the PCM-F1 incorporating basic versions of both for end-to-end signal protection.[38]
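A sketch of such a design, assuming SciPy: a 9th-order analog elliptic prototype with 0.5 dB passband ripple and a 60 dB stopband (parameter values are illustrative, not any specific product's filter), probed at the frequencies discussed above:

```python
import numpy as np
from scipy import signal

# Illustrative analog elliptic anti-aliasing filter for CD-rate capture:
# 9th order, 0.5 dB ripple to 20 kHz, at least 60 dB stopband attenuation.
b, a = signal.ellip(9, rp=0.5, rs=60, Wn=2*np.pi*20e3, analog=True)

probe_hz = (20e3, 22.05e3, 25e3)
w, h = signal.freqs(b, a, worN=2*np.pi*np.array(probe_hz))
for f_hz, hv in zip(probe_hz, h):
    print(f"{f_hz/1e3:5.2f} kHz: {20*np.log10(abs(hv)):7.1f} dB")
# At and beyond the 22.05 kHz Nyquist frequency the response is at least
# 60 dB down, so a 25 kHz tone that would fold to ~19.1 kHz is inaudible.
```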
Oversampling Techniques
Oversampling techniques in audio applications for anti-aliasing involve sampling the analog signal at a higher rate f_s' = K \cdot f_s, where K > 1 is the oversampling factor and f_s is the base Nyquist-rate sampling frequency (twice the maximum signal frequency of interest), followed by digital low-pass filtering and decimation back to f_s. This process relaxes the demands on the preceding analog anti-aliasing filter by expanding the allowable transition band from the signal band edge f_s / 2 up to the raised Nyquist frequency K \cdot f_s / 2, enabling the use of less aggressive filter designs with shallower roll-off slopes.[1][43]

A key advantage is that lower-order analog filters—often first- or second-order RC networks—can achieve sufficient attenuation, as the transition band width approximates (K - 1) \cdot f_s / 2. Potential aliasing components are shifted to higher frequencies beyond the expanded Nyquist limit, improving effective alias suppression by the factor K. In delta-sigma ADCs, oversampling pairs with noise shaping to push quantization noise out of the signal band, yielding an SNR improvement of approximately 10 \log_{10} K dB from the noise spreading alone; first-order noise shaping raises the combined gain to roughly 9 dB per doubling of K, concentrating the remaining noise outside the band of interest.[44]

For example, in professional audio workflows based on compact disc standards, 4× oversampling raises the rate to 176.4 kHz from a 44.1 kHz base, allowing gentler analog filters while digital decimation handles precise band-limiting. Modern DAC implementations, such as those in ESS Technology's Sabre series (e.g., ES9038PRO), employ internal oversampling ratios such as 8× in the FIR stage via proprietary HyperStream delta-sigma modulators to minimize aliasing distortion and enhance dynamic range.[45] While oversampling increases data throughput and processing demands during the initial high-rate acquisition, digital decimation filters effectively attenuate residual out-of-band aliases post-conversion, mitigating these challenges in integrated audio systems.[1]
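A minimal end-to-end sketch of the oversample-filter-decimate chain (Python/SciPy; tone frequencies are illustrative, and the 2nd-order digital Butterworth stands in for the gentle analog pre-filter):

```python
import numpy as np
from scipy import signal

K, fs = 4, 44_100                # oversampling factor and base rate
fs_os = K * fs                   # capture at 176.4 kHz

t = np.arange(fs_os) / fs_os
x = np.sin(2*np.pi*10e3*t) + 0.1*np.sin(2*np.pi*60e3*t)  # in-band + ultrasonic

# Stand-in for a gentle 2nd-order "analog" pre-filter: at the raised rate,
# the wide transition band makes a shallow roll-off sufficient.
sos = signal.butter(2, 30e3, fs=fs_os, output='sos')
x_pre = signal.sosfilt(sos, x)

# Sharp band-limiting is then done digitally before decimating back to fs;
# scipy's decimate applies an anti-alias FIR as part of the step.
y = signal.decimate(x_pre, K, ftype='fir')
f, p = signal.periodogram(y, fs=fs)
print(f"strongest tone: {f[np.argmax(p)]:.0f} Hz")  # 10 kHz; no 60 kHz fold
```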
Bandpass Signals and Overload Prevention
In scenarios involving bandpass audio signals confined to a frequency range [f_1, f_2], where f_2 < f_s/2 and f_1 > 0, a bandpass anti-aliasing filter is preferred over a low-pass filter. This design choice rejects low-frequency noise below f_1, which might otherwise contribute unnecessary interference, while attenuating high frequencies above f_2 to prevent aliasing into the desired band.[30][7] The filter's passband is typically centered at (f_1 + f_2)/2 to align with the signal's spectral content, ensuring minimal distortion within the band of interest. For instance, in digital telephony systems, voice signals occupy 300 Hz to 3.4 kHz and are sampled at 8 kHz; a bandpass anti-aliasing filter limits the input accordingly, preserving speech intelligibility while complying with the Nyquist criterion.[46][47]

High-amplitude signals with broadband energy can overload the ADC, leading to clipping and distortion; anti-aliasing filters mitigate this by restricting bandwidth and reducing overall peak power. The filtered signal power must satisfy
P_\text{filtered} = \int_{-\infty}^{\infty} |S(f)|^2 |H(f)|^2 \, df < P_\text{ADC full-scale},
where S(f) is the input spectrum and H(f) is the filter response, ensuring the input stays within the converter's dynamic range.[1] In live sound mixing applications, such filters prevent intermodulation distortion (IMD) from harmonic content generated in microphone preamplifiers, which could overload the ADC and introduce nonlinear artifacts into the digital mix. Oversampling serves as a complementary technique for additional overload mitigation by providing headroom beyond standard Nyquist limits.[1]
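A hedged sketch of a telephony-style bandpass front end, assuming SciPy (the 4th-order analog Butterworth prototype and the probe frequencies are illustrative):

```python
import numpy as np
from scipy import signal

# Illustrative analog bandpass front end ahead of an 8 kHz sampler:
# pass 300 Hz - 3.4 kHz, reject hum below the band and foldable energy above.
b, a = signal.butter(4, [2*np.pi*300, 2*np.pi*3400],
                     btype='bandpass', analog=True)

probe_hz = np.array([60.0, 1_000.0, 12_000.0])   # hum, speech, ultrasonic
w, h = signal.freqs(b, a, worN=2*np.pi*probe_hz)
for f_hz, hv in zip(probe_hz, h):
    print(f"{f_hz:7.0f} Hz: {20*np.log10(abs(hv)):7.1f} dB")
# 12 kHz, which would fold to |12 - 2*8| = 4 kHz when sampled at 8 kHz,
# is attenuated by tens of dB, while the speech band passes nearly intact.
```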
Optical and Imaging Applications
In Photographic and Sensor Systems
In photographic and sensor systems, optical anti-aliasing filters serve as low-pass filters positioned in front of the image sensor to mitigate spatial aliasing, where high-frequency scene details exceeding the sensor's Nyquist frequency—defined as \frac{1}{2 \times \text{pixel pitch}}—fold back into lower frequencies, distorting the captured image. These filters introduce a controlled blur to attenuate spatial frequencies above this limit, ensuring that the sampled signal adheres to the Nyquist-Shannon sampling theorem in the spatial domain.[48][49]

The primary mechanism involves birefringent materials, such as quartz crystals, which exploit the material's anisotropic refractive index to split incoming light rays into orthogonally polarized components displaced by sub-pixel amounts, typically creating a multi-spot point spread function that smears fine details without excessive overall softening. This approach, common in digital single-lens reflex (DSLR) and mirrorless cameras, targets the sensor's pixel grid to suppress aliasing while preserving as much resolution as possible below the Nyquist frequency. Early implementations in professional digital cameras, such as Kodak's DCS series introduced in the early 1990s, employed such filters—often as removable modules—to address aliasing in high-megapixel sensors, with some models exploring diffraction gratings as an alternative for generating the low-pass effect through controlled light scattering.[50][51]

Without an optical anti-aliasing filter, repeating high-frequency patterns in the scene—such as textiles, architectural grids, or printed halftones—can produce prominent moiré fringes, manifesting as false color artifacts or undulating interference patterns; for instance, photographing a finely woven fabric might produce rainbow-like bands aliasing across the image due to interference between the pattern's periodicity and the sensor's sampling grid. These effects are exacerbated in color sensors with Bayer filter arrays, where spatial undersampling of the chrominance channels amplifies the distortion.[48][52]

In terms of design, birefringent filters typically consist of two or three precisely oriented crystal plates that induce a phase shift, displacing light rays by approximately half a pixel in both horizontal and vertical directions to form a symmetric four-spot pattern, which ideally reduces the modulation transfer function (MTF) to near zero at the Nyquist frequency while providing about 50% attenuation just below it for effective aliasing suppression (see the sketch at the end of this section). This configuration balances the trade-off between reduced sharpness (from the induced blur) and artifact prevention, with the filter's thickness and orientation tuned to the sensor's pixel pitch, typically around 9-16 μm in 1990s models.[50]

Contemporary trends reflect advancements in sensor technology and processing power: some high-resolution mirrorless cameras, such as the Sony α7R series (starting with the original α7R in 2013), forgo the optical anti-aliasing filter entirely to capture maximum detail, accepting potential moiré in exchange for enhanced acuity and relying on sophisticated in-camera de-mosaicing algorithms or post-processing software to mitigate artifacts selectively. This shift is enabled by smaller pixel pitches (e.g., below 5 μm) and lens modulation transfer functions that naturally attenuate some high frequencies, reducing the necessity for hardware blurring when sharpness is prioritized over perfect aliasing control.
As of 2025, many manufacturers, including Fujifilm with its X-Trans sensors and Canon in recent EOS R models, continue to omit optical low-pass filters in favor of computational anti-aliasing techniques, leveraging AI for selective moiré removal.[53]
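The MTF behavior described above can be illustrated with a one-dimensional, two-spot simplification of the four-spot birefringent pattern, whose transfer function is |cos(π ν d)| for spot separation d (Python/NumPy; the pixel pitch is illustrative):

```python
import numpy as np

# Two-spot blur: PSF = 0.5*[delta(x - d/2) + delta(x + d/2)], whose MTF is
# |cos(pi * nu * d)|. With d equal to one pixel pitch p (spots displaced by
# +/- half a pixel), the response reaches zero at the Nyquist frequency.
p = 5e-6                          # pixel pitch in meters (illustrative)
d = p                             # total spot separation
nyquist = 1 / (2 * p)             # spatial Nyquist frequency, cycles/m

for frac in (0.25, 0.5, 0.75, 1.0):
    nu = frac * nyquist
    print(f"{frac:4.2f} * Nyquist: MTF = {abs(np.cos(np.pi * nu * d)):.3f}")
# The response falls smoothly to 0.000 at the Nyquist frequency, trading
# some contrast below the limit for suppression of aliased detail above it.
```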
In Digital Rendering and Displays
In digital rendering, anti-aliasing filters mitigate aliasing artifacts that arise from the discrete sampling of continuous geometric scenes onto a pixel grid, where high-frequency spatial details, such as diagonal edges, manifest as undesirable stair-stepping or moiré patterns. This phenomenon stems from the Nyquist-Shannon sampling theorem applied to computer graphics: if the sampling frequency is less than twice the highest frequency in the scene, spectral overlap occurs, folding high frequencies into lower ones and distorting the reconstructed image.[54] To prevent this, anti-aliasing techniques aim to bandlimit the signal prior to sampling or to increase the effective sampling rate, ensuring faithful reconstruction via low-pass filtering.

A foundational approach is supersampling, which renders the scene at a higher resolution—typically 2× or 4× the target—before downsampling to the final pixel grid, effectively averaging multiple samples to approximate the continuous integral over each pixel's area. Multisample anti-aliasing (MSAA), an optimization for GPU pipelines, extends this by taking multiple geometry samples (e.g., depth and coverage) per pixel during rasterization while shading only once per pixel, then resolving the samples into a final color via averaging.[55] The resolved pixel color is computed as the uniform average of the samples:
\text{pixel\_color} = \frac{1}{N} \sum_{i=1}^{N} \text{sample}_i,
where N is the number of samples per pixel.[56] This method efficiently reduces edge aliasing in real-time rendering, with hardware support introduced in consumer GPUs around 2001.[57] Implementations in graphics APIs such as OpenGL and DirectX enable MSAA through multisample framebuffer attachments, where developers specify sample counts (e.g., 4× or 8×) to balance quality and performance.

Post-processing alternatives, such as NVIDIA's Fast Approximate Anti-Aliasing (FXAA), operate in screen space via pixel shaders, detecting and blurring aliased edges using a 3x3 local filter without multisampling hardware.[58] FXAA applies a low-pass filter to sub-pixel aliasing, averaging neighboring pixel luma values to smooth transitions while preserving sharpness, typically at under 1 ms per frame on modern hardware.[58] Professional GPUs, such as the NVIDIA Quadro series, support up to 8× MSAA for computer-aided design (CAD) applications, minimizing jagged lines in wireframe models without excessive performance overhead.

Display technologies complement rendering-side anti-aliasing; for instance, LCD panels employ subpixel rendering to enhance text and edge smoothness by independently addressing the red, green, and blue subpixels within each pixel, effectively tripling horizontal resolution for anti-aliased content. Microsoft's ClearType, introduced in 2001, optimizes this via linear filters tuned to human vision models, converting glyph outlines to subpixel intensities for a crisper appearance on fixed-pixel LCDs.[59]

The evolution of these techniques traces from 1990s texture anti-aliasing via mipmapping—precomputed pyramids of textures at power-of-two scales, matched to viewing distance to avoid minification aliasing, as pioneered in Lance Williams' 1983 work—to contemporary AI-driven methods such as NVIDIA's Deep Learning Super Sampling (DLSS) in the 2020s. DLSS uses convolutional neural networks trained on high-quality renders to upscale lower-resolution frames with temporal data, achieving supersampling-like quality at reduced compute cost.[60]
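A minimal sketch of the sample-averaging resolve and the supersampled downscale described earlier in this section (Python/NumPy; function names are illustrative):

```python
import numpy as np

def resolve(samples):
    """MSAA-style resolve: uniformly average the N samples of each pixel,
    i.e. pixel = (1/N) * sum of its samples (samples on the last axis)."""
    return samples.mean(axis=-1)

def supersample_downscale(img, k):
    """Box-filter a k*k supersampled image down to the target resolution:
    each output pixel averages its k*k block of high-resolution samples."""
    h, w = img.shape[0] // k, img.shape[1] // k
    return img.reshape(h, k, w, k).mean(axis=(1, 3))

# Illustrative: a hard diagonal edge rendered at 4x resolution, then
# downsampled; edge pixels take fractional gray values instead of a
# stair-stepped 0/1 pattern.
k, size = 4, 8
yy, xx = np.mgrid[0:size*k, 0:size*k]
hires = (xx > yy).astype(float)           # binary coverage of a half-plane
print(supersample_downscale(hires, k))    # smoothed edge values in [0, 1]
```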