
Reconstruction filter

A reconstruction filter is a low-pass filter used in digital-to-analog conversion to reconstruct a continuous-time signal from a discrete-time sampled signal, attenuating high-frequency spectral images and smoothing the stepwise output into a bandlimited waveform that approximates the original continuous signal. The process relies on interpolation, where the filter's impulse response ensures that the reconstructed signal matches the input samples at the sampling instants, effectively performing the inverse of sampling as described in the Nyquist-Shannon sampling theorem. The ideal reconstruction filter has a rectangular frequency response with a cutoff at the Nyquist frequency f_s/2 (where f_s is the sampling frequency) and the sinc function \text{sinc}(t/T) as its impulse response, where T = 1/f_s, enabling perfect recovery of bandlimited signals sampled above the Nyquist rate without aliasing or distortion. In mathematical terms, the reconstructed signal y(t) is given by the interpolation sum y(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot g(t - nT), where x[n] are the samples and g(t) is the filter impulse response satisfying g(nT) = \delta[n] (the interpolation condition). However, the infinite duration of the sinc function makes it impractical, leading to approximations such as finite impulse response (FIR) filters for linear-phase preservation or infinite impulse response (IIR) filters such as Bessel designs for minimal phase distortion in applications requiring flat group delay.

Reconstruction filters are essential in various engineering domains, including audio processing, where they smooth DAC outputs to prevent audible artifacts from ultrasonic images (e.g., using seventh-order linear-phase filters for over 120 dB of stopband attenuation); image processing, where they interpolate pixel values and reduce aliasing in rasterization; and control systems, where they recover accurate signals from sensor data. In computer graphics, they facilitate anti-aliasing by interpolating between discrete samples to generate continuous curves or surfaces, often employing spline-based methods for higher-order smoothness. Practical designs balance computational efficiency, stopband attenuation, and passband ripple to meet specific bandwidth requirements, such as 20 kHz cutoffs in high-fidelity audio systems.

Basic Principles

Definition and Role in Sampling

A reconstruction filter is a low-pass filter applied after digital-to-analog conversion to interpolate between samples and suppress high-frequency images, or spectral replicas, introduced during the sampling process. This process reconstructs a continuous-time signal from its discrete representation, ensuring that the output closely matches the original within the baseband frequency range. In the sampling-reconstruction cycle, uniform sampling of a continuous-time signal produces a discrete-time sequence whose frequency spectrum consists of the original spectrum repeated at multiples of the sampling frequency, creating these unwanted high-frequency replicas. The reconstruction filter acts as an ideal low-pass filter with a cutoff at the Nyquist frequency (half the sampling rate), which isolates the baseband component and attenuates the replicas, thereby preventing imaging artifacts and spectral folding that would otherwise distort the reconstructed signal. This role is essential for faithful signal recovery, as outlined in the Nyquist-Shannon sampling theorem.

The ideal reconstruction formula, known as the Whittaker-Shannon interpolation, expresses the continuous-time signal x(t) as an infinite sum over the discrete samples x[n]: x(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \text{sinc}\left( \frac{t - nT}{T} \right), where T is the sampling period and \text{sinc}(u) = \frac{\sin(\pi u)}{\pi u}. This sinc-based interpolation ensures perfect reconstruction for bandlimited signals sampled above the Nyquist rate, with the filter's sinc impulse response corresponding to an ideal rectangular response in the frequency domain.
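
The interpolation formula can be evaluated directly by summing shifted, scaled sinc functions. The following sketch (a minimal NumPy illustration; the test signal, rates, and function name are chosen for this example, and the finite sample window leaves a small truncation error) reconstructs a bandlimited sine from its samples:

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: evaluate the reconstructed
    continuous-time signal at times t from uniform samples."""
    T = 1.0 / fs
    n = np.arange(len(samples))
    # Each sample contributes one shifted, scaled sinc; np.sinc(u) = sin(pi u)/(pi u).
    return np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

# Example: a 3 Hz sine sampled at 16 Hz, comfortably above the 6 Hz Nyquist rate.
fs = 16.0
n = np.arange(64)
x = np.sin(2 * np.pi * 3.0 * n / fs)

t = np.linspace(1.0, 3.0, 200)      # interior times, away from the truncated edges
x_hat = sinc_reconstruct(x, fs, t)
print(np.max(np.abs(x_hat - np.sin(2 * np.pi * 3.0 * t))))  # small (~1e-2); shrinks with more samples
```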

Mathematical Foundation

The Nyquist-Shannon sampling theorem provides the theoretical basis for reconstructing a continuous-time signal from its discrete samples, asserting that a bandlimited signal with bandwidth B (meaning its Fourier transform X(f) = 0 for |f| > B) can be perfectly recovered if sampled at a rate f_s > 2B, known as the Nyquist rate. This condition ensures that the sampling process captures all necessary information without loss, as the theorem establishes a one-to-one correspondence between the continuous signal and its samples taken at intervals T = 1/f_s. Reconstruction is feasible only under this bandwidth constraint, preventing aliasing, in which higher frequencies masquerade as lower ones.

In the frequency domain, uniform sampling of a continuous-time signal produces a discrete-time signal whose spectrum consists of periodic repetitions of the original spectrum, scaled by the reciprocal of the sampling period. The spectrum of the ideally sampled signal is X_s(f) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\left(f - k f_s\right), where the replicas are centered at multiples of the sampling frequency f_s. To recover the original signal x(t), whose spectrum is confined to |f| < B < f_s/2, an ideal low-pass reconstruction filter is applied with frequency response H(f) = \begin{cases} T & |f| < f_s/2 \\ 0 & \text{otherwise}. \end{cases} This filter extracts the baseband copy while suppressing the replicas, normalizing the amplitude with the gain T to match the original spectrum. The condition B < f_s/2 guarantees separation of the spectral components, avoiding distortion from imaging or aliasing.

The sinc function arises as the time-domain impulse response of this ideal brick-wall low-pass filter, derived from the inverse Fourier transform of H(f): h(t) = \int_{-f_s/2}^{f_s/2} T e^{j 2 \pi f t} \, df = T \cdot \frac{\sin(\pi f_s t)}{\pi t} = \frac{\sin(\pi f_s t)}{\pi f_s t} = \text{sinc}\left( \frac{t}{T} \right), where the normalized sinc is \text{sinc}(u) = \sin(\pi u)/(\pi u). Convolving the sampled impulse train \sum_{n=-\infty}^{\infty} x(nT) \delta(t - nT) with h(t) yields the Whittaker-Shannon interpolation formula for the reconstructed signal: x(t) = \sum_{n=-\infty}^{\infty} x(nT) \, \text{sinc}\left( \frac{t - nT}{T} \right). This formula interpolates the samples using shifted and scaled sinc functions, which are mutually orthogonal, vanish at all other sampling instants, and are bandlimited to f_s/2. In non-ideal filters, transition bands around the cutoff introduce minor errors, but the mathematical ideal assumes an infinitely sharp roll-off for exact recovery.
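
The replicated spectrum X_s(f) can be made visible numerically. In the sketch below (assumed parameters; a dense time grid stands in for continuous time, and the frequencies are chosen to land on exact FFT bins), impulse-train sampling of a 12.5 Hz cosine at f_s = 100 Hz produces images at k f_s \pm 12.5 Hz:

```python
import numpy as np

fs = 100.0        # sampling rate of the discrete signal (Hz)
dense = 8         # dense-grid factor standing in for the continuous axis
f0 = 12.5         # tone frequency, well below fs/2
N = 1024          # dense-grid length

t = np.arange(N) / (fs * dense)
x_c = np.cos(2 * np.pi * f0 * t)     # "continuous" signal on the dense grid
x_s = np.zeros(N)
x_s[::dense] = x_c[::dense]          # impulse-train sampling at rate fs

F = np.fft.rfftfreq(N, d=1.0 / (fs * dense))
S = np.abs(np.fft.rfft(x_s))
# Equal-height peaks at f0 and at the images k*fs +/- f0, as predicted by
# X_s(f) = (1/T) * sum_k X(f - k*fs).
print(F[S > 0.5 * S.max()])          # [12.5, 87.5, 112.5, 187.5, 212.5, ...]
```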

Design and Implementation

Ideal Reconstruction Filter

The ideal reconstruction filter is a theoretical low-pass filter that enables perfect reconstruction of a continuous-time bandlimited signal from its uniformly spaced discrete-time samples, provided the sampling rate satisfies the Nyquist-Shannon sampling theorem. This filter assumes the original signal is bandlimited to frequencies below half the sampling rate, ensuring no aliasing occurs during reconstruction. The impulse response of the ideal reconstruction filter is the normalized sinc function, given by h(t) = \text{sinc}\left(\frac{t}{T}\right) = \frac{\sin(\pi t / T)}{\pi t / T}, where T = 1/f_s is the sampling period and f_s is the sampling frequency. The Fourier transform of this impulse response yields a rectangular frequency response: a flat passband with gain T from -f_s/2 to f_s/2 (the Nyquist frequency), and zero gain elsewhere. This design ensures a sharp cutoff at the Nyquist frequency, completely suppressing imaging artifacts above that point while preserving all signal content within the band.

In the time domain, the ideal sinc-based interpolation contrasts with simpler methods like the zero-order hold, which reconstructs the signal as a piecewise-constant waveform and introduces distortion. The abrupt frequency cutoff of the infinitely long sinc response leads to oscillations or ringing near signal discontinuities, known as the Gibbs phenomenon.

Despite its theoretical perfection, the ideal reconstruction filter is unattainable in practice because it is non-causal, requiring knowledge of future samples due to the sinc function extending infinitely in both directions. Implementing it digitally would demand an infinite number of taps, making it computationally infeasible. Additionally, its sensitivity to quantization noise in finite-precision systems amplifies errors during reconstruction, as the infinite summation integrates noise across all samples.
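
Truncating the sinc response does not remove this ringing: the overshoot near the cutoff caused by the Gibbs phenomenon stays close to 9% of the step regardless of filter length. A small numerical check (illustrative sketch; the 4x design grid and truncation lengths are arbitrary choices):

```python
import numpy as np

# Compare two truncation lengths of the ideal sinc lowpass (cutoff fs/2 on a
# 4x-oversampled design grid, unit DC gain). The peak of |H| exceeds 1 by the
# Gibbs overshoot, which does not shrink as the filter grows longer.
os = 4                                   # design-grid samples per period T
for half_len in (8, 64):                 # keep |t| <= half_len * T
    n = np.arange(-half_len * os, half_len * os + 1)
    h = np.sinc(n / os) / os             # truncated brick-wall lowpass
    H = np.abs(np.fft.fft(h, 65536))
    print(half_len, round(H.max(), 3))   # ~1.09 in both cases
```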

Practical Filter Approximations

Practical reconstruction filters approximate the ideal sinc function through finite-duration implementations, addressing the infinite extent and non-causality of the theoretical lowpass filter by truncation and modification techniques. In digital systems, finite impulse response (FIR) filters are commonly designed by truncating the sinc impulse response to a finite length M, which introduces the Gibbs phenomenon—ripples in the frequency response due to abrupt truncation. To mitigate sidelobes and ringing, window functions are applied, yielding coefficients h[n] = w[n] \cdot \text{sinc}(n/M) for |n| \leq M/2, where w[n] tapers the response smoothly to zero. Hamming windows reduce sidelobe levels to about -43 dB, compared to the rectangular window's -13 dB, broadening the mainlobe and thus the transition bandwidth while preserving approximate symmetry in passband and stopband ripple. Kaiser windows offer greater flexibility via a parameter \beta, allowing the sidelobe attenuation A to be tuned—for instance, \beta = 0.1102(A - 8.7) for A > 50 dB—balancing sidelobe reduction against increased filter length for sharper transitions. These methods enable causal, finite approximations suitable for real-time processing, though they compromise the ideal brick-wall response with finite transition bands.

In analog domains, infinite impulse response (IIR) filters approximate reconstruction using classical prototypes, selected based on desired frequency selectivity. Butterworth filters provide a maximally flat passband with no ripple, offering a gradual roll-off of 20 dB per decade per pole, suitable for applications prioritizing amplitude flatness over sharp cutoffs. Chebyshev type I filters introduce equiripple passband deviation (e.g., 0.1 dB) for steeper transitions—up to twice as sharp as Butterworth for the same order—but exhibit monotonic stopband attenuation and nonlinear phase. Elliptic (Cauer) filters achieve the steepest transitions with equiripple behavior in both passband and stopband, minimizing the order required for given ripple and attenuation specifications, though they introduce finite-frequency zeros that complicate the phase response. Filter order is determined by specifications such as passband edge, stopband edge, maximum passband ripple, and minimum stopband attenuation, often via design tables or formulas.

Digital efficiency in reconstruction is enhanced through polyphase structures, which decompose FIR interpolators into parallel subfilters operating at the input rate, reducing multiplications by the interpolation factor L—for example, halving computations in L = 2 upsampling by filtering before zero insertion. Zero-stuffing complements this by inserting zeros in the frequency domain during design, enabling high-order FIRs (e.g., 1680 taps from a 168-tap prototype) for DACs without convergence issues in optimization algorithms, followed by inverse DFT and amplitude scaling. These techniques minimize hardware demands in multirate systems while maintaining reconstruction fidelity.

Key trade-offs in practical designs involve balancing passband ripple (e.g., ≤0.1 dB), stopband attenuation (e.g., ≥60 dB), and transition bandwidth against computational cost, as narrower transitions or higher attenuation demand longer filters or higher orders, increasing latency and resource use. For instance, achieving 0.1 dB ripple and 60 dB attenuation in an FIR lowpass might require an order of 100 or more, elevating multiply-accumulate operations per sample. Analog realizations face similar choices, where elliptic filters minimize order but amplify ripple effects, while Butterworth designs avoid ripple at the expense of broader transitions. Optimal selection depends on system constraints, with windowed FIRs favoring digital versatility and classical IIRs suiting analog post-DAC smoothing.
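
As an illustration of the windowed-FIR approach, the sketch below (assumed specification: 60 dB stopband and L = 4 interpolation; SciPy's kaiserord/firwin handle the Kaiser window math) designs an anti-imaging filter and applies it after zero insertion:

```python
import numpy as np
from scipy import signal

# Kaiser-windowed sinc reconstruction (anti-imaging) filter for 4x interpolation.
L = 4                        # interpolation factor
atten_db = 60.0              # assumed stopband attenuation spec
width = 0.1 / L              # transition width as a fraction of the high-rate Nyquist
numtaps, beta = signal.kaiserord(atten_db, width)
numtaps |= 1                 # odd length -> symmetric, linear-phase design
h = signal.firwin(numtaps, cutoff=1.0 / L, window=('kaiser', beta)) * L
# The gain of L restores the signal amplitude lost to zero insertion.

x = np.random.randn(256)                     # low-rate samples
x_up = np.zeros(len(x) * L); x_up[::L] = x   # zero-stuffed high-rate signal
y = signal.lfilter(h, 1.0, x_up)             # smoothed, interpolated output
```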

Applications in Signal Processing

Digital-to-Analog Conversion

In digital-to-analog conversion (DAC) systems, the reconstruction filter plays a critical role in the signal pipeline by smoothing the stairstep output produced by the zero-order hold mechanism inherent in most DAC architectures. The zero-order hold retains each sample value between sampling instants, producing a piecewise-constant waveform that introduces high-frequency spectral images above the Nyquist frequency. The reconstruction filter, typically a low-pass analog filter, attenuates these images while interpolating the samples to approximate the original continuous-time signal, ensuring fidelity in applications like audio playback.

Hardware implementations of reconstruction filters vary by complexity and integration needs. For simple audio DACs, op-amp-based active RC filters provide a cost-effective solution, offering a passband from 20 Hz to 20 kHz for standard CD-quality audio at 44.1 kHz sampling rates, with external components tuned for minimal phase distortion. In integrated circuits, switched-capacitor filters emulate resistors using capacitors and clocked switches, enabling on-chip realization without bulky passive elements and supporting compact designs in modern audio chips.

Performance is evaluated through metrics such as total harmonic distortion plus noise (THD+N) and signal-to-noise ratio (SNR). In high-quality audio DACs operating at 44.1 kHz, THD+N can reach levels below -88 dB with proper filtering, while SNR often exceeds 100 dB (A-weighted) in the 20 Hz–20 kHz band, ensuring low audible artifacts. Challenges include clock jitter, which modulates the sample timing and degrades SNR, particularly at high signal frequencies, and imperfect filter roll-off, which leaves residual images that can fold back into the baseband through downstream sampling or nonlinearities. These issues are mitigated by employing external analog filters with sharp cutoffs to enhance image rejection beyond on-chip capabilities. Historically, early CD players in the 1980s relied on multi-bit DACs paired with steep analog brickwall reconstruction filters, but the shift to delta-sigma modulators oversampled the signal, relaxing analog filter demands and improving overall fidelity.
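
The zero-order hold itself acts as a crude filter with magnitude |H_{zoh}(f)| = |\text{sinc}(f/f_s)|, which both droops the passband and only mildly attenuates the first image—effects a reconstruction filter must correct. A quick calculation (assuming CD-rate parameters for illustration):

```python
import numpy as np

# Zero-order hold magnitude response, with np.sinc(x) = sin(pi x)/(pi x).
# Assumed fs = 44.1 kHz and a 20 kHz audio band edge.
fs = 44100.0

def zoh_db(f):
    return 20 * np.log10(np.abs(np.sinc(f / fs)))

print(zoh_db(20000.0))       # ~ -3.2 dB droop at the 20 kHz band edge
print(zoh_db(fs - 20000.0))  # ~ -4.8 dB at the 24.1 kHz image of a 20 kHz tone:
                             # the hold alone barely suppresses near-Nyquist images
```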

Oversampled Systems

In oversampled systems, signals are sampled at a rate exceeding the Nyquist rate, with the oversampling ratio (OSR) defined as OSR = f_s / (2 f_b), where f_s is the oversampled rate and f_b the signal bandwidth. Common OSR values include 4 or 64, particularly in delta-sigma ADCs and DACs, where this approach facilitates higher internal processing rates while preserving the signal of interest; the baseband bandwidth is then f_b = f_s / (2 \cdot OSR).

A primary benefit of oversampling is the relaxation of reconstruction filter requirements: spectral images of the signal are shifted to higher frequencies beginning at f_s/2, widening the transition band and allowing lower-order analog or digital filters to achieve adequate image suppression without excessive complexity or cost. For instance, doubling the sampling rate can reduce the required filter order from 10 poles to 5 or 6, simplifying the design in practical implementations. Additionally, in conjunction with the noise-shaping techniques prevalent in delta-sigma architectures, quantization noise is redistributed to out-of-band frequencies, enhancing in-band signal quality; oversampling alone provides a signal-to-noise ratio (SNR) gain of 3 dB per octave (doubling of the sampling rate), equivalent to one additional bit of resolution per factor-of-4 increase in OSR.

Oversampled systems often incorporate sample-rate conversion—decimation back toward the baseband rate after digital processing on the acquisition side, and interpolation before reconstruction—employing efficient structures such as cascaded integrator-comb (CIC) filters to perform the anti-aliasing or anti-imaging filtering. CIC filters, which consist of cascaded integrator and comb stages, require no multiplications—only additions and subtractions—making them highly suitable for hardware implementation in high-rate environments like delta-sigma converters, where they minimize computational load and power consumption while providing lowpass characteristics. In applications such as high-resolution audio, oversampling supports elevated rates like 192 kHz in DACs, further reducing analog filter complexity by leveraging digital interpolation filtering prior to conversion, thereby improving overall system performance and ease of design.
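
A single-stage CIC decimator can be sketched in a few lines (illustrative only; practical designs cascade several integrator/comb stages and compensate the filter's passband droop). The R-fold decimator below uses only additions and subtractions:

```python
import numpy as np

def cic_decimate(x, R):
    """Single-stage CIC: integrator at the high rate, comb at the low rate.
    Equivalent to an R-sample moving average followed by R-fold downsampling."""
    acc = np.cumsum(x)               # integrator (running sum) at the input rate
    y = acc[R - 1::R]                # keep every R-th accumulator value
    return np.diff(y, prepend=0.0)   # comb (difference) at the reduced rate

fs, R = 8000.0, 8
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 100.0 * t)    # in-band tone survives decimation
y = cic_decimate(x, R) / R           # divide by R to normalize the DC gain
```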

Applications in Image and Video Processing

Pixel Interpolation

In image processing, reconstruction filters extend the one-dimensional principles of signal reconstruction to two dimensions for handling discrete pixel grids. This involves convolving the input image f[m,n] with a two-dimensional reconstruction kernel h(x,y) to produce a continuous or higher-resolution output g(x,y) = \sum_{m,n} f[m,n] \cdot h(x-m, y-n). To enhance computational efficiency, especially for large images, separable filters are commonly employed, applying one-dimensional convolutions first along rows and then along columns, reducing the operation from O(N^4) to O(2N^3) for an N \times N image and kernel.

Among practical implementations, bilinear interpolation uses a separable tent kernel, which is a product of one-dimensional triangular functions, involving four neighboring pixels and resulting in a simple but often blurry output due to its low-order approximation. Bicubic interpolation, in contrast, employs a 16-tap kernel approximating the cubic B-spline via direct convolution for efficiency (exact spline interpolation requires prefiltering to solve for the coefficients), providing smoother results with better preservation of edges and details, though at higher computational cost. The cubic convolution kernel (Keys' approximation, a = -0.5) is defined as: \beta^3(d) = \begin{cases} 1.5 |d|^3 - 2.5 |d|^2 + 1 & 0 \leq |d| < 1 \\ -0.5 |d|^3 + 2.5 |d|^2 - 4 |d| + 2 & 1 \leq |d| < 2 \\ 0 & |d| \geq 2 \end{cases} For bicubic weights, the interpolation at position (x,y) sums contributions from the 4×4 neighborhood weighted by \beta^3(x-m) \cdot \beta^3(y-n), where m,n are integer pixel indices.

From a frequency-domain perspective, these reconstruction filters act as low-pass filters during magnification to suppress high-frequency components that could introduce artifacts in the upsampled image. Bilinear filters tend to cause more blurring by overly attenuating mid-range frequencies, while bicubic filters balance this with less ringing near edges, though both approximate the ideal sinc response imperfectly.

Pixel interpolation via reconstruction filters finds widespread use in resizing applications, such as Photoshop's Image Size tool, which defaults to bicubic methods for high-quality upscaling or downscaling while minimizing artifacts. In video processing, these techniques enable frame interpolation to increase frame rate, often using separable convolutions to synthesize intermediate frames from adjacent ones, as seen in adaptive separable convolution models for smooth motion rendering.
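
A direct transcription of the Keys kernel and the separable 4×4 weighted sum might look like the following sketch (function names are illustrative; edge handling here simply clamps indices):

```python
import numpy as np

def keys_cubic(d):
    """Keys cubic convolution kernel (a = -0.5), matching the cases above."""
    d = np.abs(d)
    out = np.zeros_like(d)
    near = d < 1
    far = (d >= 1) & (d < 2)
    out[near] = 1.5 * d[near]**3 - 2.5 * d[near]**2 + 1.0
    out[far] = -0.5 * d[far]**3 + 2.5 * d[far]**2 - 4.0 * d[far] + 2.0
    return out

def bicubic_sample(img, x, y):
    """Sample a grayscale image at fractional (x, y) via separable Keys weights."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    m = np.arange(x0 - 1, x0 + 3)            # 4 column indices
    n = np.arange(y0 - 1, y0 + 3)            # 4 row indices
    wx = keys_cubic(x - m)                   # horizontal weights
    wy = keys_cubic(y - n)                   # vertical weights
    patch = img[np.clip(n, 0, img.shape[0] - 1)[:, None],
                np.clip(m, 0, img.shape[1] - 1)[None, :]]
    return wy @ patch @ wx                   # separable weighted sum

img = np.arange(25, dtype=float).reshape(5, 5)   # linear ramp: img[y, x] = 5y + x
print(bicubic_sample(img, 2.3, 1.7))             # ~10.8: the ramp is reproduced exactly
```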

Anti-Aliasing in Rendering

In rendering, reconstruction filters play a crucial role in anti-aliasing by smoothing jagged edges and reducing moiré patterns that arise when discretizing a continuous scene into pixels. This post-rasterization filtering reconstructs an approximation of the original continuous image from the discrete pixel grid, mitigating artifacts caused by high-frequency details like edges and textures.

Key techniques for anti-aliasing in rendering leverage reconstruction filters through supersampling and multisample anti-aliasing (MSAA). Supersampling involves rendering the scene at a higher resolution than the target output, followed by downsampling with a low-pass reconstruction filter to average subpixel samples and suppress the high frequencies that cause aliasing. MSAA, an optimization of supersampling, takes multiple samples per pixel during rasterization using coverage masks to determine fragment contributions, then applies a reconstruction filter during the resolve stage to blend samples into the final color, focusing effort on edge pixels. Common filter choices include the box filter, which uniformly averages samples for simple implementation, and the Gaussian filter, which applies a weighted average to produce smoother transitions and better suppression of aliasing energy, often integrated via programmable shaders in modern GPUs.

Despite their effectiveness, these methods face significant challenges, including high computational and memory costs from increased sample counts, particularly in real-time rendering pipelines such as those built on Direct3D and OpenGL, where MSAA requires specialized multisample render targets and resolve operations. Temporal aliasing, or "shimmering" in animations, emerges as scenes move, since static per-frame filtering fails to maintain consistency across frames, exacerbating artifacts in dynamic environments.

The historical development of reconstruction filters in computer graphics traces back to the late 1970s and 1980s with early rendering systems, where seminal work identified aliasing in shaded images and proposed prefiltering solutions for scan-line renderers. This evolved into hardware-accelerated techniques in graphics APIs, culminating in modern GPU implementations that hybridize ray tracing with AI-assisted reconstruction, such as NVIDIA's DLSS (with DLSS 4 released in 2025 introducing Multi Frame Generation), which uses neural networks to upscale and denoise low-resolution ray-traced samples for efficient, high-quality anti-aliasing.
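
The supersample-then-filter resolve can be sketched with a toy scene (a hard-edged half-plane standing in for rendered geometry; the 4×4 subsample pattern and both filters are illustrative choices):

```python
import numpy as np

ss = 4                                   # subsamples per pixel axis
W = H = 64
ys, xs = np.mgrid[0:H*ss, 0:W*ss] / ss   # subsample coordinates in pixel units
coverage = (xs + ys < 48).astype(float)  # hard diagonal edge: aliased if point-sampled

# Box resolve: uniform average of each ss x ss block.
box = coverage.reshape(H, ss, W, ss).mean(axis=(1, 3))

# Tent resolve: weight subsamples by a triangle centered on the pixel.
w1 = 1.0 - 2.0 * np.abs((np.arange(ss) + 0.5) / ss - 0.5)  # per-axis triangle weights
w2 = np.outer(w1, w1); w2 /= w2.sum()
tent = np.einsum('hiwj,ij->hw', coverage.reshape(H, ss, W, ss), w2)

assert box.shape == tent.shape == (H, W)  # both yield softened, anti-aliased edges
```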

Advanced Techniques

Wavelet-Based Reconstruction

Wavelet-based reconstruction extends traditional low-pass filtering by employing multiresolution analysis, where signals are decomposed into approximation (low-pass) and detail (high-pass) subspaces using a two-channel filter bank. This decomposition allows for scalable representation across multiple resolution levels, capturing both global trends and local variations through successive filtering and downsampling. Reconstruction is achieved via the inverse transform, which upsamples the subband signals and applies synthesis filters to recombine them, ensuring the original signal is recovered without loss when perfect reconstruction conditions are met.

A key requirement for perfect reconstruction in wavelet filter banks is the use of quadrature mirror filters (QMF), where the analysis filters H_0(z) and H_1(z) satisfy the condition H_0(z) H_1(-z) - H_1(z) H_0(-z) = 2z^{-l} for some delay l. This ensures alias cancellation and no amplitude distortion in the two-channel setup, with synthesis filters typically chosen as F_0(z) = H_1(-z) and F_1(z) = -H_0(-z) to achieve F_0(z) H_0(z) + F_1(z) H_1(z) = 2z^{-l}. The resulting structure forms an orthogonal or biorthogonal basis, enabling exact signal recovery.

Prominent examples include the Haar wavelet, a simple two-tap filter bank with filters H_0(z) = 1 + z^{-1} (low-pass) and H_1(z) = 1 - z^{-1} (high-pass), providing compact support and perfect reconstruction through averaging and differencing operations. More advanced are Daubechies wavelets, which offer compact support of length 2N - 1 and N vanishing moments for N \geq 2, such as the N = 2 case with coefficients \{h(0) = (1+\sqrt{3})/(4\sqrt{2}), h(1) = (3+\sqrt{3})/(4\sqrt{2}), h(2) = (3-\sqrt{3})/(4\sqrt{2}), h(3) = (1-\sqrt{3})/(4\sqrt{2})\}, ensuring orthonormality and efficient approximation of smooth functions. These wavelets provide superior time-frequency localization compared to single-scale filters, making them efficient for non-stationary signals by isolating transients and edges through vanishing moments that annihilate polynomials up to degree N-1. In applications like image compression, Daubechies-based transforms enable high-fidelity reconstruction with reduced artifacts, as seen in JPEG2000, where the 9/7-tap filter (derived from the Daubechies construction) achieves better rate-distortion performance than DCT-based methods, supporting both lossy and lossless modes.

The synthesis equation of a two-channel wavelet filter bank is \hat{x}[n] = \sum_{k} h[n - 2k] \, y_0[k] + \sum_{k} g[n - 2k] \, y_1[k], where y_0 and y_1 are the low-pass and high-pass subband signals and h, g are the low-pass and high-pass synthesis filters; this is equivalent to upsampling each subband by 2 and convolving with the corresponding synthesis filter, restoring the original sampling rate. The formula guarantees perfect recovery under the QMF conditions.
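
A minimal numerical check of two-channel perfect reconstruction, using the orthonormal Haar pair (scaled by 1/\sqrt{2}; the alignment convention below is one of several equivalent choices):

```python
import numpy as np

h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass (average)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass (difference)

x = np.random.randn(64)                  # even-length test signal

# Analysis: filter, then downsample by 2.
y0 = np.convolve(x, h0)[1::2]
y1 = np.convolve(x, h1)[1::2]

# Synthesis: upsample by 2, then filter with the time-reversed filters.
u0 = np.zeros(2 * len(y0)); u0[::2] = y0
u1 = np.zeros(2 * len(y1)); u1[::2] = y1
x_hat = np.convolve(u0, h0[::-1]) + np.convolve(u1, h1[::-1])

print(np.max(np.abs(x_hat[:len(x)] - x)))   # ~1e-16: perfect reconstruction
```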

Multirate Filter Banks

In multirate signal processing, decimation reduces the sampling rate of a signal by an integer factor M by retaining every M-th sample after lowpass filtering to prevent aliasing, while interpolation increases the sampling rate by M by inserting M-1 zeros between samples followed by lowpass filtering to remove spectral images. The polyphase decomposition represents a filter H(z) as H(z) = \sum_{k=0}^{M-1} z^{-k} E_k(z^M), where E_k(z) are the polyphase components, enabling efficient implementation of decimators and interpolators by shifting computations to the lower rate and avoiding multiplications by zeros, thus achieving up to M-fold computational savings compared to direct implementation.

Multirate filter banks typically employ an analysis-synthesis structure, where an input signal is decomposed into subband signals using analysis filters followed by downsamplers, processed independently, and then reconstructed via upsamplers and synthesis filters. Perfect reconstruction (PR) in such banks requires two conditions: complete alias cancellation across subbands to eliminate distortions from downsampling, and a distortion-free overall transfer function (no amplitude or phase distortion) between input and output. These conditions are analyzed and ensured using polyphase representations of the filters, which transform the multirate system into an equivalent single-rate polyphase matrix whose invertibility guarantees PR. Filter banks are designed as either maximally decimated, where the total downsampling rate equals the number of channels for critical sampling and minimal redundancy, or oversampled, where the downsampling is less than the channel count to provide robustness against quantization errors and channel mismatches. A prominent example is the cosine-modulated filter bank (CMFB), where all analysis and synthesis filters are derived by cosine modulation of a single prototype lowpass filter, facilitating efficient polyphase implementation and near-perfect or exact reconstruction through optimization of the prototype for alias cancellation and minimal distortion. In two-channel banks, a specific solution for alias cancellation uses synthesis filters F_0(z) = H_1(-z) and F_1(z) = -H_0(-z), where H_0(z) and H_1(z) are the analysis filters, though additional constraints are needed for fully distortionless reconstruction.

Multirate filter banks find key applications in audio coding for compression, where early maximally decimated quadrature mirror filter (QMF) banks divided the spectrum into uniform subbands for differential encoding and quantization, serving as precursors to perceptual coders in standards like MPEG audio by exploiting psychoacoustic redundancies to achieve bit rates around 384 kbps with minimal audible distortion. In communications, transmultiplexers employ filter banks to multiplex multiple signals into a single composite channel for transmission and demultiplex them at the receiver, using perfect-reconstruction designs to minimize crosstalk and distortion, as demonstrated in orthogonal configurations for digital telephony and satellite systems.
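
The polyphase identity for interpolation is easy to verify numerically: filtering the zero-stuffed signal with H(z) equals interleaving the outputs of the L subfilters E_k(z) run at the low rate. A sketch (assumed L = 3 and an arbitrary firwin prototype):

```python
import numpy as np
from scipy import signal

L = 3
h = signal.firwin(45, 1.0 / L) * L       # anti-imaging lowpass, length a multiple of L
x = np.random.randn(128)                 # low-rate input

# Direct method: zero-stuff to the high rate, then filter.
xu = np.zeros(L * len(x)); xu[::L] = x
y_direct = signal.lfilter(h, 1.0, xu)

# Polyphase method: E_k(z) holds taps h[k], h[k+L], h[k+2L], ...; each subfilter
# runs at the low rate and its output fills every L-th high-rate slot.
y_poly = np.zeros(L * len(x))
for k in range(L):
    y_poly[k::L] = signal.lfilter(h[k::L], 1.0, x)

print(np.max(np.abs(y_direct - y_poly)))   # ~1e-16: identical outputs, ~1/L the multiplies
```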
