
Sample-rate conversion

Sample-rate conversion, also known as resampling or sampling-frequency conversion, is the process of changing the sampling rate of a discrete-time signal from an original rate F = 1/T to a new rate F' = 1/T', where T and T' are the respective sampling periods. This operation is fundamental in digital signal processing (DSP) and typically involves interpolation to increase the sampling rate or decimation to decrease it, often combined in rational ratios L/M (where L and M are integers) to achieve efficient conversion while preserving signal integrity. To prevent distortion from aliasing during decimation or imaging during interpolation, low-pass filtering is essential, ensuring the signal's frequency content remains within the Nyquist limits of the target rate.

The core techniques for sample-rate conversion rely on multirate structures, such as polyphase filters and multistage networks, which optimize computational efficiency by reducing the number of operations compared to single-stage implementations. For instance, in rational conversion by L/M, the process inserts L-1 zeros between samples for interpolation, applies a combined anti-imaging and anti-aliasing low-pass filter, and then retains every Mth sample (discarding the intervening samples) to avoid aliasing, with polyphase decomposition minimizing redundant computations. Finite impulse response (FIR) filters are commonly used for their linear-phase properties, though infinite impulse response (IIR) filters can offer further efficiency in specific cases. These methods enable high-quality conversion even for irrational ratios through adaptive or arbitrary resampling algorithms.

Sample-rate conversion plays a critical role in numerous applications, including audio processing for format compatibility (e.g., converting between 44.1 kHz and 48 kHz rates), digital communications for channel rate adaptation, and video processing in visual systems. In analog-to-digital (A/D) and digital-to-analog (D/A) conversion, it facilitates bandwidth-efficient signal handling by downsampling after oversampling and upsampling for transmission, reducing hardware demands and power consumption. Its importance has grown with the proliferation of multirate systems in telecommunications, audio engineering, and software-defined radio, where precise rate adjustments enhance performance without excessive computational overhead.

Fundamentals

Definition and Motivation

Sample-rate conversion is the process of changing the sampling rate of a discrete-time signal to obtain a new discrete-time representation of the underlying continuous-time signal, while preserving as much of the original information as possible. This technique is fundamental in digital signal processing (DSP), where signals are represented as sequences of samples taken at regular intervals, and altering the rate allows adaptation to different system requirements without significant loss of fidelity.

The primary motivation for sample-rate conversion stems from the need for compatibility across diverse digital systems that operate at varying sampling frequencies. For instance, consumer audio content recorded at 44.1 kHz for compact discs must often be converted to 48 kHz for professional broadcasting or video workflows to ensure seamless integration. Additionally, it enables bandwidth efficiency by downsampling signals for storage or transmission over limited channels, reducing data volume while maintaining perceptual quality, and by upsampling to match hardware constraints or enhance processing resolution in applications like oversampled data conversion. These conversions are essential in multirate systems where mismatched rates could otherwise lead to inefficiencies or errors.

Historically, sample-rate conversion emerged in the 1970s alongside the rise of digital audio and digital signal processing, driven by the development of early digital recording and transmission standards that required handling multiple sampling rates. Pioneering work in this area, such as the foundational analyses of decimation and interpolation techniques, laid the groundwork for efficient multirate architectures in the late 1970s and early 1980s.

At a high level, the process involves interpolation to increase the sampling rate by inserting new samples, or decimation to decrease it by selectively removing samples, invariably accompanied by low-pass filtering to mitigate aliasing during downsampling or imaging during upsampling. This aligns with the Nyquist-Shannon sampling theorem, which establishes the minimum rate needed to faithfully represent a signal's bandwidth.

Nyquist-Shannon Theorem and Aliasing Risks

The Nyquist-Shannon sampling theorem establishes the fundamental limit for accurately capturing a continuous-time signal in the discrete-time domain. It states that a bandlimited continuous-time signal with maximum frequency component B (in hertz) can be perfectly reconstructed from its uniformly spaced samples if the sampling rate f_s satisfies f_s \geq 2B. This condition ensures that the discrete-time representation contains all necessary information to recover the original analog signal without loss, as derived from the theory of bandlimited functions and Fourier analysis. The threshold 2B represents the minimum sampling rate required, known as the Nyquist rate, beyond which higher frequencies cannot be distinguished from lower ones in the sampled sequence.

A key consequence of violating this theorem is aliasing, a distortion in which frequency components above f_s/2—the highest frequency representable without overlap—fold back into the lower band, masquerading as false lower-frequency signals. This phenomenon arises because sampling creates periodic replicas of the signal's spectrum centered at multiples of f_s, leading to overlap if the signal is not properly bandlimited. The aliased frequency f_\text{alias} for an original frequency f > f_s/2 is given by f_\text{alias} = \left| f - k f_s \right|, where k is the integer that minimizes the magnitude, mapping f_\text{alias} into the range [0, f_s/2). The Nyquist frequency f_s/2 thus defines the critical bandwidth: any signal energy exceeding this limit risks irreversible distortion upon sampling or processing.

In the context of sample-rate conversion, these principles dictate necessary precautions to preserve fidelity. Downsampling, which reduces the sampling rate, amplifies aliasing risks as the effective Nyquist frequency decreases, potentially causing high-frequency components to fold into the audible or relevant band; low-pass filtering below the new f_s/2 is required to attenuate such components to negligible levels. Conversely, upsampling increases the sampling rate and thereby raises the Nyquist frequency, avoiding the introduction of new aliased artifacts from existing signal content, though it may generate imaging artifacts—spectral replicas at higher frequencies—that require separate filtering to suppress. Adhering to the theorem ensures that rate changes maintain the signal's integrity within its original bandwidth B.
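
The folding formula can be checked numerically. A minimal sketch in Python (the helper name alias_frequency is ours, not from any library):

```python
def alias_frequency(f, fs):
    """Fold a tone at f hertz into [0, fs/2] after sampling at fs hertz,
    per f_alias = |f - k*fs| with k the nearest integer multiple of fs."""
    k = round(f / fs)
    return abs(f - k * fs)

# A 6 kHz tone sampled at 8 kHz (Nyquist frequency 4 kHz) aliases to 2 kHz.
print(alias_frequency(6000, 8000))  # -> 2000
```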

Core Techniques

Upsampling

Upsampling increases the sampling rate of a discrete-time signal by an integer factor L, typically by inserting L-1 zero-valued samples between each original sample, which effectively multiplies the original sampling rate f_s by L. This process, known as zero-insertion or zero-stuffing, prepares the signal for interpolation filtering while avoiding the introduction of new information beyond the original bandwidth. For an input signal x[n], the upsampled signal y[m] is generated such that the original samples are preserved at multiples of L, with zeros inserted elsewhere:

y[m] = \begin{cases} x\left[\frac{m}{L}\right] & \text{if } m \bmod L = 0 \\ 0 & \text{otherwise} \end{cases}

This operation compresses the spectrum of the original signal along the normalized frequency axis, repeating it L times up to the new Nyquist frequency L f_s / 2. The zero-insertion creates unwanted spectral images—replicas of the baseband spectrum centered at integer multiples of the original rate f_s—which can distort the signal if left unaddressed. To mitigate these imaging artifacts, an anti-imaging low-pass filter is applied immediately after zero insertion, with its cutoff frequency set at the original Nyquist frequency f_s / 2 to retain only the desired baseband spectrum while attenuating the images. In practice, upsampling is often used in audio processing to convert low-rate signals, such as telephone-quality audio at 8 kHz, to higher rates like 44.1 kHz for improved compatibility in digital systems without exceeding the original frequency content.
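
Zero-stuffing followed by anti-imaging filtering can be sketched with NumPy and SciPy; the function upsample below is an illustrative helper, with the tap count chosen only for demonstration:

```python
import numpy as np
from scipy import signal

def upsample(x, L, num_taps=101):
    """Upsample by L: insert L-1 zeros between samples, then low-pass filter."""
    y = np.zeros(len(x) * L)
    y[::L] = x                            # original samples at multiples of L
    # Anti-imaging FIR: cutoff at the original Nyquist frequency (pi/L at the
    # new rate); a gain of L compensates for the energy lost to zero insertion.
    h = L * signal.firwin(num_taps, 1.0 / L)
    return signal.lfilter(h, 1.0, y)

fs = 8_000
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t)         # 1 kHz tone sampled at 8 kHz
y = upsample(x, L=6)                      # same tone represented at 48 kHz
```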

Downsampling

Downsampling, also known as decimation, reduces the sampling rate of a discrete-time signal by an integer factor M > 1, effectively dividing the original rate f_s by M. This process involves applying a low-pass filter to the input signal x[n] to produce a filtered version z[n], followed by discarding M-1 out of every M samples to yield the output y[m] = z[mM]. The filtering step is essential to prevent spectral aliasing that would otherwise distort the signal.

The operation can be expressed mathematically as y[m] = z[mM], where z[n] is the low-pass filtered version of x[n] with a cutoff frequency of \pi / M radians per sample. In the frequency domain, unfiltered decimation stretches the spectrum by a factor of M and overlays M shifted replicas, causing high-frequency components to fold into the baseband as aliases. The anti-aliasing filter attenuates frequencies above the new Nyquist limit f_s / (2M), ensuring the output remains undistorted within |\omega| < \pi / M. Ideally, this filter has unit gain in the passband.

Without the anti-aliasing filter, energy from frequencies exceeding \pi / M aliases into the lower band, potentially introducing audible artifacts or data loss in applications like audio processing. For instance, downsampling 48 kHz audio—typical for professional production—to 8 kHz telephony standards (a factor of M = 6) requires filtering to attenuate components above 4 kHz, enabling bandwidth savings while preserving speech intelligibility. This technique, foundational to multirate signal processing, contrasts with upsampling by addressing aliasing rather than imaging, though the two operations are inverses in ideal bandlimited scenarios.
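
A corresponding decimation sketch under the same SciPy conventions (in practice, scipy.signal.decimate wraps this filter-then-discard pattern):

```python
import numpy as np
from scipy import signal

def downsample(x, M, num_taps=101):
    """Decimate by M: anti-aliasing low-pass filter, then keep every Mth sample."""
    h = signal.firwin(num_taps, 1.0 / M)  # cutoff pi/M, i.e. the new Nyquist limit
    z = signal.lfilter(h, 1.0, x)         # z is the filtered version of x
    return z[::M]                         # y[m] = z[mM]

fs = 48_000
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * 3_000 * t)         # 3 kHz tone, below the new 4 kHz limit
y = downsample(x, M=6)                    # 48 kHz -> 8 kHz
```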

Rational-Factor Resampling

Rational-factor resampling refers to the process of converting a digital signal's sampling rate by a rational factor L/M, where L and M are coprime positive integers, resulting in a new sampling rate that is L/M times the original. This method integrates upsampling by the integer factor L and downsampling by the integer factor M to achieve arbitrary rational rate changes without requiring irrational computations.

The process begins with upsampling, where L-1 zeros are inserted between each original sample to increase the rate by L, followed by lowpass filtering to suppress the spectral images introduced by zero insertion. This is then followed by downsampling, which involves lowpass filtering to prevent aliasing and decimation by retaining every M-th sample. Because both lowpass stages operate at the same intermediate rate, they combine into a single filter with cutoff frequency \min(\pi/L, \pi/M) in the normalized frequency scale at the intermediate sampling rate, corresponding physically to \min(f_s/2, f_s'/2), where f_s is the original sampling rate and f_s' is the new rate. The output signal y[n] is thus obtained via bandlimited interpolation, computed as y[n] = \sum_k x[k] \, h[nM - kL], where x[k] are the input samples and h[\cdot] is the impulse response of the lowpass filter designed at the intermediate rate L f_s.

Efficiency in rational-factor resampling stems from implementations that compute only the necessary output samples directly, bypassing the storage and processing of the full intermediate signal at rate L f_s, which would otherwise multiply computational demands by L. For example, converting audio from 44.1 kHz to 48 kHz employs L=160 and M=147, since 44.1 \times 160 = 48 \times 147 = 7056, giving a common intermediate rate of 7.056 MHz and enabling exact rational conversion with far fewer operations than naive processing at that rate. A key challenge arises when the desired rate ratio is irrational or drifts over time (as with mismatched hardware clocks), necessitating a close rational approximation to minimize error; the approximation error decreases with higher L and M, but at the cost of longer filters and more computation.
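
SciPy's resample_poly implements this efficient polyphase form of rational resampling; a short usage sketch for the 44.1 kHz to 48 kHz case:

```python
import numpy as np
from scipy import signal

fs_in = 44_100
t = np.arange(0, 0.1, 1 / fs_in)
x = np.sin(2 * np.pi * 1_000 * t)   # 1 kHz tone at 44.1 kHz

# Upsample by L=160, filter once with a combined anti-imaging/anti-aliasing
# FIR, and downsample by M=147 -- without materializing the 7.056 MHz signal.
y = signal.resample_poly(x, up=160, down=147)

print(len(x), len(y))               # output length scales by ~160/147
```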

Advanced Algorithms

Interpolation Methods

Interpolation methods are essential for estimating intermediate sample values when increasing the sampling rate in sample-rate conversion, enabling the reconstruction of a continuous-time signal from discrete samples before resampling at the higher rate. These methods vary in complexity and accuracy, balancing computational demands with the fidelity of the reconstructed signal. The choice depends on the application, with simpler techniques suiting real-time constraints and more advanced ones prioritizing quality in offline processing.

The simplest approach is nearest-neighbor interpolation, also known as zero-order hold, where each new sample is assigned the value of the closest original sample. Mathematically, for an output sample index n, the time position t = n \cdot (f_s^{\text{old}} / f_s^{\text{new}}) in input-sample units is computed, and the output is y[n] = x[\operatorname{round}(t)], with x[\cdot] denoting the input samples. This method incurs almost no computational cost beyond the index calculation, making it ideal for resource-limited systems. However, it suffers from strong imaging artifacts and severe waveform distortion due to the lack of smoothing between samples.

A step up in sophistication is linear interpolation, which connects adjacent original samples with straight lines to estimate new values. The formula is y[n] = x[\lfloor t \rfloor] + (t - \lfloor t \rfloor) \cdot (x[\lceil t \rceil] - x[\lfloor t \rfloor]), where \lfloor \cdot \rfloor and \lceil \cdot \rceil are the floor and ceiling functions, respectively. This requires only a few arithmetic operations per sample, offering low complexity suitable for many embedded applications. While it provides smoother transitions than nearest-neighbor, linear interpolation attenuates higher frequencies and leaves residual images, leading to reduced fidelity for bandlimited signals.

The theoretically ideal method is sinc interpolation, derived from the Nyquist-Shannon sampling theorem, which enables perfect reconstruction of a bandlimited signal. The continuous-time reconstruction is given by y(t) = \sum_{k=-\infty}^{\infty} x[k] \cdot \operatorname{sinc}\left( \frac{t - k T_s}{T_s} \right), where \operatorname{sinc}(u) = \sin(\pi u)/(\pi u) is the normalized sinc function and T_s = 1/f_s^{\text{old}} is the original sampling period. Discrete samples at the new rate are obtained by evaluating this sum at the corresponding times. This approach eliminates aliasing and imaging for signals bandlimited below the Nyquist frequency but demands infinite computation due to the sinc function's infinite extent, rendering it impractical without truncation and windowing approximations.

In practice, these methods trade off computational cost against reconstruction fidelity. Nearest-neighbor offers negligible overhead but poor quality with prominent artifacts, while linear interpolation improves smoothness at modest cost yet compromises on spectral accuracy. Sinc interpolation sets the fidelity benchmark, serving as the basis for practical finite impulse response (FIR) filters that approximate its ideal response through truncation, though at significantly higher complexity. Seminal analyses highlight that optimal designs favor sinc-based approximations for high-quality applications, such as professional audio mastering, where distortion must be minimized.
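
The two low-cost methods can be written out directly; the helpers below are illustrative (the names and the clipping at the signal edges are our choices):

```python
import numpy as np

def resample_nearest(x, fs_old, fs_new, n_out):
    """Nearest-neighbor: pick the closest input sample for each output time."""
    t = np.arange(n_out) * (fs_old / fs_new)   # positions in input-sample units
    idx = np.clip(np.round(t).astype(int), 0, len(x) - 1)
    return x[idx]

def resample_linear(x, fs_old, fs_new, n_out):
    """Linear: blend the two neighboring input samples by the fractional offset."""
    t = np.arange(n_out) * (fs_old / fs_new)
    i0 = np.clip(np.floor(t).astype(int), 0, len(x) - 2)
    frac = t - i0
    return x[i0] + frac * (x[i0 + 1] - x[i0])

x = np.sin(2 * np.pi * 440 * np.arange(8_000) / 8_000)  # 1 s of 440 Hz at 8 kHz
n_out = int(len(x) * 44_100 / 8_000)
y_nn = resample_nearest(x, 8_000, 44_100, n_out)
y_lin = resample_linear(x, 8_000, 44_100, n_out)
```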

Polyphase Filter Structures

Polyphase filter structures exploit the mathematical properties of multirate systems to implement sample-rate conversion with significantly reduced computational complexity. In polyphase decomposition, a prototype filter h[n] is partitioned into L sub-filters (for upsampling by integer factor L) or M sub-filters (for downsampling by integer factor M), where each sub-filter operates at the original sampling rate. This breakdown transforms the full-rate filtering operation into parallel branches, each handling a decimated version of the input signal. The z-transform representation of the filter is expressed as H(z) = \sum_{k=0}^{L-1} z^{-k} E_k(z^L), where E_k(z) are the polyphase components given by E_k(z) = \sum_{n} h[nL + k] z^{-n}, which rearranges the filter into contributions from the polyphase branches, enabling efficient computation without explicitly inserting zeros.

A key enabler of this efficiency is the pair of noble identities, which permit commuting a filter with a rate-change operator under certain conditions, such as when the filter is expressed in polyphase form. For interpolation, the identity allows the polyphase sub-filters to precede the upsampler, performing operations at the lower input rate; similarly, for downsampling, the sub-filters follow the downsampler. This commutativity ensures that filtering occurs at the lower sampling rate, avoiding unnecessary computations on zero-valued samples in interpolation and redundant processing of samples that would be discarded in decimation. The resulting structure reduces complexity from O(N) operations per output sample in direct implementation (where N is the filter length) to approximately O(N/L), achieving a speedup of roughly L while maintaining the same frequency response. Polyphase implementations are particularly advantageous for long filters, as the cost scales linearly with filter length but benefits multiplicatively from the decomposition.

For rational resampling by factor L/M, where L and M are coprime integers, the polyphase structure combines an interpolator followed by a decimator into a single time-multiplexed architecture using a commutator. The input signal is fed into \max(L, M) polyphase branches of the anti-imaging/anti-aliasing filter, with a commutator selecting outputs at the desired rate, effectively interleaving the sub-filter responses. This unified design minimizes intermediate sample rates and storage, making it suitable for hardware-constrained environments. For instance, in real-time audio sample-rate conversion, a polyphase sinc filter—derived from the ideal low-pass prototype—reduces multiplications by a factor approximately equal to L or M, enabling low-latency processing for conversions like 44.1 kHz to 48 kHz without perceptible artifacts.

Modern variants extend polyphase structures to adaptive scenarios, particularly in software-defined radio (SDR), where variable or time-varying rates are common. Adaptive polyphase filters dynamically select or interpolate filter phases based on the instantaneous rate ratio, using a fractional delay mechanism to handle non-integer shifts. This approach supports seamless rate adjustments for diverse modulation schemes and channel conditions, with complexity remaining proportional to the filter length rather than the rate variation. Such implementations have been demonstrated to achieve high efficiency in SDR terminals, balancing quality and resource use for applications like multi-standard wireless communication.
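
A minimal polyphase interpolator illustrates the decomposition; polyphase_interp is a hypothetical helper, and the tap count is chosen as a multiple of L so the reshape into branches is exact:

```python
import numpy as np
from scipy import signal

def polyphase_interp(x, L, num_taps=96):
    """Interpolate by L with polyphase branches: every multiplication runs at
    the low input rate, and no work is spent on inserted zeros."""
    h = L * signal.firwin(num_taps, 1.0 / L)  # prototype anti-imaging filter
    E = h.reshape(-1, L).T                    # row k holds E_k: taps h[nL + k]
    # All L branches filter the same low-rate input; a commutator then
    # interleaves their outputs: y[mL + k] = (branch k output)[m].
    branches = [signal.lfilter(E[k], 1.0, x) for k in range(L)]
    return np.column_stack(branches).ravel()

x = np.sin(2 * np.pi * 0.05 * np.arange(200))  # low-rate input
y = polyphase_interp(x, L=4)
```

The result matches direct zero-insertion followed by filtering with h, but uses roughly 1/L of the multiplications per output sample.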

Applications

Audio Systems

In audio systems, sample-rate conversion is essential for compatibility across diverse standards and devices. The compact disc (CD) format employs a sample rate of 44.1 kHz, while professional video and broadcast audio typically use 48 kHz, and high-resolution audio often utilizes 96 kHz to capture extended frequency ranges. A common conversion in mixing workflows involves resampling from 44.1 kHz to 48 kHz to align consumer and professional formats during post-production. Digital audio workstations (DAWs) frequently apply sample-rate conversion during export, often combined with dithering to minimize quantization noise when reducing bit depth alongside rate changes.

Streaming services perform conversions to adapt high-resolution source material to device-specific playback rates, ensuring seamless delivery across varied hardware. In vinyl-to-digital transfers, analog signals are digitized at rates like 48 kHz or higher, with subsequent conversion to standard rates such as 44.1 kHz for archiving or distribution. A key application is sample-rate conversion during lossy encoding, where lowering the rate from 48 kHz to 44.1 kHz can help reduce bitrate demands while preserving perceptual quality through efficient compression.

Asynchronous sample-rate conversion (ASRC) addresses clock drift in playback devices, dynamically adjusting rates between mismatched clocks in sources and receivers to prevent buffer overflows or underruns. The Audio Engineering Society (AES) provides guidelines emphasizing 48 kHz as a preferred rate for professional interchange to limit accumulated conversion artifacts in audio chains, recommending high-quality converters that maintain transparency during rate changes. Rational resampling techniques are commonly employed for these non-integer rate ratios in audio systems.

Video and Multimedia

Sample-rate conversion in video and multimedia involves adjusting frame rates and pixel rates to accommodate diverse standards across film, broadcast television, and digital platforms. Traditional film is captured at 24 frames per second (fps), while broadcast standards vary: NTSC regions use approximately 29.97 or 59.94 fps for interlaced or progressive video, PAL regions employ 25 or 50 fps, and high-definition television (HDTV) often requires conversions such as from 24 fps to 60 fps to ensure compatibility with displays. These adjustments prevent temporal artifacts and maintain visual fluidity during distribution.

Temporal resampling addresses frame rate changes by interpolating or decimating frames to match target rates, while spatial resampling handles pixel rate adjustments during video resizing, such as scaling resolutions from standard-definition to high-definition formats. Motion-compensated interpolation enhances these processes by estimating object motion across frames to generate intermediate ones, reducing judder in conversions like pulldown sequences. For instance, in slow-motion effects, frame rate up-conversion via frame interpolation inserts additional frames to extend playback duration without introducing artifacts such as judder.

A prominent example is the 3:2 pulldown technique, which converts 24 fps film to 29.97 fps video by repeating fields in a 3:2 pattern over five fields per pair of film frames, ensuring smooth transfer while preserving motion integrity in film-to-video workflows; a sketch of the field cadence follows below. In codecs like H.264/AVC, sample-rate conversion supports adaptive streaming by enabling frame-rate and resolution adjustments during encoding, allowing content to adapt to varying bandwidths and playback devices while maintaining synchronization. Multimedia integration, such as in Blu-ray authoring, demands simultaneous sample-rate conversion for audio and video to uphold lip-sync, where video frame rates (e.g., 24 fps) must align with audio sampling rates (e.g., 48 kHz) through precise temporal processing to avoid drift in fractional-rate environments. This ensures seamless playback across hybrid media, with standards emphasizing minimal latency in conversion to preserve perceptual quality.
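
The 3:2 field cadence can be sketched in a few lines; the function name and the particular field ordering below are illustrative, since real encoders vary the cadence:

```python
def pulldown_32(frames):
    """3:2 pulldown: map pairs of 24 fps film frames to five interlaced fields
    (frame A contributes three fields, frame B two)."""
    fields = []
    for i in range(0, len(frames) - 1, 2):
        a, b = frames[i], frames[i + 1]
        fields += [(a, "top"), (a, "bottom"), (a, "top"),
                   (b, "bottom"), (b, "top")]
    return fields

# Four film frames become ten fields, i.e. five interlaced video frames.
print(pulldown_32(["A", "B", "C", "D"]))
```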

Performance Considerations

Artifacts and Quality Metrics

Sample-rate conversion can introduce various artifacts that degrade the fidelity of the reconstructed signal, primarily due to imperfect filtering or approximation of the ideal reconstruction process. Aliasing manifests as unwanted "ghost frequencies" that fold into the baseband when downsampling without adequate low-pass filtering, violating the Nyquist criterion and creating audible or visible distortions in audio and video signals. Imaging occurs during upsampling as high-frequency echoes or replicas of the original spectrum appear above the original Nyquist frequency, often resulting from insufficient anti-imaging filters that fail to attenuate these spectral images. Timing jitter arises from poor filtering implementations, introducing irregularities that manifest as noise or modulation artifacts, particularly in asynchronous systems where filter delays vary. Phase distortion is common in nonlinear-phase methods, such as IIR implementations, where group delay variations across frequencies lead to temporal smearing, especially noticeable in transient signals like percussive audio.

To quantify the quality of sample-rate conversion, several metrics are employed to assess deviations from an ideal reconstruction, often using sinc interpolation as a reference benchmark that theoretically minimizes such artifacts. The signal-to-noise ratio (SNR) measures the power ratio of the desired signal to the noise introduced by conversion errors, calculated as:

\text{SNR} = 10 \log_{10} \left( \frac{P_{\text{signal}}}{P_{\text{noise}}} \right)

where P_{\text{noise}} encompasses quantization, aliasing, and imaging contributions post-conversion. Mean squared error (MSE) evaluates the average squared difference between the converted signal and an ideal bandlimited reconstruction, providing a simple objective measure of overall distortion. For audio applications, perceptual evaluation models such as PEAQ (Perceptual Evaluation of Audio Quality) incorporate human auditory models to predict subjective quality, accounting for masking effects and frequency selectivity beyond raw error metrics.

Evaluation of conversion quality often emphasizes frequency-domain characteristics, with high-quality systems requiring a flat frequency response and minimal passband ripple, typically less than 0.1 dB, to preserve spectral integrity without introducing coloration or attenuation variations. These metrics collectively ensure that artifacts remain below perceptible thresholds, with SNR values exceeding 90 dB considered professional-grade for critical listening environments.
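
Both objective metrics are one-liners given a reference; a minimal sketch, assuming an ideal (e.g., sinc-interpolated) reference signal is available for comparison:

```python
import numpy as np

def mse(reference, converted):
    """Mean squared error against an ideal bandlimited reconstruction."""
    return np.mean((reference - converted) ** 2)

def snr_db(reference, converted):
    """SNR in dB: signal power over the power of the conversion error."""
    error = reference - converted
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(error ** 2))

ref = np.sin(2 * np.pi * 0.01 * np.arange(1_000))
test = ref + 1e-5 * np.random.randn(1_000)   # stand-in for a converter's output
print(f"MSE = {mse(ref, test):.2e}, SNR = {snr_db(ref, test):.1f} dB")
```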

Optimization and Hardware Implementation

Software implementations of sample-rate conversion (SRC) prioritize computational efficiency and audio fidelity, with libraries such as libsamplerate, also known as Secret Rabbit Code, providing high-quality conversion for arbitrary and time-varying ratios using polyphase filtering techniques. Developed by Erik de Castro Lopo, this open-source library supports multiple conversion qualities, from linear interpolation for low CPU usage to sinc-based methods for near-theoretical performance, making it suitable for real-time audio processing in applications like music production software.

For handling arbitrary ratios beyond rational factors, FFT-based methods offer a frequency-domain approach that resamples signals by modifying the spectral content of a large buffer before inverse transformation. These "giant FFT" techniques enable efficient non-integer conversions with reduced distortion through phase-adjusted spectral scaling, achieving up to 10-20 times faster processing than time-domain equivalents for long signals. Such methods are integrated into tools like FFmpeg for offline conversion of audio files.

In hardware, asynchronous sample-rate conversion (ASRC) chips are widely used in digital-to-analog converters (DACs) to match disparate input and output clock rates in digital audio systems, preventing buffer overflows or underruns in systems like S/PDIF interfaces. Devices such as the Cirrus Logic CS8420 employ polyphase filters to achieve this synchronization with minimal latency, supporting input rates from 8 kHz to 108 kHz while maintaining high audio quality. Similarly, FPGA-based implementations leverage polyphase structures for low-latency conversion, as seen in Intona's IP cores, which utilize reconfigurable logic to process up to 230 kHz audio with under 1 ms delay and reduced resource utilization compared to software equivalents.

Optimizations for dynamic environments include variable-rate SRC algorithms that adapt to fluctuating input rates, essential for adaptive streaming in network audio systems where bandwidth varies. The XMOS SRC library, for instance, supports asynchronous modes that track rate drift in real time using phase-locked loops, enabling seamless playback of variable-rate sources without audible glitches. Hybrid analog-digital approaches further enhance performance by combining digital resampling with analog filters in DAC pipelines, minimizing distortion in high-fidelity playback; Benchmark Media's DAC2 exemplifies this by integrating post-digital analog processing to achieve distortion below -120 dB.

As of 2025, recent advances incorporate AI-assisted filter design to optimize resampling for neural audio processing, where multirate filtering is critical for sample-rate-independent recurrent neural networks (RNNs). Work by Carson et al. demonstrates two-stage resampling filters—combining half-band IIR and Kaiser-window FIR designs—trained via neural optimization to reduce computational overhead in audio effect RNNs, enabling operation at varying rates with up to 30% lower error in generated signals. These methods, published in IEEE Transactions on Audio, Speech, and Language Processing, facilitate efficient integration of sample-rate conversion in AI-driven tools for music generation and audio effects.
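
For the FFT-based approach, SciPy's resample offers a readily available frequency-domain implementation in the same spirit as the giant-FFT method, though not identical to any particular library's algorithm:

```python
import numpy as np
from scipy import signal

fs_in, fs_out = 48_000, 44_100
x = np.sin(2 * np.pi * 1_000 * np.arange(fs_in) / fs_in)  # 1 s of a 1 kHz tone

# signal.resample transforms the whole buffer with an FFT, truncates or
# zero-pads the spectrum to the new length, and inverse-transforms.
n_out = int(len(x) * fs_out / fs_in)
y = signal.resample(x, n_out)
```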
