Signal-to-quantization-noise ratio
The signal-to-quantization-noise ratio (SQNR) is a fundamental metric in digital signal processing that quantifies the quality of a digitized signal by measuring the ratio of the original signal power to the power of the noise introduced by quantization, typically expressed in decibels (dB).[1] This ratio assesses the distortion caused when continuous analog signals are approximated by discrete levels in processes such as analog-to-digital conversion.[2] SQNR is particularly important for evaluating the performance of uniform quantizers, where higher values indicate better fidelity and lower perceptual or functional degradation in applications such as audio encoding and data transmission.[1]

Quantization noise originates from the rounding or truncation errors inherent in mapping infinite-precision analog values to a finite set of digital levels, and is modeled as additive white noise uniformly distributed across each quantization interval.[1] For an n-bit quantizer with step size \Delta (one least significant bit, LSB), the quantization noise power is P_n = \Delta^2 / 12, assuming the error is equally likely to fall anywhere between -\Delta/2 and +\Delta/2.[3] This assumption holds under ideal conditions where the input signal spans the full dynamic range and the noise is uncorrelated with the signal, spreading uniformly over the Nyquist bandwidth from DC to half the sampling frequency.[1] The SQNR is calculated as \text{SQNR} = 10 \log_{10} (P_s / P_n), where P_s is the signal power; for a full-scale sinusoidal input to an ideal n-bit ADC, this simplifies to approximately 6.02n + 1.76 dB.[1] Each additional bit of resolution improves SQNR by roughly 6 dB, reflecting a doubling of the number of levels and a halving of the step size.[3] Factors such as signal crest factor, oversampling, and dithering can modify this value; for instance, oversampling beyond the Nyquist rate provides process gain, boosting SQNR by 10 \log_{10} (f_s / (2 \cdot \text{BW})) dB, where f_s is the sampling frequency and BW is the signal bandwidth.[1]

In practical digital signal processing applications, SQNR guides the design of systems such as pulse-code modulation (PCM) for audio, where it ensures acceptable fidelity (e.g., 16-bit audio targets around 98 dB), and in certain communications systems, where values as low as 36 dB may suffice under power constraints.[2] It differs from the general signal-to-noise ratio (SNR) by focusing exclusively on quantization as the noise source, though the terms are often used interchangeably when other noise sources are negligible.[4] Achieving high SQNR involves balancing resolution, dynamic range, and computational efficiency, with real-world deviations from the ideal formulas arising from non-uniform signals or converter nonlinearities.[1]
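To make these relationships concrete, the short Python sketch below (illustrative only; the bit depth, sampling rate, and bandwidth are arbitrary example values, not taken from the cited sources) evaluates the ideal full-scale-sine SQNR together with the process gain obtained from oversampling.

```python
import math

def ideal_sqnr_db(n_bits: int) -> float:
    """Ideal SQNR of a full-scale sine through an n-bit uniform quantizer."""
    return 6.02 * n_bits + 1.76

def process_gain_db(f_s: float, bandwidth: float) -> float:
    """Process gain from oversampling: quantization noise spreads over f_s/2,
    but only the fraction falling inside the signal bandwidth matters."""
    return 10 * math.log10(f_s / (2 * bandwidth))

# Example values: 16-bit converter, 192 kHz sampling, 20 kHz audio bandwidth.
n, fs, bw = 16, 192_000, 20_000
print(f"Ideal SQNR:   {ideal_sqnr_db(n):.1f} dB")                      # ~98.1 dB
print(f"Process gain: {process_gain_db(fs, bw):.1f} dB")               # ~6.8 dB
print(f"Combined:     {ideal_sqnr_db(n) + process_gain_db(fs, bw):.1f} dB")
```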
Fundamentals
Definition
The signal-to-quantization-noise ratio (SQNR) is a fundamental metric in digital signal processing that quantifies the ratio of the power of a desired signal to the power of the noise introduced specifically by the quantization process during analog-to-digital conversion, typically expressed in decibels (dB).[5] This measure evaluates how effectively a continuous analog signal is approximated by discrete digital levels, where higher SQNR values indicate better preservation of the original signal's integrity.[6] Unlike the general signal-to-noise ratio (SNR), which encompasses all sources of noise in a system, SQNR isolates the distortion arising solely from quantization errors, making it a targeted indicator of digitization quality in applications such as audio encoding and sensor data acquisition.[5]

Conceptually, SQNR captures the fidelity loss when a continuous signal amplitude is mapped to a finite set of discrete quantization levels: the error between the original and quantized values manifests as additive noise that degrades signal accuracy.[6] This noise arises because quantization rounds signal values to the nearest representable level, introducing inaccuracies proportional to the step size between levels.[5] For instance, in a simple 3-bit uniform quantizer with 8 discrete levels spanning a signal range of -1 to 1 (yielding a step size of 0.25), the SQNR for a full-scale sinusoidal input is approximately 20 dB, a level of quantization-induced degradation that is acceptable for illustration but insufficient for high-fidelity applications.[5]
Quantization process
In uniform quantization, the continuous amplitude range of an analog signal, typically from a minimum value V_{\min} to a maximum value V_{\max}, is divided into 2^n discrete quantization levels, where n is the number of bits used for representation.[7] The step size \Delta between adjacent levels is given by \Delta = \frac{V_{\max} - V_{\min}}{2^n}.[8] This partitioning allows the quantizer to map any input value within the range to one of these finite levels, effectively approximating the original signal with a discrete set of values.

The quantization process assigns the input signal value x to a quantization level through either rounding or truncation.[9] In rounding, the value is mapped to the nearest level, so the input is shifted by at most \Delta/2; truncation, by contrast, simply discards the remainder and maps the value down to the level boundary below it.[7] Rounding is generally preferred in signal processing applications because it centers the error distribution around zero, reducing bias in the quantized output.[10] The quantization error e, defined as the difference between the original signal x and its quantized version Q(x), is e = x - Q(x).[11] For uniform quantization with rounding, this error is bounded by -\Delta/2 \leq e \leq \Delta/2, ensuring the maximum deviation does not exceed half the step size.[8] The signal-to-quantization-noise ratio (SQNR) serves as a key metric for assessing the impact of this error on overall signal fidelity, as defined in the preceding section.

To illustrate, consider a simple 2-bit uniform quantizer with V_{\min} = -1 and V_{\max} = 1, yielding \Delta = 0.5 and four reconstruction levels: -0.75, -0.25, 0.25, and 0.75. The following table shows sample input values and their quantized outputs, along with the resulting errors; a short code sketch after the table reproduces this mapping.

| Input x | Quantized Q(x) | Error e = x - Q(x) |
|---|---|---|
| -0.9 | -0.75 | -0.15 |
| -0.4 | -0.25 | -0.15 |
| 0.0 | 0.25 | -0.25 |
| 0.6 | 0.75 | -0.15 |
| 1.0 | 0.75 | 0.25 |
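A minimal Python sketch of this mapping, assuming a mid-rise uniform quantizer with round-to-nearest (ties taken upward, matching the table's treatment of x = 0.0), reproduces the values above; the function name and interface are illustrative, not a standard API.

```python
import numpy as np

def uniform_quantize(x, n_bits=2, v_min=-1.0, v_max=1.0):
    """Mid-rise uniform quantizer with rounding.

    Maps each input to the nearest of 2**n_bits reconstruction levels,
    clipping inputs that fall outside the no-overload range.
    """
    delta = (v_max - v_min) / 2 ** n_bits                        # step size
    q = v_min + delta * (np.floor((x - v_min) / delta) + 0.5)    # nearest level
    return np.clip(q, v_min + delta / 2, v_max - delta / 2)      # clip overloads

x = np.array([-0.9, -0.4, 0.0, 0.6, 1.0])
q = uniform_quantize(x)
for xi, qi in zip(x, q):
    print(f"x = {xi:5.2f}  Q(x) = {qi:5.2f}  e = {xi - qi:5.2f}")
```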
Mathematical formulation
Quantization noise model
In the quantization noise model, the quantization error is treated as an additive noise source superimposed on the original signal, where the error e is assumed to be a uniformly distributed random variable over the interval [-\Delta/2, \Delta/2], with \Delta denoting the quantization step size.[12][13] This assumption posits that the error is independent of the input signal and has zero mean, making it suitable for statistical analysis in signal processing.[14] The noise power, or variance \sigma_q^2, is derived from the uniform probability density function p(e) = 1/\Delta for |e| \leq \Delta/2. Specifically, \sigma_q^2 = \int_{-\Delta/2}^{\Delta/2} e^2 \cdot \frac{1}{\Delta} \, de = \frac{\Delta^2}{12}. This result follows from integrating the second moment of the uniform distribution and provides a foundational measure of quantization-induced distortion.[12][13][14]

The additive white noise approximation underlying this model holds under conditions such as high-resolution quantization (many levels, small \Delta), input signals that fully utilize the quantizer range without overload, and smooth input probability densities that ensure the error remains uncorrelated with the signal.[12] For memoryless uniform inputs spanning the quantizer's no-overload region, the model is exact, justifying its use in deriving signal-to-noise ratios.[13] However, the model has limitations, particularly for low-bit-depth quantizers where the number of levels is small, leading to non-uniform error distributions and correlated patterns not captured by the uniform assumption.[12] Additionally, in scenarios with signal-dependent errors or strong correlations, such as in feedback systems or non-stationary inputs, the independence assumption fails, resulting in deterministic rather than random noise behavior that degrades the approximation's accuracy.[13][14]
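The \Delta^2/12 result is easy to check numerically. The following sketch is a minimal Monte Carlo illustration, assuming the conditions of the model hold (an input spanning many quantization steps and round-to-nearest quantization); the step size and sample count are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.01                                    # quantization step size
x = rng.uniform(-1.0, 1.0, size=1_000_000)      # input spanning many steps
e = x - delta * np.round(x / delta)             # error of a rounding quantizer
print(f"empirical noise power: {np.mean(e**2):.3e}")   # close to 8.333e-06
print(f"delta**2 / 12:         {delta**2 / 12:.3e}")    # 8.333e-06
```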
SQNR calculation
The signal-to-quantization-noise ratio (SQNR) is defined as the ratio of the signal power P_s to the quantization noise power \sigma_q^2, expressed in decibels as \text{SQNR} = 10 \log_{10} \left( \frac{P_s}{\sigma_q^2} \right) \ \text{dB}. This general formula quantifies the fidelity of the quantized signal relative to the added noise, assuming the noise is uncorrelated with the signal.[14]

For a full-scale sinusoidal signal, the SQNR has a closed-form expression. Consider a sine wave with amplitude A spanning the full dynamic range of the quantizer, so that the full-scale range (FSR) is 2A. The signal power for this sinusoid is P_s = \frac{A^2}{2}. The quantization step size is \Delta = \frac{2A}{2^n} = \frac{\text{FSR}}{2^n}, where n is the number of bits. Under the uniform noise model, the quantization noise variance is \sigma_q^2 = \frac{\Delta^2}{12} = \frac{(2A / 2^n)^2}{12} = \frac{A^2}{3 \cdot 4^n}. Substituting these into the general SQNR formula yields \text{SQNR} = 10 \log_{10} \left( \frac{A^2 / 2}{A^2 / (3 \cdot 4^n)} \right) = 10 \log_{10} \left( \frac{3 \cdot 4^n}{2} \right) = 10 \log_{10} (1.5) + 10 \log_{10} (4^n). The term 10 \log_{10} (4^n) = 20n \log_{10} 2 \approx 6.02n dB, and 10 \log_{10} (1.5) \approx 1.76 dB, giving the standard approximation \text{SQNR} \approx 6.02n + 1.76 \ \text{dB}. This derivation assumes no overload distortion and a uniform distribution of quantization errors.[14]

For arbitrary signals, the SQNR can be extended by normalizing the signal power to the quantizer's full-scale range. With \sigma_q^2 = \frac{\text{FSR}^2}{12 \cdot 4^n}, the formula becomes \text{SQNR} \approx 6.02n + 10 \log_{10} \left( \frac{12 P_s}{\text{FSR}^2} \right) \ \text{dB}. Here, the term 10 \log_{10} \left( \frac{12 P_s}{\text{FSR}^2} \right) accounts for the signal's loading factor relative to the quantizer range, which varies with the signal's amplitude distribution (for example, it yields the 1.76 dB offset for a full-scale sinusoid, where P_s = \frac{\text{FSR}^2}{8}). This form allows computation for non-sinusoidal inputs without assuming a specific waveform shape.[9]
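As a quick numerical check of these expressions, the loading-factor form can be evaluated for a full-scale sine and for a sine backed off from full scale. This is an illustrative sketch, not drawn from the cited sources; the bit depth and range are arbitrary example values.

```python
import math

def sqnr_db(n_bits: int, p_signal: float, fsr: float) -> float:
    """General SQNR formula: 6.02 n + 10 log10(12 P_s / FSR^2)."""
    return 6.02 * n_bits + 10 * math.log10(12 * p_signal / fsr ** 2)

n, fsr = 12, 2.0                        # 12-bit quantizer over a [-1, 1] range
full_scale_sine = (fsr / 2) ** 2 / 2    # P_s = FSR^2 / 8 for a full-scale sine
print(sqnr_db(n, full_scale_sine, fsr))       # ~74.0 dB, i.e. 6.02*12 + 1.76
print(sqnr_db(n, full_scale_sine / 4, fsr))   # ~6 dB lower: sine at half amplitude
```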
Influencing factors
Bit depth effects
The signal-to-quantization-noise ratio (SQNR) scales linearly with bit depth in uniform quantization systems: each additional bit doubles the number of quantization levels and halves the step size, which reduces the quantization noise power by a factor of four and improves the SQNR by approximately 6 dB.[1] This relationship arises because the number of levels, and hence the dynamic range, grows exponentially with the number of bits n (linearly when expressed in decibels), providing finer amplitude resolution and reducing the relative impact of quantization errors on the signal.[15] For a full-scale sinusoidal signal, the baseline SQNR formula yields specific values that illustrate this scaling. The following table compares SQNR for common bit depths; a short numerical check follows the table.

| Bit Depth (n) | SQNR (dB) |
|---|---|
| 8 | 49.9 |
| 16 | 98.1 |
| 24 | 146.2 |
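These table entries can also be reproduced empirically by quantizing a full-scale sine and measuring the signal and error powers directly. The sketch below is illustrative only; it assumes a mid-rise rounding quantizer over a [-1, 1] range, double-precision arithmetic, and an arbitrary sine frequency chosen so the error decorrelates from the signal.

```python
import numpy as np

def measured_sqnr_db(n_bits: int, num_samples: int = 1_000_000) -> float:
    """Quantize a full-scale sine with an n-bit mid-rise uniform quantizer
    and measure the SQNR from the signal and error powers."""
    delta = 2.0 / 2 ** n_bits                               # step size over [-1, 1]
    x = np.sin(2 * np.pi * 0.1234567 * np.arange(num_samples))
    q = delta * (np.floor(x / delta) + 0.5)                  # round to mid-rise levels
    q = np.clip(q, -1 + delta / 2, 1 - delta / 2)            # keep full-scale peaks in range
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - q) ** 2))

for n in (8, 16, 24):
    print(f"{n:2d} bits: {measured_sqnr_db(n):6.1f} dB")     # ~49.9, ~98.1, ~146.2 dB
```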