
Undersampling

Undersampling is a technique in signal processing in which a continuous-time signal is sampled at a rate lower than the Nyquist rate—twice the highest frequency component of the signal—resulting in aliasing, where higher-frequency components masquerade as lower frequencies in the sampled data. This phenomenon arises from the Nyquist-Shannon sampling theorem, which states that accurate reconstruction of a signal requires a sampling frequency at least twice its highest frequency to avoid distortion. While undersampling traditionally poses challenges by introducing artifacts that can degrade signal fidelity, it is often employed intentionally in modern applications such as bandpass or harmonic sampling for high-frequency signals, like intermediate frequencies (IFs) in radio receivers. In these scenarios, the sampling rate must exceed twice the signal's bandwidth but can be significantly lower than twice its carrier frequency, effectively down-converting the signal through aliasing without additional analog mixing stages. For example, a 70 MHz signal with a 20 MHz bandwidth can be accurately captured at 56 MSPS, aliasing it to a 14 MHz equivalent, thereby reducing power consumption and system complexity compared to traditional approaches. Key advantages of intentional undersampling include lower analog-to-digital converter (ADC) clock rates, which minimize power usage and costs while relaxing timing requirements for data capture in field-programmable gate arrays (FPGAs). It finds prominent use in radar systems, wireless communications, and defense applications, where wideband ADCs with sufficient input bandwidth enable efficient processing of high-frequency signals. However, to prevent unwanted aliasing from out-of-band noise or interferers, bandpass filtering is essential prior to sampling.

Fundamentals

Definition

Undersampling is a technique in which a bandpass-filtered signal is sampled at a rate lower than twice its highest frequency component, yet high enough to ensure that the aliased replicas of the signal do not overlap, allowing faithful reconstruction of the original signal. This approach intentionally leverages aliasing to downconvert the signal to a lower band, reducing the required sampling rate and hardware demands compared to conventional sampling. Unlike general downsampling, which involves decimating a low-pass signal after filtering to maintain its baseband representation without distortion, undersampling is specifically tailored for bandpass signals and exploits controlled aliasing as a form of frequency translation. The Nyquist criterion, which requires sampling at least twice the signal's maximum frequency for low-pass signals, is thus relaxed in undersampling scenarios. A practical example illustrates this: consider a bandpass signal centered at 100 MHz with a 10 MHz bandwidth, spanning 95–105 MHz; instead of sampling at the full Nyquist rate of over 200 MS/s, it can be undersampled at 21 MS/s (slightly above 2B = 20 MS/s, chosen so the band edges align with Nyquist zone boundaries), shifting the spectrum to 0–10 MHz without overlap. For undersampling to enable distortion-free reconstruction, the signal must be strictly bandpass in nature, devoid of low-frequency components that could alias into the desired band and corrupt the information.

Historical Context

The foundations of undersampling trace back to early 20th-century developments in sampling theory, where researchers began exploring the representation and reconstruction of band-limited signals. In 1915, Edmund Taylor Whittaker published work on interpolation expansions for functions represented by cardinal series, providing initial insights into sampling band-limited signals that would later inform undersampling extensions. Harry Nyquist built on this in 1928 with his analysis of telegraph transmission, establishing the critical sampling rate and alluding to possibilities for bandpass configurations beyond uniform lowpass assumptions. These contributions in the 1910s and 1920s highlighted the potential for reduced sampling rates under specific signal constraints, though practical applications remained limited by analog technology. Post-World War II advancements in radar and communications spurred further exploration of undersampling, particularly for bandpass signals in high-frequency systems. During the 1950s, Arthur Kohlenberg formalized second-order sampling techniques in his 1953 paper, demonstrating exact interpolation of band-limited functions using offset uniform sampling streams at rates as low as twice the signal bandwidth, enabling efficient digitization without full Nyquist compliance for centered bandpass spectra. This work marked a shift toward intentional use of aliasing in military and communication applications, where hardware constraints favored lower sampling rates for processing in receivers. The 1960s and 1970s saw formalization of undersampling through focused studies on bandpass sampling and intentional aliasing for frequency downconversion. Researchers such as D. A. Linden in 1959 examined uniform and nonuniform sampling of band-limited signals, laying groundwork for alias-tolerant methods in communications. By the 1970s, papers such as those presented at the 1970 Telemetering Conference explored noise effects and sampling rate relationships for bandpass signals, promoting deliberate aliasing to translate high-frequency components to baseband without additional mixers.
These efforts, exemplified in works on bandpass sampling, addressed practical challenges in telemetry and radio systems, reducing complexity in analog front-ends. Advancements in digital signal processing during the 1980s propelled undersampling toward widespread implementation, with improved algorithms and analog-to-digital converters (ADCs) enabling reliable aliasing control. The integration of fast Fourier transforms and filter banks allowed precise reconstruction from undersampled data, as seen in early prototypes. By the late 1980s, commercial ADCs supported bandpass modes, influencing designs in communications receivers and oscilloscopes where undersampling minimized power and hardware costs.

Theoretical Foundations

Nyquist-Shannon Sampling Theorem

The Nyquist–Shannon sampling theorem provides the fundamental criterion for sampling continuous-time signals without loss of information. It states that a continuous-time signal bandlimited to a highest frequency f_{\max} (meaning its Fourier transform is zero for all frequencies above f_{\max}) can be perfectly reconstructed from its discrete-time samples if the sampling rate f_s satisfies f_s \geq 2 f_{\max}, where 2 f_{\max} is termed the Nyquist rate. This condition ensures that the discrete samples capture all the information content of the original signal, preventing overlap in the frequency domain during reconstruction. The theorem, initially proposed by Harry Nyquist in 1928 for telegraph systems and rigorously proven by Claude Shannon in 1949, underpins all digital signal processing by defining the boundary between faithful representation and aliasing distortion. The proof of the theorem centers on the unique representation of bandlimited signals in the frequency domain and their reconstruction in the time domain. If the signal is sampled at or above the Nyquist rate, its spectrum repeats every f_s in the frequency domain without overlap, allowing ideal low-pass filtering to recover the original spectrum. The time-domain reconstruction is achieved through the Whittaker–Shannon interpolation formula, which expresses the continuous signal as an infinite sum of shifted sinc functions weighted by the sample values: x(t) = \sum_{n=-\infty}^{\infty} x(nT) \, \operatorname{sinc}\left( \frac{t - nT}{T} \right), where T = 1/f_s is the sampling interval and \operatorname{sinc}(u) = \sin(\pi u)/(\pi u). This formula, derived from the inverse Fourier transform of the bandlimited spectrum, guarantees exact recovery because the sinc functions form an orthogonal basis for bandlimited signals under the theorem's conditions. Shannon's formulation explicitly ties this to communication theory, showing that the degrees of freedom in a bandlimited signal match the number of samples per Nyquist interval. For baseband signals, whose frequency content lies between 0 and f_{\max}, adherence to the Nyquist–Shannon theorem is essential to avoid irreversible information loss.
Undersampling, where f_s < 2 f_{\max}, causes spectral components above the Nyquist frequency to alias into lower frequencies, corrupting the signal in a manner that cannot be undone without prior knowledge of the signal's structure. This loss arises because the sampling process inherently assumes the signal is bandlimited to f_s/2; any violation folds higher frequencies indistinguishably, rendering reconstruction impossible in general. Central to the theorem is the Nyquist frequency f_N = f_s / 2, which defines the highest frequency component that can be unambiguously represented in the sampled signal. Frequencies exceeding f_N will alias, appearing as lower-frequency imposters due to the periodic replication of the spectrum around multiples of f_s. The Nyquist frequency thus acts as the folding point, where the signal's spectrum begins to mirror itself, emphasizing the theorem's role as the baseline for any sampling strategy, including deliberate undersampling techniques that exploit specific signal properties to mitigate aliasing effects.
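The interpolation formula above can be evaluated numerically. The following sketch (using NumPy, with an assumed example of a 3 Hz tone sampled at 10 Hz) sums truncated shifted-sinc kernels: the reconstruction is exact at the sample instants, and accurate between them up to a small truncation residual from cutting off the infinite sum.

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT)/T)."""
    n = np.arange(len(samples))
    # np.sinc uses the normalized definition sin(pi*u)/(pi*u), matching the text
    return np.sum(samples * np.sinc((t[:, None] - n * T) / T), axis=1)

fs = 10.0                                  # sampling rate, above the 6 Hz Nyquist rate
T = 1.0 / fs
n = np.arange(200)                         # 20 s of samples (truncated infinite sum)
x_n = np.cos(2 * np.pi * 3.0 * n * T)      # a bandlimited 3 Hz tone

x_on = sinc_reconstruct(x_n, T, n * T)     # evaluate back at the sample instants
print(np.max(np.abs(x_on - x_n)))          # ~0: sinc is 1 at 0 and 0 at other integers

t_mid = np.arange(50, 150) * T + T / 2     # interior midpoints, away from the edges
err_mid = np.max(np.abs(sinc_reconstruct(x_n, T, t_mid)
                        - np.cos(2 * np.pi * 3.0 * t_mid)))
print(err_mid)                             # small residual from truncating the sum
```

With infinitely many samples the midpoint error would vanish; here it stays small because the neglected sinc tails decay away from the evaluation point.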

Aliasing in Undersampling

In undersampling, aliasing arises from the periodic replication of the signal's spectrum in the frequency domain, where each replica is centered at multiples of the sampling frequency f_s. Higher-frequency components beyond f_s/2 fold back into the principal frequency band [0, f_s/2], manifesting as lower-frequency aliases that distort the signal unless intentionally managed. The aliased frequency f_{\text{alias}} for an original frequency f is calculated as f_{\text{alias}} = |f - k f_s|, where k is the integer that places f_{\text{alias}} within [0, f_s/2]. For bandpass signals, which occupy a narrow bandwidth B = f_H - f_L (with lower edge f_L and upper edge f_H) far from DC, this folding can beneficially shift the entire spectrum to baseband without irreversible distortion, provided the replicas do not overlap. Unlike baseband signals, where aliasing typically causes loss of information, controlled aliasing in bandpass undersampling translates the high-frequency spectrum to a lower alias, enabling reconstruction if the original bandwidth is preserved and no extraneous spectral components intrude. To prevent overlap, the sampling rate must satisfy f_s \geq 2B, ensuring the spectral width (twice the bandwidth, accounting for positive and negative frequencies) fits within the sampling interval without collision. Proper band positioning is critical: the signal band must lie entirely within one of the allowable zones between replicas, avoiding regions where a replica's edge crosses the original band. For instance, overlap occurs if f_L < k f_s / 2 < f_H for some integer k, creating forbidden sampling rates; instead, f_s should be chosen so that every multiple k f_s / 2 falls outside (f_L, f_H), shifting the spectrum cleanly to baseband. In the frequency-domain representation, the original bandpass spectrum from f_L to f_H appears alongside replicas centered at \pm f_L + n f_s and \pm f_H + n f_s for integer n. Avoidance zones are visualized as gaps between these replicas: selecting f_s within these gaps positions the baseband alias (a shifted copy of the original band) without adjacency to other replicas, maintaining spectral integrity for subsequent digital processing or reconstruction.
The minimum sampling rate for lossless undersampling is thus f_{s_{\min}} = 2(f_H - f_L) = 2B, achievable when the band edges align with multiples of f_s/2; it captures the signal's bandwidth while leveraging the band's spectral location to reduce f_s far below the conventional Nyquist rate of 2 f_H.
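The folding rule and the forbidden-rate condition translate directly into a short numeric check. The sketch below (plain Python, using a 70–80 MHz band as an assumed example; the function names are illustrative) computes the alias of a given frequency and tests whether a candidate f_s keeps the band inside a single Nyquist zone:

```python
import math

def alias_frequency(f, fs):
    """Fold an input frequency f into the principal band [0, fs/2]."""
    r = f % fs                          # the spectrum replicates every fs
    return fs - r if r > fs / 2 else r  # upper half of each replica mirrors back

def overlap_free(f_L, f_H, fs):
    """True if no multiple of fs/2 lies strictly inside (f_L, f_H), i.e. the
    band fits in a single Nyquist zone and folds to baseband without overlap."""
    k = math.floor(2 * f_L / fs) + 1    # index of the first zone boundary above f_L
    return k * fs / 2 >= f_H

# 70-80 MHz band (B = 10 MHz) undersampled at exactly f_s = 2B = 20 MS/s
print(alias_frequency(75.0, 20.0))      # -> 5.0: the 75 MHz center folds to 5 MHz
print(overlap_free(70.0, 80.0, 20.0))   # -> True: band edges sit on zone boundaries
print(overlap_free(70.0, 80.0, 19.0))   # -> False: the boundary at 76 MHz cuts the band
```

The 95–105 MHz example from the Definition section fails this check at 20 MS/s (the boundary at 100 MHz splits the band) but passes at 21 MS/s, illustrating why the rate must be selected, not just minimized.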

Implementation Methods

Bandpass Sampling

Bandpass sampling enables the intentional downconversion of a signal occupying a high-frequency band to a lower-frequency band through reduced-rate sampling, provided the signal's bandwidth is preserved without overlap. This method relies on first isolating the target frequency band using an analog bandpass filter to eliminate out-of-band components that could cause unwanted aliasing. The filtered signal, with bandwidth B, is then sampled at a rate f_s \geq 2B, which is significantly lower than the 2 f_h required for the signal's highest frequency f_h, thereby shifting the spectrum to the range from 0 to f_s/2. In practice, the hardware setup demands a high-quality bandpass filter positioned before the analog-to-digital converter (ADC) to precisely define the passband and achieve sufficient stopband attenuation, typically greater than 60-80 dB, to suppress adjacent interferers. The ADC must exhibit low-distortion performance, such as a spurious-free dynamic range (SFDR) exceeding 70 dB, and minimal aperture jitter, especially when sampling at intermediate frequencies (IF) up to several hundred MHz. These requirements ensure that the aliased signal maintains fidelity during downconversion to baseband. A representative example involves a signal from 70 MHz to 80 MHz, with a 10 MHz bandwidth centered at f_c = 75 MHz. Sampling at f_s = 20 MS/s aliases this band to 0-10 MHz in the first Nyquist zone, allowing reconstruction using standard low-pass processing while avoiding spectral overlap. The choice of f_s is critical to optimize performance, particularly in minimizing quantization noise and sensitivity to parameter variations. Allowable rates fall within the intervals 2 f_H / n \leq f_s \leq 2 f_L / (n - 1) for integers 1 \leq n \leq \lfloor f_H / B \rfloor, where f_L and f_H are the band edges, but optimal values position the aliased spectrum for balanced placement within the Nyquist zone. One such selection is f_s = \frac{4 f_c}{2m + 1}, where m is a non-negative integer, centering the aliased spectrum around f_s/4 to approximate quadrature sampling conditions and reduce edge folding.
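The allowable-rate intervals and the f_s = 4 f_c / (2m + 1) selection rule can be sketched as a short enumeration. The helper names below are illustrative, and the example reuses the 70–80 MHz band from the text:

```python
import math

def valid_rate_ranges(f_L, f_H):
    """Enumerate the allowable bandpass sampling intervals
    2*f_H/n <= fs <= 2*f_L/(n-1), for n = 1 .. floor(f_H / B)."""
    B = f_H - f_L
    ranges = []
    for n in range(1, math.floor(f_H / B) + 1):
        lo = 2 * f_H / n
        hi = float("inf") if n == 1 else 2 * f_L / (n - 1)
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

def fs_quarter_rate(f_c, m):
    """fs = 4*fc/(2m+1): places the aliased center frequency at fs/4."""
    return 4 * f_c / (2 * m + 1)

# 70-80 MHz band: B = 10 MHz, fc = 75 MHz
for n, lo, hi in valid_rate_ranges(70.0, 80.0):
    print(f"n={n}: {lo:.2f} <= fs <= {hi:.2f} MS/s")
# the n=8 interval collapses to the single minimum rate fs = 2B = 20 MS/s

print(fs_quarter_rate(75.0, 7))   # -> 20.0 MS/s, aliasing the center to fs/4 = 5 MHz
```

Note that for this band the f_s/4 rule with m = 7 lands exactly on the minimum-rate interval, which is why 20 MS/s works cleanly in the example above.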

Uniform and Non-Uniform Undersampling

Uniform undersampling refers to the process of sampling a bandpass signal at regular intervals using a reduced sampling frequency f_s, where f_s \geq 2B and B denotes the signal's bandwidth, under the assumption of perfect bandpass isolation to avoid aliasing overlaps between positive and negative frequency components. This technique intentionally induces aliasing to translate the high-frequency bandpass signal into the low-frequency baseband (DC to f_s/2) for digital processing, provided the signal's center frequency and bandwidth position satisfy specific conditions to prevent spectral folding. For instance, a signal with bandwidth B = 4 MHz centered at 72.5 MHz can be undersampled at f_s = 10 MSPS, aliasing it to baseband without requiring intermediate frequency conversion. Non-uniform undersampling, in contrast, utilizes irregular sampling times—such as jittered, randomized, or multi-rate patterns—to acquire data from sparse signals, leveraging compressed sensing principles that exploit the signal's sparsity in a transform domain such as the frequency spectrum. In this approach, samples are taken at non-equispaced instants, forming a measurement matrix \Phi that captures essential signal information at sub-Nyquist rates, enabling reconstruction without full uniform coverage. This method extends beyond traditional uniform techniques by accommodating signals that are sparse rather than strictly bandlimited, allowing for effective capture of wideband sparse spectra. A key advantage of non-uniform undersampling lies in its ability to handle signals with much wider effective bandwidths using significantly fewer samples than uniform methods, as the sampling rate can scale with the signal's sparsity level K rather than the full Nyquist bandwidth.
Reconstruction complexity is addressed through \ell_1-norm minimization, formulated as: \hat{x} = \arg\min \|x\|_1 \quad \text{subject to} \quad y = \Phi x, where y represents the non-uniform samples, x is the sparse signal, and \Phi encodes the irregular sampling instants; this optimization recovers the signal stably if \Phi satisfies the restricted isometry property. For example, in sparse-spectrum signals—such as those in cognitive radio where only a few bands are occupied—non-uniform undersampling permits an average sampling rate f_s \ll 2B, with B as the total monitored spectrum bandwidth, reducing data volume while preserving recovery accuracy.
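A toy version of this pipeline can be sketched with NumPy. The \ell_1 program itself needs a convex solver, so this sketch substitutes greedy orthogonal matching pursuit, a common stand-in with similar recovery behavior for very sparse signals. All sizes, bin positions, and the random sample pattern are illustrative assumptions: a 3-sparse spectrum of length 128 is recovered from 64 irregular time samples.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 128, 64, 3                     # signal length, sample count, sparsity

support_true = [5, 37, 90]               # occupied spectral bins (illustrative)
X = np.zeros(N, dtype=complex)
X[support_true] = [1.0, 0.9, 0.8]

# Inverse-DFT matrix: each column maps a spectral bin to its time-domain waveform
F_inv = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / N
x_time = F_inv @ X

picks = np.sort(rng.choice(N, size=M, replace=False))  # irregular sample instants
Phi = F_inv[picks]                       # measurement matrix from the chosen rows
y = x_time[picks]                        # the sub-Nyquist, non-uniform samples

# Orthogonal matching pursuit: greedy surrogate for the l1 recovery program
residual, support = y.copy(), []
for _ in range(K):
    corr = np.abs(Phi.conj().T @ residual)   # correlate residual with every atom
    corr[support] = 0.0                      # never re-pick a selected bin
    support.append(int(np.argmax(corr)))
    A = Phi[:, support]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares refit on support
    residual = y - A @ coef

print(sorted(support))                   # recovered occupied bins
```

Had the 64 samples been taken uniformly (every other instant), bins k and k + 64 would produce identical measurement columns and recovery would be ambiguous; the irregular pattern is what breaks that aliasing symmetry.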

Applications

In Analog-to-Digital Conversion

In analog-to-digital converters (ADCs), undersampling is integrated to enable the digitization of high-frequency signals at reduced sampling rates, thereby lowering the required clock speeds for devices handling frequencies in the hundreds of MHz or higher. This approach aliases the input signal into a lower-frequency band, allowing ADCs with moderate sampling capabilities—such as 10 MSPS—to process intermediate frequency (IF) signals up to 72.5 MHz with a 4 MHz bandwidth, avoiding the need for high-speed clocks that would otherwise demand gigasample-per-second rates. By reducing clock frequencies, undersampling decreases power consumption in the ADC's sampling circuitry and overall system, while also cutting costs by simplifying digital processing requirements and eliminating auxiliary analog components like mixers. Track-and-hold (T/H) circuits in undersampling ADCs are specifically adapted to capture high-frequency inputs with minimal distortion, often using external sample-and-hold amplifiers (SHAs) to enhance performance. These SHAs, such as the AD9100 operating at 30 MSPS, provide high bandwidth and low aperture jitter, achieving spurious-free dynamic range (SFDR) values up to 72 dB at 71.4 MHz inputs, which is critical for preserving linearity during the hold phase in undersampling modes. Internal T/H stages in modern ADCs are designed with wide analog input bandwidths to support this, ensuring the circuit can track signals well beyond the Nyquist frequency without introducing excessive settling errors. A key performance metric in undersampled ADCs is the degradation of signal-to-noise ratio (SNR) due to noise leakage, where noise from adjacent spectral replicas folds into the signal band despite anti-aliasing filters. This folded noise reduces SNR because the aliased noise spectra overlap the desired band, with degradation quantified as D_{\text{SNR}} = 10 \log(n_p) dB, where n_p represents the number of aliased positive-frequency images under the bandpass sampling condition.
The effective number of bits (ENOB) in such systems is then calculated from the measured SNR (in dB) using the formula: \text{ENOB} = \frac{\text{SNR} - 1.76}{6.02}. This expression derives from the theoretical quantization noise floor for an ideal ADC, allowing assessment of dynamic range loss in undersampled scenarios. Commercial ADCs from Analog Devices exemplify these capabilities, such as the AD9042, a 12-bit device at 40 MSPS with 80 dB SFDR at 20 MHz inputs, suitable for undersampling 70 MHz IF signals in applications like cellular base stations. For GHz-range operations, the AD9695 offers 14-bit resolution at up to 1.3 GSPS with a 2 GHz analog input bandwidth, enabling direct RF undersampling of wideband signals in communications systems.
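The two formulas combine into a quick back-of-the-envelope calculation; the 62 dB baseband SNR and n_p = 4 folded images used here are hypothetical values for illustration only:

```python
import math

def snr_degradation_db(n_p):
    """SNR loss when noise from n_p aliased positive-frequency images folds in-band."""
    return 10 * math.log10(n_p)

def enob(snr_db):
    """Effective number of bits from a measured SNR, ideal-ADC quantization model."""
    return (snr_db - 1.76) / 6.02

snr_baseband = 62.0                                 # hypothetical measured SNR, dB
snr_under = snr_baseband - snr_degradation_db(4)    # four aliased images fold in

print(round(snr_degradation_db(4), 2))   # -> 6.02 dB penalty from noise folding
print(round(enob(snr_baseband), 2))      # -> 10.01 bits at baseband
print(round(enob(snr_under), 2))         # -> 9.01 bits after folding
```

The 6.02 dB penalty for n_p = 4 corresponds to losing exactly one effective bit, a convenient rule of thumb when budgeting undersampled ADC performance.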

In Communications and Radar Systems

In communications systems, undersampling, often implemented as intermediate frequency (IF) sampling or bandpass sampling, enables direct digitization of high-frequency signals in receivers without the need for analog mixers or downconverters. This technique leverages controlled aliasing to shift the signal to baseband or a lower IF digitally, simplifying the receiver architecture and reducing analog components that introduce noise and distortion. By sampling at a rate greater than twice the signal bandwidth but below the Nyquist rate for the carrier frequency, systems achieve efficient downconversion, as detailed in the theory of bandpass sampling. For instance, in software-defined radios (SDRs), undersampling at rates like 56 MSPS can handle a 20-MHz signal at a 70-MHz IF, eliminating analog stages and lowering power consumption compared to approaches that require higher rates such as 200 MSPS. A practical case study is found in GSM and EDGE receivers, where undersampling supports the 200 kHz channel bandwidth with low sampling frequencies, often multiples of 13 MHz but optimized below traditional Nyquist limits. This allows direct IF sampling of the narrowband channels at rates as low as twice the channel bandwidth (e.g., around 400 kHz minimum), enabling cost-effective digitization while maintaining signal integrity through digital filtering to isolate the desired channel from aliases. Such implementations are common in cellular base stations, where the technique digitizes multiple adjacent channels simultaneously before selective processing. In radar systems, undersampling is applied to received pulses to facilitate range-Doppler processing while significantly reducing data rates. By employing compressive sensing techniques, such as random projections, radar echoes from frequency- or phase-modulated pulses (e.g., linear frequency modulated pulses with 200 MHz bandwidth) can be sampled at sub-Nyquist rates, like 33.3 MHz instead of 200 MHz, using incoherent measurements that project the signal into a sparse domain.
This enables reconstruction of range-Doppler maps via algorithms like basis pursuit denoising, preserving target detection despite undersampling ratios of 4:1 or 6:1, though with some SNR trade-offs. The approach lowers the burden on analog-to-digital converters and supports deployment in resource-constrained environments. Undersampling offers key benefits in multi-band communication systems by allowing simultaneous capture of multiple carriers across non-adjacent bands with a single receiver chain. By carefully selecting the sampling frequency (e.g., 58 MHz for bands at 1906 MHz and 2122 MHz), aliases of desired channels fold into accessible baseband regions without overlap, enabling digital separation and reducing the need for multiple parallel ADCs. This enhances flexibility in SDRs supporting diverse services, lowers overall power and hardware costs, and facilitates cooperative multi-system operations.
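The two-band example can be verified numerically by folding each carrier into the first Nyquist zone of the shared 58 MHz rate; the small helper below is an illustrative sketch:

```python
def alias_frequency(f, fs):
    """Fold a carrier frequency into the first Nyquist zone [0, fs/2]."""
    r = f % fs                          # spectrum replicates every fs
    return fs - r if r > fs / 2 else r  # upper half of each replica mirrors back

fs = 58.0   # MHz, the shared sampling rate from the two-band example
for f_c in (1906.0, 2122.0):
    print(f"{f_c} MHz -> {alias_frequency(f_c, fs)} MHz")
# 1906 MHz folds to 8.0 MHz and 2122 MHz to 24.0 MHz: distinct, separable aliases
```

Because the two aliases land at well-separated baseband frequencies, digital filters can split the carriers after a single ADC, which is the mechanism behind the single-chain multi-band receiver described above.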

Advantages and Limitations

Benefits Over Conventional Sampling

Undersampling offers significant reductions in hardware costs compared to conventional Nyquist-rate sampling, as it enables the use of lower-speed analog-to-digital converters (ADCs) that are less expensive and more readily available. High-speed ADCs required for Nyquist sampling of wideband signals can cost thousands of dollars and demand advanced fabrication processes, whereas undersampling allows deployment of low-rate ADCs, potentially cutting costs by orders of magnitude in systems handling gigahertz-range signals. The technique also simplifies receiver architectures by obviating the need for high-frequency mixers or upconverters, which are typically essential in traditional superheterodyne receivers to downconvert signals before sampling. In undersampling schemes, direct digitization of RF signals at sub-Nyquist rates performs implicit frequency translation through aliasing, streamlining the signal chain and reducing component count, calibration complexity, and potential points of failure. This architectural simplicity is particularly advantageous in compact, integrated systems such as software-defined radios. Furthermore, undersampling enhances spectral efficiency by permitting the capture of wider effective signal bands at the same rate that Nyquist sampling would require for narrower bands. For sparse or bandpass signals, the sampling rate scales with the actual occupied bandwidth rather than the full Nyquist rate, allowing receivers to process multigigahertz spectra using rates as low as one-tenth of conventional requirements without loss of information. Power consumption benefits substantially from these lower rates; for instance, reducing the sampling rate to 10% of Nyquist can decrease overall power draw by a similar factor, alleviating thermal management issues in high-performance applications like radar.

Potential Drawbacks and Mitigation

Undersampling techniques are particularly sensitive to imperfections in the anti-aliasing filter, which in bandpass sampling applications must precisely reject signals from adjacent Nyquist zones to prevent leakage into the band of interest. Unlike the low-pass filters used in conventional sampling, these bandpass filters are more challenging to design with sharp roll-off characteristics, allowing unwanted components to fold into the desired band and corrupt the signal. Quantization noise in undersampled systems is exacerbated as noise from multiple aliased bands folds into the baseband, elevating the overall noise floor and degrading signal-to-noise ratio (SNR). This effect is quantified by the SNR degradation D_{\text{SNR}} \approx 10 \log(n_p) dB, where n_p is the number of aliased bands contributing noise, or more generally D_{\text{SNR}} = 10 \log(B_{\text{EA}} / (f_s / 2)) with B_{\text{EA}} as the equivalent noise bandwidth. The total in-band noise power can be expressed as N = n_i + (n_p - 1) n_o, where n_i and n_o are the in-band and out-of-band noise power densities, respectively. Phase noise from the sampling clock is amplified in undersampling by the ratio of input to sampling frequency, with the single-sideband phase noise increase given by \Delta = 20 \log(f_{\text{IN}} / f_s) dB. This amplification arises because clock jitter causes non-uniform sampling instants, transferring and magnifying low-frequency clock noise to the output spectrum, which is critical in high-frequency applications requiring low jitter (e.g., under 1 ps rms for 70-80 dB SNR at 70 MHz inputs). For multi-tone signals, undersampling imposes limitations on dynamic range, as aliases from strong tones in adjacent bands can intermodulate and mask weaker components in the desired band, reducing spurious-free dynamic range (SFDR) and effective number of bits (ENOB). This is evident in receivers where SFDR drops significantly above f_s/2, often requiring 70-80 dB of isolation to maintain performance across tones.
The aliased noise power P_{\text{alias}} resulting from filter imperfections can be calculated as the integral over the out-of-band spectrum weighted by the filter response: P_{\text{alias}} = \int_{f_s/2}^{\infty} |H(f)|^2 S_n(f) \, df where H(f) is the analog anti-aliasing filter magnitude response and S_n(f) is the input noise power spectral density. To mitigate these issues, adaptive filtering techniques can be applied post-sampling to suppress residual aliasing artifacts by dynamically adjusting digital filters based on estimated interference. Dithering, involving the addition of broadband noise (typically 0.5 LSB rms) to the input, randomizes quantization errors, decorrelates them from the signal, and improves SFDR by up to 14 dB in undersampled systems. Hybrid oversampling-undersampling schemes, such as bandpass sigma-delta modulators, combine internal oversampling for noise shaping with external undersampling to shift quantization noise away from the passband, achieving effective SNR improvements (e.g., 65 dB for a 455 kHz IF) while preserving bandwidth efficiency.
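Both degradation formulas lend themselves to quick estimates when budgeting an undersampled front end; the figures below (a 70 MHz input with a 14 MS/s clock, and an equivalent noise bandwidth five times f_s/2) are illustrative assumptions:

```python
import math

def clock_phase_noise_gain_db(f_in, f_s):
    """Phase-noise amplification in undersampling: Delta = 20*log10(f_IN / f_s)."""
    return 20 * math.log10(f_in / f_s)

def snr_degradation_db(b_ea, f_s):
    """General SNR loss D = 10*log10(B_EA / (f_s/2)) for equivalent noise bandwidth B_EA."""
    return 10 * math.log10(b_ea / (f_s / 2))

print(round(clock_phase_noise_gain_db(70e6, 14e6), 2))  # -> 13.98 dB clock-noise gain
print(round(snr_degradation_db(35e6, 14e6), 2))         # -> 6.99 dB noise-folding loss
```

A 5:1 input-to-clock ratio thus magnifies clock phase noise by about 14 dB while the same 5:1 noise-bandwidth ratio costs about 7 dB of SNR, which is why clock purity and filter selectivity dominate undersampling error budgets.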
