Time-to-digital converter
A time-to-digital converter (TDC) is an electronic circuit or device that measures the time interval between two events, such as the start and stop edges of electrical signals, and converts this duration into a corresponding digital numerical value for further processing.[1][2] TDCs operate by quantifying time differences typically in the range of picoseconds to microseconds, achieving resolutions as fine as hundreds of femtoseconds (e.g., 330 fs) in modern implementations.[3] The basic principle of a TDC involves starting a timing mechanism upon detection of the first event (start signal) and stopping it upon the second event (stop signal), then encoding the elapsed time into a digital code.[1] This process often combines a coarse measurement, such as counting clock cycles from a reference oscillator, with fine interpolation to resolve sub-clock-period intervals, thereby overcoming the limitations of clock frequency alone.[2]

Analog TDCs traditionally convert the time interval to a voltage proportional to the duration, which is then digitized using an analog-to-digital converter (ADC), but this approach suffers from issues like temperature sensitivity and poor linearity.[2] In contrast, fully digital TDCs, increasingly implemented in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), leverage CMOS scaling for improved resolution, lower power consumption, and robustness against process-voltage-temperature (PVT) variations.[3][1]

Common TDC architectures include counter-based designs for coarse timing, which achieve resolutions limited by the reference clock period (e.g., 5 ns at 200 MHz), and delay-line methods for finer granularity.[1] Delay-line TDCs propagate signals through chains of delay elements, such as buffers or inverters, where the resolution equals the propagation delay per element (often 50–100 ps in advanced nodes).[2] Vernier delay-line architectures enhance this by employing two delay chains with slightly different propagation times, yielding a resolution equal to their difference (as fine as 1 ps or better),[3] while inverter-based variants can double the effective resolution but introduce encoding complexities like thermometer-to-binary conversion.[1]

Key performance metrics for TDCs include resolution (least significant bit, LSB), differential non-linearity (DNL), integral non-linearity (INL), dynamic range, and power efficiency, along with calibration techniques to mitigate metastability and mismatches.[3]

TDCs find critical applications in time-of-flight systems for LiDAR and RADAR distance measurements, all-digital phase-locked loops (ADPLLs) for high-speed wireless communications in millimeter-wave bands, high-energy physics experiments requiring picosecond timing precision, and medical imaging such as positron emission tomography (PET).[1][2][3] Recent advancements emphasize hybrid digital architectures that extend input ranges, improve linearity through calibration, and reduce silicon area and conversion time, making TDCs essential in nanoscale CMOS processes for emerging technologies like 5G and beyond. As of 2025, further improvements include FPGA-based designs with resolutions approaching 1 ps (e.g., 1.15 ps) and novel methods such as stochastic and noise-shaping techniques.[3][4]

Fundamentals
Definition and Principles
A time-to-digital converter (TDC) is an electronic circuit designed to measure the time interval between two events and convert that duration into a proportional digital code.[2][5] The process effectively digitizes an analog quantity, time, by quantizing it into discrete units.[6] TDCs are essential in systems where precise timing is critical, such as particle physics experiments or phase-locked loops.[2]

The basic operational principle of a TDC revolves around a start signal that initiates the measurement and a stop signal that terminates it, capturing the elapsed time between these events.[5][6] This interval is quantified using mechanisms such as reference clocks or delay elements, which divide the time into smaller, measurable segments.[2] The start signal triggers the timing process, while the stop signal latches the accumulated time, which is then encoded into a digital representation, often through sampling or propagation techniques.[5] In essence, the TDC outputs a digital value N that corresponds to the measured time interval \Delta t, quantized in units of the least significant bit (LSB) resolution. This relationship is expressed as

N = \frac{\Delta t}{\text{LSB}},
where the LSB represents the smallest resolvable time unit, typically on the order of picoseconds in modern implementations.[2][5] This quantization inherently introduces a discretization error bounded by the LSB size, ensuring the output is a faithful digital approximation of the continuous time domain.[6]
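The quantization relationship can be illustrated with a minimal Python sketch; the 10 ps LSB and the 137 ps interval are illustrative values rather than parameters of any specific implementation.

```python
def tdc_output_code(delta_t_ps: float, lsb_ps: float) -> int:
    """Quantize a time interval (in picoseconds) into a TDC output code.

    The code N approximates delta_t / LSB; the truncation error is
    bounded by one LSB, as discussed above.
    """
    return int(delta_t_ps // lsb_ps)

# Example: a 137 ps interval measured with a 10 ps LSB
code = tdc_output_code(137.0, lsb_ps=10.0)   # -> 13
error_ps = 137.0 - code * 10.0               # -> 7 ps, always below one LSB
```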
Resolution and Dynamic Range
The resolution of a time-to-digital converter (TDC) refers to the smallest detectable time interval, typically expressed in picoseconds (ps) or femtoseconds (fs), representing the least significant bit (LSB) time step T_{LSB}. This metric is fundamentally limited by the underlying quantization process and is influenced by key design factors such as the reference clock frequency and interpolation techniques. In basic counter-based TDCs, the resolution is determined by the clock period T_{CP}, where T_{LSB} = T_{CP}, as the counter increments at each clock edge; higher clock frequencies thus improve resolution but increase power consumption and design complexity. Interpolation methods, such as delay-line or Vernier architectures, subdivide the clock period to achieve sub-gate-delay resolution, for example T_{LSB} = T_{CP} / k with interpolation factor k, enabling resolutions down to a few picoseconds in advanced CMOS processes.[7]

The dynamic range (DR) of a TDC defines the maximum measurable time interval, that is, the span from the minimum resolvable step to the full-scale input, often given by DR = 2^n \times T_{LSB}, where n is the effective number of output bits (giving 2^n quantization levels). This formula arises from the digital output's binary representation, where the counter or delay stages scale the LSB to cover the desired range; for instance, a 10-bit TDC with T_{LSB} = 10 ps yields a DR of approximately 10 ns. In single-stage designs, extending the DR requires more stages or bits, which proportionally increases area and latency, while multi-stage architectures, which combine coarse (e.g., counter) and fine (e.g., interpolator) sections, allow wider ranges without excessively degrading resolution.

Trade-offs are inherent: pursuing higher resolution typically narrows the DR in fixed-area implementations unless mitigated by multi-stage approaches, which balance the two by allocating coarse measurement for large intervals and fine interpolation for precision, though at the cost of added calibration needs and potential latency. Stochastic TDCs illustrate effective resolution enhancement through statistical averaging: resolution improves as \sigma / \sqrt{N}, where \sigma is the standard deviation of the arbiter time offsets and N is the number of arbiters, analogous to estimating an event probability from many parallel binomial trials.[7]

Quantization noise introduces an inherent uncertainty in TDC measurements due to the discrete nature of time quantization, modeled as a uniform error over one LSB with standard deviation \sigma_q = \frac{T_{LSB}}{\sqrt{12}}. This noise floor limits the effective precision, particularly for signals near the resolution limit, and is analogous to that in analog-to-digital converters; for a 5 ps LSB, \sigma_q \approx 1.44 ps. In noise-shaping TDCs, such as delta-sigma architectures, this noise is pushed to higher frequencies via oversampling, improving in-band resolution without reducing T_{LSB}. Overall, these metrics underscore the design challenge of optimizing resolution and DR for specific applications while managing noise and trade-offs.
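The relationships among clock period, interpolation factor, bit width, dynamic range, and quantization noise can be tied together in a short Python sketch; the 5 ns clock, 64x interpolation factor, and 10-bit output are illustrative assumptions rather than figures from a particular design.

```python
import math

def tdc_metrics(clock_period_ps: float, interp_factor: int, n_bits: int):
    """First-order TDC metrics from the formulas in this section.

    clock_period_ps : reference clock period T_CP
    interp_factor   : interpolation factor k (delay-line or Vernier subdivision)
    n_bits          : effective output bit width n
    """
    t_lsb = clock_period_ps / interp_factor   # T_LSB = T_CP / k
    dyn_range = (2 ** n_bits) * t_lsb         # DR = 2^n * T_LSB
    sigma_q = t_lsb / math.sqrt(12)           # quantization noise standard deviation
    return t_lsb, dyn_range, sigma_q

# Illustrative case: 5 ns clock (200 MHz), 64x interpolation, 10-bit output
t_lsb, dr, sigma_q = tdc_metrics(5000.0, 64, 10)
print(f"LSB = {t_lsb:.1f} ps, DR = {dr / 1000:.1f} ns, sigma_q = {sigma_q:.2f} ps")
# LSB = 78.1 ps, DR = 80.0 ns, sigma_q = 22.55 ps
```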
Measurement Techniques

Coarse Measurement Methods
The basic counter method represents a foundational approach in time-to-digital converters (TDCs) for coarse time measurement, where a high-frequency reference clock increments a digital counter from the arrival of a start signal until a stop signal, thereby quantifying the elapsed time through the number of clock cycles. This technique prioritizes simplicity and achieves a wide dynamic range suitable for intervals spanning multiple clock periods, though its inherent resolution is constrained to the clock period T_{clk}, typically from hundreds of picoseconds to several nanoseconds depending on the clock frequency. For instance, with a 1 GHz clock, the resolution is 1 ns, limiting precision for sub-nanosecond events but enabling reliable coverage of longer durations up to the counter's bit depth.

In measuring time intervals, the counter method distinguishes between single-shot measurements, which capture a one-time event duration, and period measurements, which assess repetitive signals such as oscillator cycles for average timing. The core formula for the interval is \Delta t = N \times T_{clk} + t_{rem}, where N is the integer count from the counter and t_{rem} (< T_{clk}) represents the fractional remainder, often approximated or discarded in coarse implementations to maintain computational efficiency. This approach ensures deterministic operation without requiring complex interpolation, making it ideal for applications demanding robustness over sub-cycle accuracy.[8]

To enhance the effective resolution beyond the clock period without hardware modifications, the statistical counter technique averages over M repeated trials of the same interval, leveraging the statistical properties of timing jitter to reduce uncertainty. The effective standard deviation improves as \sigma_{eff} = \frac{T_{clk}}{\sqrt{M}}, allowing, for example, a 10 ns clock to achieve an effective 1 ns resolution after 100 averages, though at the cost of increased measurement time. This method is particularly useful in noisy environments where repeated sampling mitigates deterministic errors.[9]

For counter technology, stability of the reference clock is paramount, often achieved using ring oscillators, which generate on-chip high-frequency signals through cascaded inverters, or phase-locked loop (PLL)-derived clocks that synchronize to an external reference for low phase noise and jitter below 1 ps RMS. Ring oscillators provide compact, fully integrated solutions with frequencies up to several GHz in CMOS processes, while PLLs ensure long-term frequency accuracy essential for extended measurement ranges. These technologies enable coarse counters to operate reliably in integrated circuits, and they are commonly combined with fine interpolation methods in hybrid TDCs to improve overall system performance.[10][11]
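A behavioral sketch of the counter method and the statistical averaging technique is given below in Python; the 10 ns clock period, the uniformly random start phase, and the 123.4 ns test interval are assumptions made for illustration.

```python
import random

T_CLK_NS = 10.0  # illustrative 100 MHz reference clock

def counter_measure(delta_t_ns: float) -> float:
    """Single-shot coarse measurement: delta_t is approximated by N * T_clk.

    The start signal is asynchronous to the free-running clock, so its
    phase within the clock period is modeled as a uniform random offset.
    """
    phase = random.uniform(0.0, T_CLK_NS)
    n = int((delta_t_ns + phase) // T_CLK_NS)   # clock edges counted between start and stop
    return n * T_CLK_NS

def averaged_measure(delta_t_ns: float, m_trials: int) -> float:
    """Statistical counter technique: average M repeated single-shot results.

    The effective uncertainty shrinks roughly as T_clk / sqrt(M).
    """
    return sum(counter_measure(delta_t_ns) for _ in range(m_trials)) / m_trials

# A 123.4 ns interval: single shots land on 120 or 130 ns,
# but averaging 100 trials recovers the interval to about 1 ns.
print(counter_measure(123.4))
print(averaged_measure(123.4, 100))
```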
Fine Measurement Methods

Fine measurement methods in time-to-digital converters (TDCs) enable sub-clock-period resolution, typically in the picosecond range, by interpolating the time interval between start and stop signals using analog or digital techniques that subdivide the reference clock cycle. These approaches complement coarse counting by providing precision within each clock period, often achieving resolutions from tens of picoseconds down to a few picoseconds without requiring high-frequency clocks. Common implementations include analog and digital interpolation schemes, which are widely used in applications demanding high timing accuracy, such as particle physics detectors and laser ranging systems.

The ramp interpolator employs an analog ramp generator triggered by the start signal, producing a linearly increasing voltage by charging a capacitor with a constant current. The stop signal halts the ramp, and the resulting voltage is digitized using an analog-to-digital converter (ADC) or compared to fixed reference levels using an array of comparators. The digitized value or comparator position indicates the time difference, with resolution determined by the ramp's slope and the comparator or ADC precision, often achieving 10 ps in early designs. However, this method is sensitive to voltage nonlinearity and temperature variations in the ramp generator, which can introduce integral nonlinearity errors unless compensated.[12]

The Vernier method utilizes two parallel delay lines with slightly different propagation delays, T_1 and T_2 (where T_1 > T_2), to measure fine time intervals through coincidence detection. The start signal propagates along the slower line (delay T_1 per stage), while the stop signal follows the faster line (delay T_2 per stage); the stage at which they coincide encodes the time difference as \Delta t = N \times (T_1 - T_2), where N is the number of stages until coincidence, yielding resolutions as fine as 30 ps in CMOS implementations stabilized by delay-locked loops. This technique provides high resolution over a limited range but requires careful matching of delay differences to minimize metastability.

Digital delay-line TDCs consist of a chain of buffers or inverters forming a tapped delay line, where the start signal propagates through the taps, and sampling flip-flops capture the state at the stop signal to encode the phase position. The resolution is given by \frac{T_{\text{clk}}}{N_{\text{taps}}}, where T_{\text{clk}} is the clock period and N_{\text{taps}} is the number of taps, typically achieving resolutions set by the gate delay per tap, around 10-20 ps in modern CMOS processes. Bubble errors or nonlinearity in the delay chain can degrade performance and are often mitigated by calibration.[13]

Pulse-shrinking techniques iteratively reduce the input pulse width using a chain of delay elements with asymmetric rising and falling delays until the pulse vanishes, with the number of iterations or counter value quantifying the original time interval. This method suits cyclic or repetitive measurements, offering resolutions down to 4.7 ps per stage in CMOS, and is advantageous for all-digital integration without analog components. It excels in low-power scenarios but may suffer from process variations affecting shrinkage uniformity.[14]

These fine methods are frequently combined in hybrid TDCs with coarse counters to extend the dynamic range while maintaining picosecond precision across multiple clock cycles.[13]
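A behavioral model of the Vernier principle is sketched below in Python; the per-stage delays of 105 ps and 100 ps (giving a 5 ps resolution) and the 256-stage limit are illustrative assumptions.

```python
def vernier_stage_count(delta_t_ps: float, t1_ps: float = 105.0,
                        t2_ps: float = 100.0, max_stages: int = 256) -> int:
    """Behavioral model of a Vernier delay-line fine stage.

    The start edge enters the slow line (T1 per stage) and the stop edge
    enters the fast line (T2 per stage) delta_t later; the stop edge gains
    (T1 - T2) per stage, and the stage at which it catches up encodes
    delta_t as approximately N * (T1 - T2).
    """
    start_time = 0.0
    stop_time = delta_t_ps
    for n in range(max_stages):
        if stop_time <= start_time:
            return n             # coincidence detected at stage n
        start_time += t1_ps      # slow line delay per stage
        stop_time += t2_ps       # fast line delay per stage
    return max_stages            # interval exceeds the Vernier range

# A 37 ps interval with a 5 ps stage difference resolves at stage 8 (about 40 ps)
n = vernier_stage_count(37.0)
print(n, n * (105.0 - 100.0))
```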
Advanced Measurement Methods

Advanced measurement methods in time-to-digital converters (TDCs) leverage statistical processing, oversampling, and probabilistic elements to attain sub-picosecond resolutions, extending beyond the deterministic limits of delay-based techniques by redistributing noise or exploiting inherent randomness for enhanced precision.[3]

Noise-shaping TDCs employ gated-ring oscillators (GROs), in which oscillation is initiated and halted by input signals to accumulate phase shifts proportional to the time interval, converting it into a digital phase count.[15] Delta-sigma (ΔΣ) modulation integrates with the GRO to perform first- or higher-order noise shaping, shifting quantization noise to out-of-band frequencies and allowing oversampling to suppress in-band noise, thereby achieving effective resolutions below 1 ps rms, such as 147 fs in third-order implementations.[15] These methods are particularly effective in applications requiring low in-band noise without extensive calibration.[16]

Stochastic TDCs harness thermal noise or device mismatch as a natural dithering source to randomize quantization errors, enabling fine resolution through statistical averaging across multiple measurements.[3] Optimized arbiter selection, often using redundancy such as dual time-offset arbiters, enhances linearity by mitigating systematic offsets, yielding resolutions around 360 fs while providing inherent immunity to process-voltage-temperature (PVT) variations via ensemble averaging.

Successive approximation (SA) TDCs operate via a binary search algorithm on a chain of delay elements with exponentially scaled taps, iteratively refining the time estimate bit-by-bit to minimize hardware overhead and achieve compact implementations.[17] In hybrid configurations combining time and voltage domains, SA techniques can deliver resolutions as fine as 630 fs, balancing area efficiency with high precision for moderate dynamic ranges.[17]

Loop-delay and Vernier hybrid TDCs recycle delay elements in a looped architecture for the coarse stage, paired with a fine Vernier line that exploits differential delay mismatches to extend the measurement range significantly without proportional increases in area or power.[18] This recycling approach maintains sub-10 ps resolutions over wide input spans, such as hundreds of nanoseconds, by reusing the same delay chain multiple times per conversion.[19]

These advanced methods are often prototyped in field-programmable gate arrays (FPGAs) to validate designs before ASIC integration.[20]
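The first-order noise-shaping behavior of a GRO-based TDC can be illustrated with a simple behavioral model in Python; the 20 ps delay-cell value and the repeated 73 ps input are illustrative assumptions, and the model ignores circuit-level effects such as gating skew and leakage.

```python
def gro_tdc(intervals_ps, t_delay_ps=20.0):
    """Behavioral model of a gated-ring-oscillator (GRO) TDC.

    The ring oscillates only while gated on; its phase is frozen between
    measurements, so the fractional delay-cell residue of one conversion
    is carried into the next. The quantization error is thereby
    first-order noise-shaped, as in a delta-sigma modulator.
    """
    codes = []
    residue = 0.0                          # frozen oscillator phase (in delay-cell units)
    for dt in intervals_ps:
        phase = residue + dt / t_delay_ps  # phase accumulated during this interval
        n = int(phase)                     # delay-cell transitions counted (output code)
        residue = phase - n                # leftover phase carried to the next conversion
        codes.append(n)
    return codes

# Repeated 73 ps inputs with a 20 ps cell: individual codes are 3 or 4,
# but oversampling and averaging converge on 73 / 20 = 3.65 cells.
codes = gro_tdc([73.0] * 1000)
print(sum(codes) / len(codes) * 20.0)      # approximately 73 ps
```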
Performance Characteristics

Sources of Errors
Time-to-digital converters (TDCs) are susceptible to several error sources that degrade their linearity and precision, primarily arising from imperfections in the underlying delay elements, clock signals, and environmental factors. These errors manifest as deviations of the measured time interval from the ideal value, impacting applications requiring high temporal resolution such as time-of-flight measurements and phase-locked loops. Understanding these mechanisms is essential for assessing TDC performance limits.

Integral nonlinearity (INL) represents the maximum deviation of the TDC's transfer function from an ideal straight line, typically expressed in least significant bit (LSB) units. This error stems from systematic mismatches in the propagation delays of delay-line elements, often due to local process variations during fabrication that cause uneven gate delays across the chain. As a result, INL introduces a bending in the overall characteristic curve, leading to cumulative distortion that can significantly reduce measurement accuracy over the full dynamic range.

Differential nonlinearity (DNL) quantifies the variation of individual step sizes of the TDC output code from the average LSB width, defined as

DNL = \max_k \left( |LSB_k - LSB_{avg}| \right),

where LSB_k is the width of the k-th step and LSB_{avg} is the average step size, normalized to LSB units. This error originates from local process variations in adjacent delay elements, such as random fluctuations in transistor thresholds or interconnect parasitics, which alter the uniformity of quantization steps. Excessive DNL can result in missing or duplicated codes, compromising the TDC's ability to resolve fine time differences reliably.
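A short Python sketch for computing per-code DNL and the resulting INL from measured bin widths (for example, obtained from a code-density test) follows; the bin-width values are illustrative.

```python
def dnl_inl(bin_widths_ps):
    """Compute per-code DNL and INL (in LSB) from measured bin widths.

    DNL_k = (LSB_k - LSB_avg) / LSB_avg; INL is the running sum of DNL,
    i.e. the deviation of the transfer curve from a straight line.
    """
    lsb_avg = sum(bin_widths_ps) / len(bin_widths_ps)
    dnl = [(w - lsb_avg) / lsb_avg for w in bin_widths_ps]
    inl, running = [], 0.0
    for d in dnl:
        running += d
        inl.append(running)
    return dnl, inl

# Illustrative delay-line bin widths (ps) with mismatch; the average LSB is 10 ps
widths = [9.5, 10.2, 11.0, 9.1, 10.4, 9.8]
dnl, inl = dnl_inl(widths)
print(max(abs(d) for d in dnl), max(abs(x) for x in inl))  # worst-case DNL and INL
```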
Jitter and noise contribute random fluctuations to the time measurement, encompassing clock phase noise from the reference oscillator and thermal or shot noise in analog front-end components such as comparators. Clock jitter introduces timing uncertainty in the start and stop signals, while thermal noise affects delay element stability, and shot noise arises in charge-based sampling. The total effective noise is often modeled as the root-sum-square combination:

\sigma_{total} = \sqrt{\sigma_{jitter}^2 + \sigma_q^2},
where \sigma_{jitter} is the rms jitter and \sigma_q is the quantization noise, typically LSB / \sqrt{12} for uniform quantization. This combined noise limits the single-shot precision, particularly in high-frequency operations where jitter dominates.[21]

Process-voltage-temperature (PVT) variations induce drifts in TDC performance by altering the delay characteristics of circuit elements, such as CMOS inverters in tapped delay lines. Process variations include global effects like wafer doping inconsistencies and local mismatches from lithography, while voltage fluctuations affect threshold voltages and temperature changes modify carrier mobility, collectively causing resolution degradation and gain shifts. These variations can lead to drifts of up to several LSBs in effective resolution without compensation, exacerbating nonlinearity in deployed systems.
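As a sanity check on the root-sum-square noise model above, the following Python sketch simulates repeated single-shot measurements with Gaussian jitter and a randomly phased quantization grid; the 5 ps LSB, 2 ps rms jitter, and fixed test interval are illustrative assumptions.

```python
import math
import random

def single_shot_sigma(lsb_ps: float, jitter_rms_ps: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of single-shot precision for a fixed interval.

    Each trial adds Gaussian timing jitter and quantizes the result on an
    LSB grid with a random phase offset; the spread of the measurements
    should approach sqrt(sigma_jitter^2 + LSB^2 / 12).
    """
    true_dt = 1234.5                                   # fixed test interval in ps
    samples = []
    for _ in range(trials):
        jittered = true_dt + random.gauss(0.0, jitter_rms_ps)
        offset = random.uniform(0.0, lsb_ps)           # random phase of the LSB grid
        measured = math.floor((jittered + offset) / lsb_ps) * lsb_ps - offset
        samples.append(measured)
    mean = sum(samples) / trials
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / trials)

print(single_shot_sigma(lsb_ps=5.0, jitter_rms_ps=2.0))   # simulated sigma_total
print(math.sqrt(2.0 ** 2 + 5.0 ** 2 / 12))                # predicted, about 2.47 ps
```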