Pan–Tompkins algorithm
The Pan–Tompkins algorithm is a real-time digital signal processing method for detecting QRS complexes in electrocardiogram (ECG) signals, designed to identify the characteristic sharp deflections representing ventricular depolarization in the heart's electrical activity. Developed in 1985 by biomedical engineers Jiapu Pan and Willis J. Tompkins, it processes ECG data through a sequence of filters and operations—bandpass filtering to isolate QRS frequencies, differentiation to emphasize slope changes, squaring to highlight peaks, and moving-window integration to assess width—followed by adaptive thresholding to locate R-waves with minimal false detections.[1] This approach enables computationally lightweight analysis suitable for clinical monitoring and automated diagnostic systems.[2]
The algorithm was specifically created to address challenges in real-time ECG analysis, such as noise from muscle artifacts, power-line interference, and baseline wander, which can obscure QRS features in ambulatory or Holter monitoring applications. By employing integer arithmetic on microprocessors like the Z80 or NSC800, it achieves hardware-efficient implementation without requiring floating-point operations, making it practical for embedded medical devices. Its adaptive thresholds dynamically adjust to variations in QRS morphology and heart rate, reducing the need for manual tuning and improving reliability across diverse patient populations.[1]
Key components include a bandpass filter with a 5–12 Hz passband to enhance QRS energy while attenuating irrelevant frequencies, a five-point derivative filter for slope extraction, nonlinear squaring to amplify QRS peaks relative to slower waves, and a 150 ms integration window to capture the complex's duration. Decision rules incorporate dual thresholds—one for refractory periods post-detection and a lower one for search-back in noisy segments—to handle missed beats or premature ventricular contractions. These elements collectively prioritize sensitivity without excessive false positives, distinguishing it from earlier analog-based detectors.[1]
Widely regarded as a benchmark for QRS detection, the Pan–Tompkins algorithm demonstrated 99.3% accuracy on the MIT/BIH Arrhythmia Database, with only 0.239% false negatives and 0.437% false positives across over 116,000 beats from 48 half-hour excerpts. It has become a foundational technique in cardiac arrhythmia monitoring, wearable ECG devices, and extensions to fetal ECG analysis, influencing subsequent algorithms that build upon its filtering and thresholding framework.[1][3]
Introduction
Overview
The Pan–Tompkins algorithm is a real-time method for detecting QRS complexes in electrocardiogram (ECG) signals, enabling the identification of individual heartbeats through digital analysis of signal slope, amplitude, and width.[2] Developed by Jiapu Pan and Willis J. Tompkins, it was introduced in a 1985 publication in the IEEE Transactions on Biomedical Engineering.[2] The algorithm is designed for applications such as arrhythmia monitoring and automated ECG analysis, where accurate and timely heartbeat detection is essential.[2]
It takes as input a raw digital ECG signal, typically sampled at 200 samples per second, and produces as output a series of timestamps or markers indicating the locations of R-peaks or QRS complex onsets.[2] The overall workflow begins with preprocessing stages that enhance the QRS-related features while suppressing noise, followed by rule-based decision mechanisms to classify and confirm detected peaks.[2]
In evaluations on the MIT/BIH Arrhythmia Database, the algorithm achieves a detection sensitivity of 99.3%, with a false negative rate of 0.239% and a false positive rate of 0.437%, demonstrating high positive predictivity and suitability for real-time processing.[2]
Historical Development
The Pan–Tompkins algorithm was developed in 1985 by Jiapu Pan and Willis J. Tompkins at the University of Wisconsin–Madison, addressing the need for an automated, real-time method to detect QRS complexes in electrocardiogram (ECG) signals.[2] Prior to this, manual annotation of ECG data was labor-intensive, particularly for clinical analysis and long-term Holter monitoring, where false detections could compromise arrhythmia diagnosis and ambulatory patient monitoring.[1] The algorithm was motivated by the requirement for a reliable, computationally efficient approach suitable for microprocessor implementation, enabling applications in arrhythmia monitors and reducing reliance on human intervention.[2]
The original algorithm was detailed in the seminal paper "A Real-Time QRS Detection Algorithm," published in IEEE Transactions on Biomedical Engineering, Volume BME-32, Number 3, March 1985, pages 230–236.[2] Initial evaluation involved 48 half-hour ECG excerpts from the MIT-BIH Arrhythmia Database, comprising 109,514 beats from diverse patient recordings.[1][4] The algorithm achieved a 99.3% QRS detection rate, with a total failure rate of 0.675% (0.437% false positives and 0.239% false negatives), demonstrating high sensitivity and positive predictivity even in noisy conditions.[2]
Following its publication, the Pan–Tompkins algorithm rapidly influenced ECG processing in the 1980s and 1990s, establishing a benchmark for QRS detection due to its simplicity, real-time performance, and accuracy.[5] It was implemented in early microprocessor-based systems, such as Z80 assembly language programs, and inspired modifications like adaptive thresholding by Hamilton and Tompkins in 1986, which further improved robustness.[5][6] This early adoption shaped subsequent software and hardware for clinical cardiology and heart rate variability analysis, becoming a foundational method in biomedical engineering.[2]
Signal Preprocessing
Bandpass Filtering
The bandpass filtering stage in the Pan–Tompkins algorithm serves as the initial preprocessing step to mitigate various sources of noise in electrocardiogram (ECG) signals, thereby enhancing the signal-to-noise ratio and facilitating accurate QRS complex detection. Specifically, it attenuates low-frequency baseline wander caused by respiration or electrode movement (typically below 5 Hz) and high-frequency noise such as power line interference at 50–60 Hz or muscle artifacts above 25 Hz, while preserving the spectral content associated with the QRS complex. This noise reduction allows for the use of lower detection thresholds in subsequent stages, improving overall sensitivity without increasing false positives.[7]
The filter is designed as a recursive digital bandpass filter with integer coefficients, constructed by cascading a high-pass filter to eliminate baseline drift and a low-pass filter to suppress high-frequency noise. The approximate 3 dB passband is 5–12 Hz, targeted to align with the dominant frequency range of QRS complexes (5–15 Hz), ensuring that the steep slopes of the Q, R, and S waves are emphasized relative to slower P and T waves. In implementations suitable for offline analysis, this is often realized as a bidirectional infinite impulse response (IIR) filter using forward-backward processing to achieve zero-phase distortion, avoiding signal shifts that could misalign fiducial points; however, the original real-time design employs causal recursive filtering for microprocessor efficiency.[7][2]
The mathematical formulation approximates the bandpass transfer function through these cascaded components, assuming a sampling rate of 200 Hz. The low-pass filter, with a cutoff around 11–12 Hz and a gain of 36, follows the difference equation:
y_{LP}[n] = 2y_{LP}[n-1] - y_{LP}[n-2] + x[n] - 2x[n-6] + x[n-12]
This introduces a delay of 6 samples. The high-pass filter, with a cutoff around 5 Hz and a gain of 32, uses:
y_{HP}[n] = 32x[n-16] - y_{HP}[n-1] - x[n] + x[n-32]
with a delay of 16 samples. The overall bandpass output is obtained by applying the high-pass filter to the low-pass output, y_{BP} = y_{HP}(y_{LP}(x)), enabling integer arithmetic for low computational cost in real-time applications. These equations were derived to match the QRS spectrum while minimizing hardware requirements on devices like the Z80 microprocessor.[7][2]
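Rewriting these difference equations in the z-domain (a direct algebraic restatement of the filters above, not an additional element of the original design) gives the cascaded transfer functions
H_{LP}(z) = \frac{(1 - z^{-6})^2}{(1 - z^{-1})^2}, \qquad H_{HP}(z) = \frac{-1 + 32z^{-16} + z^{-32}}{1 + z^{-1}}
Factoring 1 - z^{-6} = (1 - z^{-1})(1 + z^{-1} + \cdots + z^{-5}) shows that the low-pass stage has the stated DC gain of 6^2 = 36.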
This filtering step effectively amplifies the high-frequency components of the QRS complex's slope, making it stand out against attenuated P and T waves, baseline artifacts, and noise, thus preparing the signal for time-domain enhancement in the derivative stage. The 5–15 Hz passband parameters were selected based on spectral analysis of QRS energy in standard ECG databases, ensuring robust performance across varying heart rates and noise conditions.[7][2]
Differentiation
The differentiation step computes the first-order time derivative of the bandpass-filtered ECG signal to provide slope information on the QRS complex, thereby emphasizing its steep transitions relative to the gentler slopes of the P and T waves.[2]
This enhances the high-frequency content associated with the rapid ventricular depolarization in the QRS complex while attenuating lower-frequency components from slower atrial and repolarization waves.[2]
The derivative is approximated using a simple five-point digital differentiator with the difference equation
y(nT) = \frac{1}{8T} \left[ -x(nT - 2T) - 2x(nT - T) + 2x(nT + T) + x(nT + 2T) \right],
where x denotes the input samples, y the output derivative signal, n the discrete time index, and T the sampling interval. In this symmetric form the filter is non-causal, since it uses future samples; the real-time implementation instead delays the output by two samples to obtain a causal equivalent.[2]
The corresponding transfer function is H(z) = \frac{1}{8T} (-z^{-2} - 2z^{-1} + 2z + z^{2}), which yields a nearly linear amplitude response from DC to about 30 Hz, approximating an ideal differentiator over the frequency band containing most QRS energy.[2]
The output exhibits positive peaks corresponding to upward slopes (such as the Q-to-R transition) and negative peaks for downward slopes (such as the R-to-S transition), thereby amplifying QRS-related changes while suppressing flat or slowly varying segments.[2]
This differentiator is tailored for ECG sampling rates of 200 samples per second, as used in the original implementation with 12-bit analog-to-digital conversion hardware prevalent in the mid-1980s.[2]
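As an illustration, the causal (two-sample-delayed) form of this derivative can be applied as a short FIR filter. The following MATLAB/Octave sketch assumes the bandpass-filtered ECG is stored in a vector bandpass_signal and that fs = 200 Hz; the variable names are illustrative rather than taken from the original implementation.
fs = 200;                                        % sampling rate in Hz, so T = 1/fs
% Causal five-point derivative: the symmetric form above delayed by two samples,
% y(n) = (1/(8T)) * [ x(n) + 2x(n-1) - 2x(n-3) - x(n-4) ]
b_deriv = (fs / 8) * [1 2 0 -2 -1];
derivative = filter(b_deriv, 1, bandpass_signal);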
Squaring
The squaring step in the Pan–Tompkins algorithm performs a nonlinear transformation on the differentiated ECG signal to rectify negative values and emphasize the magnitude of slope changes, particularly those associated with the QRS complex. This operation discards directional information from the derivative, allowing the algorithm to focus on the absolute steepness of waveform transitions, which is crucial for distinguishing the prominent QRS slopes from shallower features like P or T waves. By converting the signal into a purely positive domain, squaring enhances the detectability of QRS-related peaks while mitigating the impact of biphasic derivative patterns.[2]
The method entails point-wise squaring of each sample in the differentiated signal, a straightforward computation that applies the transformation instantaneously without requiring windowing or additional filtering. Mathematically, this is formulated as
s[n] = \left( d[n] \right)^2
where d[n] represents the output of the differentiation stage at discrete time index n. The operation ensures all output values are non-negative, and the nonlinearity disproportionately boosts larger derivative magnitudes (values greater than 1 in magnitude are amplified) while attenuating smaller ones.[2]
A primary effect of this step is the transformation of the QRS complex's derivative, which often exhibits biphasic (positive-negative) excursions due to the Q-R-S sequence, into a unimodal positive pulse centered around the R-wave slope. This unimodal shape simplifies downstream processing for peak identification and further suppresses low-amplitude noise, as squaring reduces the relative influence of small fluctuations (e.g., baseline wander remnants or high-frequency artifacts) compared to the amplified QRS signals. Additionally, the operation steepens the frequency response curve of the preceding derivative filter, helping to limit false positives from T-waves that might otherwise carry elevated spectral energy.[2]
From a computational perspective, the squaring operation is highly efficient, relying solely on multiplication that can be implemented using integer arithmetic on low-power microprocessors. This simplicity makes it well-suited for real-time ECG analysis in resource-constrained embedded systems, such as portable monitors, without introducing significant processing delays or hardware demands.[2]
Moving Window Integration
The moving window integration serves as the concluding preprocessing stage in the Pan–Tompkins algorithm, integrating the squared derivative signal over a brief temporal window to approximate the QRS complex duration while attenuating high-frequency noise components.[2]
This integration is performed as a running sum across a 150 ms window, equivalent to approximately 30 samples at the 200 Hz sampling rate specified in the original implementation. The window size is selected to align with the typical QRS duration of 80–120 ms observed in adult ECGs, ensuring comprehensive coverage of the complex without excessive broadening that could obscure distinctions from adjacent features.[2]
The mathematical formulation computes the integrated signal i as the mean of the squared signal s over the window:
i[n] = \frac{1}{30} \sum_{k=0}^{29} s[n-k]
For efficient real-time computation, a recursive approximation is employed:
i[n] = i[n-1] + \frac{1}{30} \left( s[n] - s[n-30] \right)
This approach minimizes arithmetic operations by leveraging prior values.[2]
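As a minimal sketch (MATLAB/Octave, assuming the squared signal is derived from a vector named derivative and fs = 200 Hz; this is not the original integer-arithmetic code), the squaring and integration stages reduce to an element-wise square followed by a 30-sample moving average:
fs = 200;                                        % assumed sampling rate (Hz)
window_width = round(0.150 * fs);                % 150 ms window: 30 samples at 200 Hz
squared = derivative .^ 2;                       % point-wise squaring of the derivative output
% Moving-window integration as a causal moving average over the last 30 samples
integrated = filter(ones(1, window_width) / window_width, 1, squared);
The filter-based form is mathematically identical to the running sum above; the recursive update simply evaluates the same average with one addition and one subtraction per sample.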
The resulting signal features broadened QRS peaks spanning roughly 150 ms, with diminished noise, thereby enhancing the signal's suitability for subsequent threshold-based peak identification. The squared signal, which emphasizes instantaneous slope magnitudes, forms the direct input to this integration.[2]
QRS Detection Rules
Fiducial Mark Placement
Fiducial mark placement in the Pan–Tompkins algorithm serves to locate candidate R-wave peaks as reference points, or fiducial marks, for determining heartbeat timing within the ECG signal. These marks approximate the onset or center of the QRS complex, enabling subsequent analysis of cardiac rhythm and arrhythmia detection. The process relies on the preprocessed integrated signal, which emphasizes the QRS complex's characteristic slope and duration following bandpass filtering, differentiation, squaring, and moving-window integration.[1]
The method involves identifying peaks in the integrated signal that surpass the adaptive threshold, with the fiducial mark determined from the rising edge of the integrated waveform, corresponding to the R-wave peak in the original ECG signal—typically by locating the point of maximal slope or searching back a fixed number of samples from the integrated peak. This step transforms the envelope-like integrated waveform into discrete markers aligned with ventricular depolarization events. By focusing on features of this signal, the algorithm achieves high sensitivity to QRS features while minimizing interference from baseline wander or noise.[1]
In the detection process, the integrated signal i is examined sample by sample. A peak in i exceeding the threshold triggers placement of the fiducial mark at the corresponding R-peak position in the original signal. This criterion ensures precise localization of prominent R-wave candidates without requiring complex feature extraction.[1]
These fiducial marks act as the initial anchors in the overall workflow, facilitating the classification of detected events and serving as points for interval measurements like RR intervals. As the algorithm processes incoming ECG data, the marks are iteratively refined based on confirmed QRS detections, supporting continuous real-time monitoring.[1]
To address scenarios with multiple candidate peaks from a single QRS complex, the algorithm enforces a refractory period of 200 ms after the previous fiducial mark, during which only the highest qualifying peak is selected as the valid fiducial. This mechanism avoids duplicate markings for the same beat, enhancing detection reliability in noisy or variable ECG recordings.[1]
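The refractory rule can be sketched as a pass over candidate peak indices in which any candidate falling within 200 ms of the previously accepted mark is either discarded or, if taller, replaces it. The MATLAB/Octave fragment below is an illustrative simplification (cand_idx holds candidate indices and integrated the integrated signal; the original algorithm applies this logic on the fly during detection):
fs = 200;                                        % assumed sampling rate (Hz)
refractory = round(0.200 * fs);                  % 200 ms refractory period in samples
fiducials = [];
for k = 1:length(cand_idx)
    if isempty(fiducials) || cand_idx(k) - fiducials(end) > refractory
        fiducials(end+1) = cand_idx(k);          % accept a new fiducial mark
    elseif integrated(cand_idx(k)) > integrated(fiducials(end))
        fiducials(end) = cand_idx(k);            % within 200 ms: keep only the taller peak
    end
end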
Threshold Setting
The threshold setting in the Pan-Tompkins algorithm utilizes adaptive mechanisms to dynamically adjust detection thresholds, enabling robust QRS complex identification amid ECG signal variability, such as amplitude fluctuations from patient motion, electrode shifts, or physiological changes. These thresholds float above noise levels while tracking genuine QRS peaks, thereby minimizing false detections and missed beats across diverse signal conditions.[2]
Thresholds are initialized during an initial 2-second learning phase, starting at low values to broadly capture early signal and noise features without prior assumptions about ECG morphology. Two parallel threshold sets are employed: one for the bandpass-filtered ECG waveform (THRESHOLD_F1 and THRESHOLD_F2) and one for the moving-window integration output (THRESHOLD_I1 and THRESHOLD_I2), reflecting the algorithm's multi-stage feature extraction. Signal peak estimates (SPKF for filtered, SPKI for integration) and noise peak estimates (NPKF, NPKI) are established from peaks detected in this phase, serving as the foundation for ongoing adaptations. The algorithm maintains two exponentially weighted moving averages of the RR interval: RR-AVERAGE1 (short-term, updated with weight 1/16 for rapid adaptation to arrhythmias) and RR-AVERAGE2 (long-term, weight 1/64 for stable rhythms). Irregular rhythms are detected when the new RR interval exceeds 1.66 × RR-AVERAGE1 or is less than 0.66 × RR-AVERAGE1, triggering enhanced sensitivity.[2]
Updates to these estimates occur continuously using exponentially weighted moving averages, with a primary weighting of 0.125 for the current peak and 0.875 for the prior estimate to ensure gradual adaptation. For a peak classified as a QRS signal in the integration waveform (PEAKI), the signal estimate updates as:
\text{SPKI}(n) = 0.125 \times \text{PEAKI} + 0.875 \times \text{SPKI}(n-1)
If the peak is deemed noise, the noise estimate follows analogously:
\text{NPKI}(n) = 0.125 \times \text{PEAKI} + 0.875 \times \text{NPKI}(n-1)
Identical updates apply to the filtered waveform using PEAKF, SPKF, and NPKF. The primary signal threshold for integration (THRESHOLD_I1) is then computed relative to noise levels:
\text{THRESHOLD\_I1} = \text{NPKI} + 0.25 \times (\text{SPKI} - \text{NPKI})
The secondary threshold, used for lower-amplitude events, is half this value (THRESHOLD_I2 = 0.5 × THRESHOLD_I1), with corresponding formulas for THRESHOLD_F1 and THRESHOLD_F2. In cases of irregular heart rates or potential low-amplitude QRS detections via the secondary threshold, updates accelerate by doubling the weight on the current peak (e.g., SPKI = 0.25 × PEAKI + 0.75 × SPKI), and thresholds may halve temporarily to enhance sensitivity. These mechanisms ensure the thresholds decay toward the noise baseline during quiet periods and rise with prominent QRS activity, keeping each primary threshold a fixed 25% of the way between the running noise and signal estimates.[2]
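These update rules translate directly into a few lines of running-estimate code. The MATLAB/Octave sketch below updates the integration-waveform estimates for a single detected peak PEAKI, assuming the surrounding detection logic has already classified the peak as signal or noise and that the running estimates have been initialized (variable names mirror the quantities above):
if peak_is_qrs
    SPKI = 0.125 * PEAKI + 0.875 * SPKI;         % running signal-peak estimate
else
    NPKI = 0.125 * PEAKI + 0.875 * NPKI;         % running noise-peak estimate
end
THRESHOLD_I1 = NPKI + 0.25 * (SPKI - NPKI);      % primary threshold
THRESHOLD_I2 = 0.5 * THRESHOLD_I1;               % secondary (search-back) threshold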
A 200 ms refractory period follows each QRS detection, during which no new detections are permitted, preventing redundant triggers from prolonged QRS slopes or artifacts and stabilizing threshold updates by avoiding clustered false positives. This period aligns with physiological constraints, as no two QRS complexes can occur closer than approximately 200 ms in normal rhythms.[2]
Search Back Mechanism
The search back mechanism in the Pan–Tompkins algorithm serves to recover QRS complexes that may have been missed during initial forward detection due to high adaptive thresholds, particularly in scenarios involving arrhythmias or degraded signal quality where low-amplitude events fail to exceed the primary signal threshold.[1] This retrospective procedure enhances the algorithm's sensitivity by addressing false negatives without compromising specificity in normal conditions.[8]
The mechanism is triggered when no QRS complex is detected within a time window equal to 1.66 times the current average RR interval (using RR-AVERAGE2), indicating that a beat has most likely been missed.[1] Upon expiration of this search interval—following an initial 200 ms refractory period after the last confirmed beat—the algorithm initiates a search back over the interval since the last detection, identifying the maximum peak exceeding the secondary threshold (e.g., THRESHOLD_I2).[1] If this peak aligns with physiologically plausible timing relative to the prior beat (e.g., avoiding intervals shorter than the refractory period), it is classified as a valid QRS fiducial point.[1] This temporary threshold relaxation leverages the ongoing adaptive thresholding process to focus on subtle features that were borderline in the forward pass.[1]
Upon confirmation of a beat via search back, the average RR interval is immediately updated using the new interval measurement, resetting the search parameters and accelerating threshold adaptations to twice the normal rate for subsequent cycles to better track potential ongoing irregularities.[1] This dynamic update helps mitigate cumulative errors in noisy segments by restoring nominal detection sensitivity promptly.[1]
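A minimal sketch of the search-back check, in MATLAB/Octave with illustrative variable names (n for the current sample index, last_qrs for the index of the last accepted beat, refractory for the 200 ms period in samples, rr_average2 for the long-term RR estimate), assuming the preprocessing outputs above are available:
if (n - last_qrs) > 1.66 * rr_average2
    % No beat within 166% of the expected RR interval: re-examine the elapsed segment
    segment = integrated(last_qrs + refractory : n);
    [peak_val, rel_idx] = max(segment);
    if peak_val > THRESHOLD_I2
        new_qrs = last_qrs + refractory + rel_idx - 1;                     % recovered beat location
        rr_average2 = 0.875 * rr_average2 + 0.125 * (new_qrs - last_qrs);  % illustrative RR update
        last_qrs = new_qrs;
    end
end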
The search back mechanism contributes significantly to the algorithm's overall performance, achieving high detection rates on the MIT/BIH Arrhythmia Database, particularly for low-amplitude QRS complexes as seen in bundle branch block recordings, where it reduces missed beats and supports an aggregate sensitivity exceeding 99%.[1]
T-Wave Discrimination
The T-wave discrimination rule in the Pan-Tompkins algorithm serves to prevent false positive QRS detections caused by tall T-waves, which can mimic QRS complexes particularly in conditions like ventricular hypertrophy or when R-R intervals are short (less than 360 ms). This step is crucial for maintaining high detection accuracy, as large T-waves may produce peaks in the preprocessed integrated signal that resemble those of QRS complexes.[2]
The discrimination process activates for candidate peaks occurring after the 200 ms refractory period but within 360 ms of the previous QRS detection. In such cases, the algorithm compares the maximal slope of the candidate waveform with the maximal slope of the preceding QRS complex; if the candidate's slope is less than half that of the preceding QRS, the candidate is classified as a T-wave and rejected. This slope-based criterion exploits the sharper, steeper profile of true QRS complexes compared with the more rounded morphology of T-waves, whose lower slopes produce correspondingly smaller derivative peaks.[2]
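In code form the rule reduces to a slope comparison inside the 200–360 ms window. The MATLAB/Octave sketch below assumes a helper max_slope that returns the maximal absolute value of the derivative signal in a short window around a given index, with fs the sampling rate in hertz; both the helper and the variable names are illustrative rather than part of the published implementation:
rr_candidate = cand_idx - last_qrs;              % interval since the previous QRS, in samples
if rr_candidate > round(0.200 * fs) && rr_candidate < round(0.360 * fs)
    if max_slope(derivative, cand_idx) < 0.5 * max_slope(derivative, last_qrs)
        is_t_wave = true;                        % rounded, low-slope peak: reject as a T-wave
    else
        is_t_wave = false;                       % steep slope: accept as a QRS candidate
    end
end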
Introduced in the original 1985 formulation of the algorithm, this T-wave discrimination mechanism addressed potential false detections from T-wave interference, contributing to an overall QRS detection sensitivity and positive predictivity of 99.3% on the MIT/BIH Arrhythmia Database. Without such shape analysis, preliminary tests showed elevated false positive rates of approximately 5-10% in noisy or abnormal ECGs, highlighting the rule's role in robust real-time performance.[2]
Implementation and Applications
Algorithm Implementation
The Pan–Tompkins algorithm is typically implemented for ECG signals sampled at 200 Hz, as specified in the original design, but adaptations for higher rates like 360 Hz—common in databases such as MIT-BIH—are achieved through downsampling or filter coefficient adjustments to maintain the roughly 5–12 Hz passband.[1][9] For rates exceeding 360 Hz, downsampling to 200–360 Hz is recommended to avoid aliasing and ensure filter efficacy without excessive computational overhead.[10]
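For recordings at other rates, one straightforward option (an illustrative sketch; the resample function is provided by MATLAB's Signal Processing Toolbox and the Octave signal package) is to resample to 200 Hz before applying the integer-coefficient filters:
fs_in = 360;                                     % e.g., MIT-BIH recordings
fs_target = 200;                                 % rate assumed by the original filter design
ecg_200 = resample(ecg_signal, fs_target, fs_in);  % rational-factor resampling with anti-aliasing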
The algorithm exhibits O(N) linear time complexity, where N is the number of samples, due to its sequential filtering and thresholding operations, making it efficient for resource-constrained devices.[11] On 1980s-era microprocessors like the Z80, the implementation in assembly language enabled real-time processing at 200 Hz, demonstrating its suitability for embedded systems even with integer arithmetic.[1]
A high-level pseudocode outline for the algorithm processes an input ECG signal to produce a list of R-peak indices, integrating preprocessing and detection in a loop over samples:
function r_peaks = pan_tompkins(ecg_signal, fs)
    % Initialize timing constants and running estimates
    N = length(ecg_signal)
    refractory = round(0.200 * fs)      % 200 ms refractory period, in samples
    window_width = round(0.150 * fs)    % 150 ms integration window (30 samples at 200 Hz)
    rr_avg1 = 0                         % short-term average RR interval
    rr_avg2 = 0                         % longer-term RR estimate used for the search-back limit
    signal_level = 0                    % running signal-peak estimate (SPKI)
    noise_level = 0                     % running noise-peak estimate (NPKI)
    qrs_threshold1 = 0                  % primary adaptive threshold
    qrs_threshold2 = 0                  % secondary (search-back) threshold
    % (a short learning phase would normally seed these estimates from the first ~2 s of data)
    % Step 1: Bandpass filtering (approx. 5-12 Hz, cascaded integer-coefficient recursive filters)
    %   Low-pass:  y(n) = 2y(n-1) - y(n-2) + x(n) - 2x(n-6) + x(n-12)
    %   High-pass: y(n) = 32x(n-16) - y(n-1) - x(n) + x(n-32)
    bandpass_signal = apply_bandpass_filter(ecg_signal)
    % Step 2: Differentiation (causal five-point derivative, two-sample delay)
    differentiated = zeros(N, 1)
    for i = 5 to N
        differentiated(i) = (bandpass_signal(i) + 2*bandpass_signal(i-1) - 2*bandpass_signal(i-3) - bandpass_signal(i-4)) / 8
        % constant 1/(8T) gain factor absorbed by the adaptive thresholds
    end
    % Step 3: Squaring
    squared = differentiated .^ 2
    % Step 4: Moving-window integration over the 150 ms window
    integrated = zeros(N, 1)
    for i = window_width to N
        integrated(i) = sum(squared(i-window_width+1 : i)) / window_width
    end
    % Step 5: QRS detection with adaptive thresholds, refractory period, and search-back
    r_peaks = []
    last_qrs = 0
    for i = 1 to N
        peak_i = integrated(i)
        if peak_i > qrs_threshold1 and (i - last_qrs) > refractory
            % Candidate QRS: confirm against slope and noise rules on the filtered waveform
            if is_QRS_candidate(peak_i, bandpass_signal(i), noise_level)
                r_peaks.append(i)
                signal_level = 0.125 * peak_i + 0.875 * signal_level
                if last_qrs > 0
                    rr = i - last_qrs
                    rr_avg1 = 0.125 * rr + 0.875 * rr_avg1
                    rr_avg2 = 0.125 * rr + 0.875 * rr_avg2
                end
                last_qrs = i
            else
                noise_level = 0.125 * peak_i + 0.875 * noise_level
            end
        else if last_qrs > 0 and rr_avg2 > 0 and (i - last_qrs) > 1.66 * rr_avg2
            % Search back over the elapsed interval using the lower threshold
            back_peak = find_peak_in_back(integrated, last_qrs + refractory, i, qrs_threshold2)
            if back_peak > 0
                r_peaks.append(back_peak)
                signal_level = 0.25 * integrated(back_peak) + 0.75 * signal_level
                last_qrs = back_peak
            end
        end
        % Adaptive thresholds derived from the running signal and noise estimates
        qrs_threshold1 = noise_level + 0.25 * (signal_level - noise_level)
        qrs_threshold2 = 0.5 * qrs_threshold1
    end
    return r_peaks
end
This pseudocode follows the sequential steps from the original algorithm, with loops for each transformation and detection logic to handle adaptive thresholding and search-back.[1]
Software implementations are available in languages like MATLAB and Python for core filtering and detection. For example, a MATLAB snippet realizing the bandpass stage (the cascaded low-pass and high-pass integer-coefficient filters at 200 Hz) is:
% Low-pass filter: y(n) = 2y(n-1) - y(n-2) + x(n) - 2x(n-6) + x(n-12)  (gain 36, ~6-sample delay)
b_lp = [1 0 0 0 0 0 -2 0 0 0 0 0 1]; a_lp = [1 -2 1];
% High-pass filter: y(n) = 32x(n-16) - y(n-1) - x(n) + x(n-32)  (gain 32, 16-sample delay)
b_hp = [-1 zeros(1,15) 32 zeros(1,15) 1]; a_hp = [1 1];
bandpass = filter(b_hp, a_hp, filter(b_lp, a_lp, ecg_signal));
In Python, the NeuroKit2 library provides a Pan–Tompkins-based R-peak detector through its ecg_findpeaks function (method="pantompkins1985"), integrating the full pipeline.[12][13] Open-source examples also exist in py-ecg-detectors, which implements the original thresholding for QRS detection.[14]
For real-time execution, the algorithm employs causal filters to minimize latency, with a total delay of approximately 24 samples from filtering (6 for low-pass, 16 for high-pass, 2 for derivative), equating to about 67 ms at 360 Hz or 120 ms at 200 Hz.[1] Buffering is required for the moving-window integration (e.g., 30 samples) and search-back mechanism (up to 1.66 times the average RR interval, typically 200–400 ms), enabling low-latency processing under 50 ms in optimized high-sampling-rate variants on modern hardware.
Clinical and Research Uses
The Pan–Tompkins algorithm is widely employed in clinical settings for real-time heart rate monitoring through Holter devices, which facilitate ambulatory ECG analysis over extended periods to detect irregularities in cardiac rhythm.[15] In intensive care units (ICUs), it supports arrhythmia detection by identifying QRS complexes to reduce false alarms for conditions such as asystole, bradycardia, and tachycardia, enabling timely interventions.[16] For wearable ECG systems, modified versions of the algorithm process single-lead signals for continuous monitoring, achieving high efficiency in low-power environments suitable for remote patient care.[17][18]
In research, the algorithm serves as a benchmark for evaluating QRS detection on standard ECG datasets such as the MIT-BIH Arrhythmia Database and PhysioNet repositories, where it demonstrates sensitivity and positive predictivity exceeding 99% for normal sinus rhythms.[19][1] It is frequently utilized in over a thousand studies for feature extraction, particularly RR intervals, to train machine learning classifiers for arrhythmia identification and heartbeat categorization.[20][21] In atrial fibrillation (AFib) scenarios, while base accuracy may drop due to irregular rhythms, hybrid adaptations combining it with deep learning enhance detection rates to over 95%. As of 2025, integrations with deep learning have further improved its robustness for real-time detection in noisy wearable and telemedicine applications.[22][23][24]
The algorithm aligns with standards like ANSI/AAMI EC13 for ECG analyzers, where implementations are validated against test waveforms to ensure performance in cardiac monitoring systems.[25] Open-source tools such as the WFDB library on PhysioNet incorporate Pan–Tompkins-based detectors like ECGPUWAVE for reproducible validation and analysis of ECG data.[26] Post-1985 developments have extended its use to fetal ECG extraction from abdominal signals, with optimized thresholds improving QRS detection in noisy maternal-fetal recordings.[27] Adaptations also support stress testing protocols by handling motion artifacts in exercise-induced ECG variations.[20]
Limitations and Enhancements
Known Limitations
The Pan–Tompkins algorithm demonstrates notable sensitivity to noise, particularly in scenarios involving high-motion artifacts or poor electrode contact, as commonly encountered in ambulatory ECG monitoring. In such conditions, detection accuracy can fall below 90%, with empirical evaluations on low-quality databases simulating wearable device signals reporting rates as low as 74.49% for the original implementation due to artifacts and sudden amplitude fluctuations disrupting threshold-based decisions.[8][20]
In arrhythmia detection, the algorithm can struggle with premature beats or ventricular ectopic beats that deviate significantly in timing or morphology from normal sinus rhythm, potentially leading to false negatives.[28]
The algorithm also faces challenges with amplitude variability, struggling to detect very low-amplitude QRS complexes, despite the partial mitigation offered by the search back mechanism.[29]
Its fixed bandpass filter parameters, tuned to 5–15 Hz for enhancing adult QRS features while suppressing noise, assume standard adult ECG characteristics and perform less effectively on non-adult signals, such as fetal ECG, where QRS duration and frequency content differ significantly.[27]
Critiques of the algorithm's evaluation highlight a bias in original testing on the MIT-BIH arrhythmia database, which primarily includes clean, controlled signals yielding over 99% accuracy, whereas real-world diverse datasets show performance drops to approximately 95% or lower, underscoring limitations in generalizing to noisy, heterogeneous conditions.[19][30]
Additionally, although the algorithm is computationally efficient, its multi-stage filtering can still pose challenges for ultra-low-power embedded systems unless carefully optimized.[1]
Modern Variants
Modern variants of the Pan–Tompkins algorithm have emerged since the 2000s to enhance its robustness against noise, adaptivity to varying signal conditions, and integration with advanced computational techniques, addressing challenges in real-time ECG processing for portable and multi-channel applications.[31]
Wavelet-based hybrids combine the bandpass filtering and derivative stages of Pan–Tompkins with discrete wavelet transforms (DWT) to improve feature extraction in noisy environments, such as motion artifacts in ambulatory monitoring. For instance, the Complex-Pan-Tompkins-Wavelets (CPTW) approach replaces the traditional derivative filter with a fourth-scale wavelet approximation for cross-channel beat detection, achieving higher sensitivity (up to 99.8%) on MIT-BIH arrhythmia database signals corrupted by baseline wander and muscle noise. Similarly, a 2012 hybrid method integrates multi-resolution wavelet coefficients with Pan–Tompkins thresholding to delineate QRS boundaries more accurately, reducing false positives by 15-20% in low-signal-to-noise ratio (SNR) conditions compared to the original algorithm. These enhancements leverage wavelets' time-frequency localization to better isolate QRS complexes from overlapping T-waves or artifacts.[32][33]
Machine learning integrations utilize QRS locations detected by Pan–Tompkins as fiducial points or features for subsequent classification tasks, particularly in arrhythmia detection challenges. In the PhysioNet/Computing in Cardiology Challenge 2020, hybrid models employed Pan–Tompkins-derived R-R intervals and morphological features as inputs to convolutional neural networks (CNNs) and recurrent neural networks (RNNs), such as LSTMs, enabling end-to-end 12-lead ECG classification with F1-scores exceeding 0.85 for multiple cardiac pathologies. A 2025 enhanced model combines Pan–Tompkins QRS detection with CNN-BiLSTM architectures and attention mechanisms, extracting spatiotemporal features for beat classification and achieving 99.20% accuracy on the MIT-BIH Arrhythmia Database, outperforming standalone Pan–Tompkins by mitigating errors in irregular rhythms. These approaches treat Pan–Tompkins outputs as robust anchors, feeding them into deep networks to handle variability in patient cohorts without retraining the core detector.[34][35]
Adaptive variants incorporate least mean squares (LMS) algorithms to dynamically adjust filter parameters, such as bandpass cutoffs or thresholds, in response to real-time signal variations like electrode motion or respiratory interference. A 2017 adaptive filtering framework based on LMS modifies the Pan–Tompkins differentiator stage to track changing QRS slopes, yielding high detection rates (up to 99.68% sensitivity) on noisy MIT-BIH records. This real-time adjustment via LMS error minimization reduces computational overhead while maintaining low latency, suitable for battery-constrained devices.[36]
Multi-lead extensions adapt Pan–Tompkins to 12-lead ECG systems by incorporating signal quality indices (SQIs) to weight contributions from individual leads, prioritizing those with higher fidelity for collective QRS detection. A 2018 algorithm fuses outputs across leads using weighted averaging based on noise variance estimates, improving overall sensitivity to 99.5% on the multi-lead PTB Diagnostic ECG Database by suppressing artifacts in poor-quality channels like V1-V3 during exercise. Further, a 2025 compression-aware variant applies lead-specific Pan–Tompkins with quality-weighted thresholding, enabling sparse representation of 12-lead signals while preserving QRS fidelity for telecardiology applications.[37][38]
Recent developments up to 2025 emphasize AI-augmented versions tailored for wearables, where Pan–Tompkins serves as a lightweight preprocessor for edge-based deep learning. In ECG-enabled devices like those from Fitbit and similar platforms, hybrid AI models using Pan–Tompkins R-peak detection achieve over 99% accuracy in noisy wrist-worn data, as validated in remote monitoring studies for atrial fibrillation screening. Open-source evolutions, such as the Python py-ecg-detectors library, facilitate community-driven refinements and integration into consumer health technology.[39]