
Sample entropy

Sample entropy (SampEn) is a statistical measure designed to quantify the complexity and irregularity of time series data by estimating the probability that similar patterns within the series remain similar when extended by one additional observation, while excluding self-matches to minimize bias. Developed by Joshua S. Richman and J. Randall Moorman in 2000 as a refinement of approximate entropy (ApEn), SampEn addresses key limitations of ApEn, such as its dependence on record length and tendency to underestimate complexity due to self-match inclusion. The measure is parameterized by m (the length of compared segments, typically 2), r (a tolerance threshold, often 0.2 times the standard deviation of the time series), and N (the total number of data points), and is computed as SampEn(m, r, N) = −ln[A^m(r) / B^m(r)], where A^m(r) and B^m(r) represent the probabilities of matches for segments of length m+1 and m, respectively.

Unlike ApEn, which includes self-matches and exhibits inconsistent relative values across varying data lengths, SampEn provides more reliable and less biased estimates, particularly for short or noisy datasets common in real-world applications. It requires approximately half the computational effort of ApEn and aligns more closely with theoretical values, making it preferable for assessing signal regularity. These advantages stem from SampEn's exclusion of self-matches, which reduces the bias toward lower scores, and its relative consistency, where SampEn(m+1, r, N) ≤ SampEn(m, r, N).

SampEn has been widely applied in physiological signal analysis, such as evaluating heart rate variability to detect irregularities in cardiovascular dynamics and diagnose conditions like arrhythmias. Beyond biomedicine, it extends to fields including neuroscience for brain signal complexity, environmental science for temperature records (e.g., New York City weather patterns), econophysics for financial market irregularity, and engineering for machinery fault detection. Recent advancements include automated parameter selection methods, such as choosing r at the maximum of ApEn (MaxApEn) for optimal complexity detection, and extensions like cross-SampEn for measuring synchrony between paired time series.

Background and Motivation

Approximate Entropy

Approximate entropy (ApEn) is a statistical measure designed to quantify the regularity and unpredictability of time series data by evaluating the predictability of patterns within the sequence. It assesses the likelihood that similar subsequences of a given series will remain similar when extended by one additional point, thereby providing an indicator of the underlying system's irregularity. Developed as a practical tool for analyzing noisy, finite datasets, ApEn counts the frequency of template matches under a defined similarity criterion, offering insights into the degree of regularity or randomness in the data.

Introduced by Steven M. Pincus in 1991, ApEn was specifically motivated by the need to evaluate physiological time series, such as heart rate data, where traditional measures often fail to capture subtle changes in system dynamics. The method emerged in the context of distinguishing between deterministic chaotic processes, stochastic processes, and regular patterns in biomedical signals, enabling classification of complex systems even with limited data points. Pincus's original formulation emphasized its applicability to diverse settings, including both clinical and experimental data, with a recommendation for at least 1000 data values to ensure reliability.

The computation of ApEn begins with a time series u = \{u_1, u_2, \dots, u_N\}. Vectors of length m, known as templates, are formed as x_i^m = [u_i, u_{i+1}, \dots, u_{i+m-1}] for i = 1 to N - m + 1. The distance between any two templates x_i^m and x_j^m is calculated using the maximum absolute difference: d[x_i^m, x_j^m] = \max_{k=0}^{m-1} |u_{i+k} - u_{j+k}|. A similarity threshold r is then defined, typically set to 0.1 to 0.25 times the standard deviation of the series, to count matches where the distance is at most r. For each template x_i^m, C_i^m(r) represents the proportion of templates x_j^m (including i = j) that match within r, defined as C_i^m(r) = \frac{1}{N - m + 1} \# \{ j : d[x_i^m, x_j^m] \leq r \}. Similarly, C_i^{m+1}(r) is defined for templates of length m+1. The average log-likelihoods are then

\Phi^m(r) = \frac{1}{N - m + 1} \sum_{i=1}^{N - m + 1} \ln C_i^m(r)

and

\Phi^{m+1}(r) = \frac{1}{N - m} \sum_{i=1}^{N - m} \ln C_i^{m+1}(r)

(with the denominator for \Phi^{m+1} exactly N - m, though often approximated as N - m + 1 for large N). ApEn is computed as

\text{ApEn}(m, r, N) = \Phi^m(r) - \Phi^{m+1}(r).

This difference captures the rate of change in the log-probability of pattern similarity from length m to m+1. Common parameter choices include m = 2 and r scaled to the data's variability.

In interpretation, a higher ApEn value signifies greater irregularity and complexity in the time series, implying lower predictability and more random fluctuations, while a lower value indicates higher regularity and potential underlying order or periodicity. For instance, ApEn approaches zero for highly regular signals and a finite value (typically around 2 for standard parameters m = 2 and r = 0.2 \times SD) for white noise, reflecting maximal irregularity largely independent of N for sufficiently large datasets. This scale allows ApEn to differentiate subtle shifts in physiological processes, though it has inspired refinements like sample entropy to address certain estimation biases.
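To make this procedure concrete, the following Python sketch computes ApEn directly from the definitions above. It is a minimal illustration assuming NumPy; the function name apen and its defaults are ours, not taken from any standard library.

import numpy as np

def apen(u, m=2, r=None):
    """Approximate entropy following Pincus's definition (self-matches included)."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    if r is None:
        r = 0.2 * u.std()  # common choice: 0.2 x standard deviation of the series

    def phi(mm):
        # All N - mm + 1 overlapping templates of length mm.
        templates = np.array([u[i:i + mm] for i in range(N - mm + 1)])
        # Pairwise Chebyshev (maximum absolute) distances between templates.
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # C_i(r): fraction of templates within r of template i (self-match included).
        C = np.mean(dists <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

For roughly 1000 points of white noise with m = 2 and r = 0.2 times the standard deviation, this sketch should return a value near the finite white-noise benchmark of about 2 mentioned above.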

Development of Sample Entropy

Sample entropy (SampEn) emerged as a refinement to approximate entropy (ApEn), which was introduced by Pincus in 1991 as a means to quantify the regularity and complexity of short, noisy data, particularly in physiological contexts. ApEn, however, suffered from notable limitations that compromised its reliability for such applications. A primary issue was the inclusion of self-matches in the counting of similar templates, which introduced a systematic bias leading to lower-than-expected ApEn values and an overestimation of signal regularity, especially in short datasets. Additionally, ApEn lacked relative consistency, meaning that if one time series exhibited higher ApEn than another for a given embedding dimension m, this ordering did not necessarily hold when m was increased, resulting in non-monotonic behavior that hindered reliable comparisons. These shortcomings were particularly problematic for analyzing brief recordings, where ApEn's dependence on record length often required excessively long series—over 6,900 data points in some cases—to achieve acceptable precision.

To address these biases, Richman and Moorman proposed sample entropy in 2000, specifically designing it to enhance the analysis of cardiovascular and respiratory signals, such as heart interbeat intervals and chest wall volume fluctuations. The core innovation of SampEn lies in its exclusion of self-matches when evaluating similarity, which eliminates the artificial inflation of matches and reduces the bias toward underestimating complexity. This adjustment, combined with a template-independent, whole-series counting approach that requires only a single match for patterns of length m+1 to contribute to the entropy estimate, promotes relative consistency across parameter variations and improves discrimination in noisy environments. Computationally, SampEn is also more efficient, typically requiring about half the processing time of ApEn for equivalent evaluations.

Empirical validation in the original study demonstrated SampEn's advantages through comparisons on synthetic and physiological data. For random noise series, SampEn values aligned closely with theoretical expectations for lengths as short as 100 points and tolerance parameters r ≥ 0.03, whereas ApEn deviated significantly for lengths under 1,000 and r < 0.2. In mixed periodic-random signals, SampEn consistently ranked higher-complexity mixtures above lower ones across embedding dimensions, avoiding the crossovers observed in ApEn. For real-world cardiovascular data, such as heart interbeat intervals from patients with and without heart failure, SampEn yielded systematically lower values than ApEn under identical parameters, reflecting reduced bias, and provided superior separation between healthy and pathologic groups in noisy conditions. Similarly, in respiratory signals like tidal volume series, SampEn better distinguished synchronous versus asynchronous patterns in cross-entropy analyses. These improvements established SampEn as a more robust tool for short physiological time series, later inspiring extensions like multiscale sample entropy for capturing complexity across varying temporal scales.

Core Concepts and Definition

Mathematical Definition

Sample entropy, denoted as \operatorname{SampEn}(m, r, N), quantifies the complexity of a time series of length N by estimating the negative natural logarithm of the conditional probability that two similar patterns of length m in the series remain similar (within a tolerance r) when the pattern length is extended to m+1. This measure addresses biases in related entropy statistics, such as approximate entropy, by excluding self-matches in the counting of similar patterns.

Given a time series \{u(j): 1 \leq j \leq N\}, the embedded vectors of length m are formed as u_i^m = \{u(i + k): 0 \leq k \leq m - 1\}. The distance between two such vectors u_i^m and u_j^m (with i \neq j) is defined as the maximum absolute difference in their corresponding components: d[u_i^m, u_j^m] = \max_{0 \leq k \leq m-1} |u(i + k) - u(j + k)|. Two vectors are considered similar if d[u_i^m, u_j^m] \leq r. The sample entropy is then given by

\operatorname{SampEn}(m, r, N) = -\ln \left[ \frac{A^m(r)}{B^m(r)} \right],

where B^m(r) is the average probability of two vectors matching within tolerance r for length m, and A^m(r) is the corresponding probability for length m+1. Only the first N - m starting points are used for both lengths, so that every length-m template has a defined length-(m+1) extension. Specifically,

B^m(r) = \frac{1}{N - m} \sum_{i=1}^{N-m} \frac{1}{N - m - 1} \# \{ j: 1 \leq j \leq N - m, \, j \neq i, \, d[u_i^m, u_j^m] \leq r \},

and

A^m(r) = \frac{1}{N - m} \sum_{i=1}^{N-m} \frac{1}{N - m - 1} \# \{ j: 1 \leq j \leq N - m, \, j \neq i, \, d[u_i^{m+1}, u_j^{m+1}] \leq r \}.

The summations exclude self-matches (j = i) to reduce bias; the measure is infinite if no matches occur at length m+1 (A^m(r) = 0 with B^m(r) > 0) and undefined if B^m(r) = 0.

The parameters play key roles in the computation: m is the embedding dimension, typically set to 2 to capture short-term correlations without overfitting; r is the tolerance threshold, often chosen as 0.1 to 0.25 times the standard deviation of the time series to balance sensitivity to noise and pattern detection; and N is the length of the data, with a minimum of approximately 100 points recommended for reliable estimation, though 1000 or more points are preferable for finer discrimination.
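The definition above translates almost line for line into code. The following Python sketch (NumPy assumed; the helper name sampen is ours) mirrors the formulas for B^m(r) and A^m(r), using the same N - m starting points for both template lengths and excluding self-matches:

import numpy as np

def sampen(u, m=2, r=None):
    """Sample entropy per Richman and Moorman: -ln(A^m(r) / B^m(r))."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    if r is None:
        r = 0.2 * u.std()  # common default: 0.2 x standard deviation

    def n_matches(length):
        # Use the first N - m starting points for both lengths m and m+1,
        # so the normalization constants cancel in the ratio A/B.
        templates = np.array([u[i:i + length] for i in range(N - m)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        matches = dists <= r
        np.fill_diagonal(matches, False)  # exclude self-matches (j != i)
        return matches.sum()

    B = n_matches(m)
    A = n_matches(m + 1)
    if B == 0:
        return float("nan")   # undefined: no matches at length m
    if A == 0:
        return float("inf")   # infinite: no matches at length m+1
    return -np.log(A / B)

For white Gaussian noise with m = 2 and r = 0.2 times the standard deviation, the theoretical value is roughly 2.2, and the sketch above approaches it as N grows.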

Key Parameters

Sample entropy relies on three primary parameters: the embedding dimension m, the tolerance r, and the data length N. These parameters significantly influence the measure's ability to quantify irregularity in time series data, with their selection requiring careful consideration to balance sensitivity, reliability, and computational feasibility.

The embedding dimension m determines the length of the data vectors used to identify patterns within the time series, effectively reconstructing the phase space to capture underlying dynamics. Low values, such as m = 1 or 2, are commonly recommended for short datasets, as they suffice to detect basic patterns without requiring excessive data points; however, excessively low m can underestimate the system's complexity by failing to resolve higher-order dependencies, leading to artificially reduced entropy estimates. Higher m (e.g., 3 or more) enhances detection of intricate patterns but risks overfitting, particularly in noisy or limited data, where spurious matches inflate variance and bias results toward lower entropy. For physiological signals, m = 2 is often optimal, providing a practical trade-off.

The tolerance r, typically expressed as a multiple of the standard deviation \sigma of the data (i.e., r = k\sigma), sets the threshold for considering two vectors as similar, acting as a noise filter that controls the method's sensitivity to small fluctuations. An optimal range of 0.1\sigma to 0.25\sigma balances discrimination of true patterns against noise suppression, yielding stable and interpretable entropy values; values below 0.1\sigma amplify sensitivity to minor variations, increasing estimate variance and potential overestimation of irregularity, while r > 0.25\sigma reduces resolution, causing underestimation by grouping dissimilar patterns and diminishing the measure's ability to detect subtle changes. In applications like surface electromyography, different ranges (e.g., 0.13\sigma to 0.45\sigma for m = 2) may apply depending on signal characteristics.

The data length N represents the total number of points in the time series and is crucial for ensuring statistical reliability, as sample entropy estimates the probability of pattern matches, which becomes unreliable with insufficient samples. A minimum N > 10^m (or ideally 10^m to 20^m) is required for robust computation, particularly to avoid undefined values when no matching vectors are found (resulting in logarithmic singularities); shorter series (N < 100) introduce bias toward higher entropy due to sparse sampling, while N \geq 250 is recommended for physiological data to achieve low relative error.

These parameters are interdependent, with trade-offs dictating practical choices: increasing m necessitates proportionally larger N and potentially adjusted r to maintain match probabilities and prevent bias, as higher-dimensional embeddings dilute similarity counts unless r is scaled accordingly. For physiological time series, Richman and Moorman (2000) provide guidelines emphasizing m = 1–2, r = 0.1\sigma–0.5\sigma, and N \geq 250 to ensure consistent complexity assessment across noisy, irregular signals. In multiscale sample entropy, fixed m and r are applied to coarse-grained series, where the effective N decreases at higher scales, amplifying sensitivity to parameter choices. A quick sensitivity check, as sketched below, makes these trade-offs concrete.
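The following sketch sweeps r over several multiples of the standard deviation for a noisy periodic test signal. It assumes the sampen helper sketched in the Mathematical Definition section is in scope; the signal and parameter grid are illustrative only.

import numpy as np
# assumes sampen() from the Mathematical Definition sketch is in scope

rng = np.random.default_rng(42)
x = np.sin(0.1 * np.arange(2000)) + 0.3 * rng.standard_normal(2000)  # noisy sinusoid

sd = x.std()
for k in (0.05, 0.10, 0.15, 0.20, 0.25, 0.50):
    print(f"r = {k:.2f}*SD -> SampEn = {sampen(x, m=2, r=k * sd):.3f}")
# Very small r inflates the estimate and its variance (few matches);
# very large r groups dissimilar patterns and drives the estimate down.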

Computation Methods

Algorithmic Steps

The computation of sample entropy (SampEn) from a time series involves a systematic procedure that quantifies the likelihood of pattern similarity at consecutive embedding dimensions, excluding self-matches to reduce bias. The algorithm requires a time series of length N, denoted as \{x_1, x_2, \dots, x_N\}, and two key parameters: the embedding dimension m (typically 2) and the tolerance r (often 0.2 times the standard deviation of the series).

The first step forms m-dimensional vectors from the time series. Define u_i^m = [x_i, x_{i+1}, \dots, x_{i+m-1}] for i = 1 to M = N - m + 1. These vectors represent embedded templates of length m. The distance between two such vectors is the Chebyshev distance: d(u_i^m, u_j^m) = \max_{k=0}^{m-1} |x_{i+k} - x_{j+k}|.

Next, compute B^m(r), which estimates the probability of two vectors matching within tolerance r at dimension m. For each i = 1 to M, count the number of j \neq i (with j = 1 to M) such that d(u_i^m, u_j^m) \leq r; denote this count as C_i^m(r). Then B^m(r) = \frac{1}{M} \sum_{i=1}^M \frac{C_i^m(r)}{M-1}, where the division by M - 1 normalizes by the number of possible non-self matches and the outer average gives the overall match frequency (using all M = N - m + 1 templates here approximates the Richman–Moorman convention of N - m for large N). Excluding self-matches avoids counting trivially perfect matches that would bias the estimate toward regularity.

Repeat the process for dimension m+1 to obtain A^m(r). Form (m+1)-dimensional vectors u_i^{m+1} = [x_i, x_{i+1}, \dots, x_{i+m}] for i = 1 to M' = N - (m+1) + 1 = N - m. For each i = 1 to M', count j \neq i (up to M') where d(u_i^{m+1}, u_j^{m+1}) \leq r, yielding C_i^{m+1}(r), and compute A^m(r) = \frac{1}{M'} \sum_{i=1}^{M'} \frac{C_i^{m+1}(r)}{M'-1}. Equivalently, A^m(r) can be derived from the m-dimensional matches by additionally checking whether |x_{i+m} - x_{j+m}| \leq r.

Finally, sample entropy is calculated as \operatorname{SampEn}(m, r, N) = -\ln \left( \frac{A^m(r)}{B^m(r)} \right). If A^m(r) = 0 (indicating no matches at dimension m+1) and B^m(r) > 0, \operatorname{SampEn}(m, r, N) is infinite, reflecting maximal irregularity. If B^m(r) = 0, the value is undefined. For clarity, the algorithm can be outlined in pseudocode:
Input: time series x[1..N], parameters m, r
M = N - m + 1
Initialize B = 0, A = 0

// Compute B^m(r)
for i = 1 to M
    count_B = 0
    for j = 1 to M (j ≠ i)
        dist = max_{k=0 to m-1} |x[i+k] - x[j+k]|
        if dist ≤ r
            count_B += 1
    B += count_B / (M - 1)
B = B / M

// Compute A^m(r) similarly for m+1
M_prime = N - m
for i = 1 to M_prime
    count_A = 0
    for j = 1 to M_prime (j ≠ i)
        dist_m1 = max_{k=0 to m} |x[i+k] - x[j+k]|
        if dist_m1 ≤ r
            count_A += 1
    A += count_A / (M_prime - 1)
A = A / M_prime

// Note: if A == 0 and B > 0, SampEn = +∞ (maximal irregularity);
// if B == 0, SampEn is undefined
SampEn = -ln(A / B)

Output: SampEn
This implementation has time complexity O((N-m)^2), making it suitable for moderate-length series. For multiscale sample entropy, the input series is first coarse-grained by averaging non-overlapping windows before applying these steps.

Handling Short Time Series

When analyzing short time series with sample entropy, the limited data length N introduces notable computational challenges, primarily an increase in estimation variance arising from the non-independence of overlapping templates and the possibility of zero matches between vectors, which results in a zero denominator and renders the entropy value undefined. A general guideline recommends a minimum N \approx 10^m, where m is the embedding dimension, implying N \geq 100 for typical m = 2; yet, in fields like physiological signal analysis, datasets frequently have N < 300, such as in heart rate variability or center-of-pressure recordings.

To mitigate these issues, practitioners often report sample entropy as undefined in cases of a zero denominator and apply adjustments like bias-reduced counting (inherent to the method's exclusion of self-matches) or bootstrapping to derive confidence intervals and quantify variability, which is particularly effective for segments as short as 5,000 beats in ECG data. Simulations demonstrate that relative errors can exceed 30% for N = 15 but drop below 3% for N > 100 with tolerance r \geq 0.03 times the signal standard deviation. Empirical studies establish reliability thresholds, such as N \geq 200 for m = 2 and r = 0.2 times the standard deviation, where maximum errors remain under 5% for N > 240 in randomized data with match probability around 0.368; shorter lengths like N = 600 yield consistent but less discriminative results in balance-related signals compared to N = 1,200. For extremely short series, fallback measures include approximate entropy, which remains defined for smaller N albeit with higher bias, or permutation entropy, designed for brevity. A sketch of the bootstrapping approach follows.
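As a rough sketch of the bootstrapping strategy, the following Python code resamples moving blocks of the series and reports a percentile confidence interval. The block length, replicate count, and resampling scheme are illustrative assumptions rather than prescriptions from the cited studies, and the sampen helper from the Mathematical Definition section is assumed to be in scope.

import numpy as np
# assumes sampen() from the Mathematical Definition sketch is in scope

def sampen_bootstrap_ci(x, m=2, r=None, n_boot=200, block=50, seed=0):
    """Point estimate and 95% percentile CI via a simple moving-block bootstrap."""
    x = np.asarray(x, dtype=float)  # requires len(x) >= block
    rng = np.random.default_rng(seed)
    n_blocks = int(np.ceil(len(x) / block))
    estimates = []
    for _ in range(n_boot):
        starts = rng.integers(0, len(x) - block + 1, size=n_blocks)
        resampled = np.concatenate([x[s:s + block] for s in starts])[:len(x)]
        est = sampen(resampled, m=m, r=r)
        if np.isfinite(est):           # skip undefined or infinite replicates
            estimates.append(est)
    if not estimates:
        raise ValueError("all bootstrap replicates were undefined or infinite")
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return sampen(x, m=m, r=r), (lo, hi)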

Multiscale Extensions

Principles of Multiscale Analysis

Single-scale sample entropy primarily quantifies short-term irregularities and correlations in time series data, but it fails to capture the long-range correlations that characterize complex systems, such as physiological signals. This limitation arises because single-scale analysis does not account for the multiple temporal scales inherent in healthy physiologic processes, often assigning higher entropy to white, uncorrelated noise than to structured healthy signals with persistent correlations. To address this, the multiscale entropy concept was introduced by Costa et al. in 2002, enabling the assessment of signal complexity across a range of temporal scales \tau by generating successive coarse-grained time series from the original data. This method evaluates how entropy varies with scale, providing a fuller picture of the underlying dynamics in systems where information is distributed over multiple resolutions.

The core of this approach is the coarse-graining procedure, which averages non-overlapping segments of the original series to produce scaled versions that emphasize longer-term trends while filtering out finer fluctuations. For a time series \{x_i\}_{i=1}^N, the coarse-grained series \{y_j^{(\tau)}\} at scale factor \tau is constructed as follows:

y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau + 1}^{j\tau} x_i, \quad j = 1, 2, \dots, \left\lfloor \frac{N}{\tau} \right\rfloor

This process reduces the effective length of the series to \lfloor N / \tau \rfloor, simulating a lower sampling rate that retains long-term correlations.

The primary motivation for multiscale analysis lies in its ability to reveal differences in complexity between healthy and diseased physiological systems; for instance, heart rate variability in healthy individuals exhibits elevated entropy across multiple scales due to adaptive, long-range correlations, whereas pathologic conditions show diminished complexity at coarser scales, reflecting impaired regulation.
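The coarse-graining step amounts to a block average and is only a few lines of code; the following Python sketch (helper name ours, NumPy assumed) implements the formula above:

import numpy as np

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau, per the formula above."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau                      # floor(N / tau) coarse-grained points
    return x[:n * tau].reshape(n, tau).mean(axis=1)

# Example: coarse_grain(np.arange(10), 3) -> array([1., 4., 7.])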

Multiscale Sample Entropy Formulation

Multiscale sample entropy (MSE) extends sample entropy to analyze the complexity of time series across multiple temporal scales by applying the standard SampEn to successively coarse-grained versions of the original series. For a given scale factor \tau, the coarse-grained series y_j^{(\tau)} is formed by averaging the original data points in non-overlapping windows of length \tau, resulting in a reduced effective length N_\tau = \lfloor N / \tau \rfloor, where N is the length of the original series. Then, MSE(\tau) is computed as the SampEn of this coarse-grained series, using the embedding dimension m (typically fixed at 2) and a tolerance r on the order of 0.15 \times \sigma, taken either as a fixed fraction of the original series' standard deviation (as in the original formulation) or rescaled to the standard deviation \sigma_{y^{(\tau)}} of each coarse-grained series.

The adaptation of the SampEn formula to each coarse-grained series y^{(\tau)} follows the standard definition but accounts for the shortened length N_\tau, which decreases as \tau increases and can affect the reliability of entropy estimates for large \tau. Specifically, the conditional probability ratio in SampEn is evaluated over templates of length m and m+1 within the coarse-grained data, with the logarithm yielding MSE(\tau) = -\ln \left( A^m(r) / B^m(r) \right), where A^m(r) and B^m(r) are the probabilities of matches within tolerance r for patterns of length m+1 and m, respectively, now based on N_\tau points. Rescaling r to the local standard deviation keeps the similarity criterion consistent in relative terms across scales, preserving the measure's sensitivity to relative irregularities.

To obtain the full MSE profile, SampEn is calculated for \tau = 1 (the original series) up to a maximum scale \tau_{\max} (commonly 20 for series with N \approx 10^4 to 10^5), ensuring N_\tau \geq 10 to maintain statistical validity of the entropy estimate. The resulting values are plotted as MSE(\tau) versus \tau, revealing scale-dependent complexity: for instance, healthy physiological signals often exhibit higher entropy at multiple scales compared to pathologic ones, visualizing how long-range correlations influence overall irregularity (a code sketch of this procedure follows at the end of this section).

Variants of MSE address limitations in the original non-overlapping coarse-graining, which can introduce variance in estimates by discarding rapid fluctuations and reducing data points sharply at higher scales. The refined MSE (RMSE) modifies the procedure by incorporating overlapping windows or filtering to refine the non-overlapping bins, thereby reducing estimation variance while better preserving fast-scale dynamics; for example, using a moving average over overlapping segments yields multiple coarse-grained realizations per scale, leading to more stable SampEn values. The choice between original non-overlapping MSE and refined overlapping variants impacts variance, with refined methods showing lower variability in applications to short or noisy series, though at the cost of increased computational demand.
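Putting the pieces together, an MSE profile can be sketched as below. It reuses the sampen and coarse_grain helpers from earlier sections; rescaling r to each coarse-grained series follows one of the two conventions described above, with the original fixed-r convention noted in a comment.

import numpy as np
# assumes sampen() and coarse_grain() from earlier sketches are in scope

def mse_profile(x, m=2, k=0.15, tau_max=20):
    """SampEn of the coarse-grained series for tau = 1..tau_max."""
    x = np.asarray(x, dtype=float)
    profile = {}
    for tau in range(1, tau_max + 1):
        y = coarse_grain(x, tau)
        if len(y) < 10:                    # too few points for a valid estimate
            break
        # r rescaled to the local SD (one convention); for the original
        # formulation use a fixed r = k * x.std() at every scale instead.
        profile[tau] = sampen(y, m=m, r=k * y.std())
    return profile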

Properties and Interpretations

Sample entropy (SampEn) addresses key limitations of its precursor, approximate entropy (ApEn), primarily through the exclusion of self-matches in template comparisons, which reduces estimation bias and yields systematically higher entropy values than self-match-inclusive counting would produce. This bias reduction in SampEn enhances the separation between regular and irregular time series, providing more reliable discrimination than ApEn, where self-matches inflate regularity estimates in short datasets. Additionally, SampEn exhibits relative consistency, satisfying the property that SampEn(m+1, r, N) ≤ SampEn(m, r, N) for embedding dimension m, tolerance r, and data length N, unlike ApEn's frequent non-monotonic behavior that undermines comparability across conditions.

The multiscale extension of sample entropy (MSE) further amplifies these strengths by evaluating complexity across multiple temporal scales, enabling distinctions between healthy and pathologic physiological dynamics that single-scale SampEn often fails to capture. For instance, MSE reveals higher areas under the entropy profile for healthy aging processes compared to pathologic states like congestive heart failure, reflecting preserved multiscale complexity in adaptive systems.

Compared to other entropy measures, SampEn demonstrates greater robustness to noise, performing reliably even with moderate observational errors that render Kolmogorov-Sinai entropy estimates unstable. Unlike Shannon entropy, which requires an explicit probability distribution and examines data points independently, SampEn is distribution-free, relying instead on pattern similarities within the series for a more direct assessment of serial irregularity.

Limitations and Considerations

Sample entropy (SampEn) exhibits significant sensitivity to its key parameters—embedding dimension m, tolerance r, and data length N—with no universally optimal values, necessitating domain-specific tuning to ensure reliable results. For instance, increasing m or r generally decreases SampEn values, but the exact impact varies by signal characteristics, such as noise levels or underlying dynamics, potentially leading to inconsistent interpretations across studies if parameters are not matched. This parameter dependence arises because SampEn relies on pattern matching within the tolerance window, where suboptimal choices can either overlook subtle patterns (small r) or mask differences (large r), as demonstrated in analyses of physiological signals like center-of-pressure data.

A prominent limitation is the bias and high variance in short time series, particularly when N < 300, where overlapping templates reduce the effective number of independent matches, inflating uncertainty and underestimating entropy for regular signals. In multiscale sample entropy (MSE), this issue intensifies at larger scales \tau, as the coarse-grained series length N_\tau = N / \tau diminishes; when N_\tau < m + 1, entropy estimates become undefined or highly unreliable due to insufficient data points for pattern detection. These constraints highlight SampEn's unsuitability for brief recordings common in real-time biomedical monitoring, often requiring data augmentation or alternative metrics to mitigate variance.

Interpretively, SampEn primarily quantifies signal irregularity rather than intrinsic complexity, yielding high values for both structured chaotic processes and random noise, which can mislead analyses in noisy environments by conflating unpredictability with sophistication. For example, white noise consistently produces elevated SampEn, while periodic signals with superimposed noise may appear more "complex" than intended, underscoring the need for complementary measures like spectral analysis to distinguish irregularity from true dynamical richness. In non-stationary data, such as trending physiological signals, SampEn often overestimates irregularity because the trend inflates the standard deviation used to set r, compromising validity without preprocessing like detrending or windowing.

Post-2000 critiques have emphasized these challenges, noting SampEn's quadratic computational complexity O(N^2) as a barrier for large datasets, prompting optimizations or alternatives like permutation entropy (PE), which offers robustness to non-stationarity and parameter sensitivity while maintaining low overhead. PE, by focusing on ordinal patterns rather than amplitudes, avoids embedding-dimension issues and provides consistent complexity estimates in short or noisy series, making it preferable in scenarios where SampEn's assumptions fail, such as real-time neural signal processing. Despite SampEn's consistency advantages over ApEn, these limitations underscore the importance of contextual validation in applications.

Applications

Biomedical and Physiological Signals

Sample entropy (SampEn) has been extensively applied to heart rate variability (HRV) analysis to quantify the complexity of cardiac dynamics, revealing reduced physiological adaptability in pathological conditions. In patients with congestive heart failure (CHF), SampEn values are significantly lower compared to healthy individuals, indicating diminished HRV irregularity and increased predictability of heartbeats, which correlates with disease severity and poorer prognosis. Multiscale sample entropy (MSE), an extension that assesses complexity across temporal scales, further demonstrates scale-specific losses in HRV among aging populations; for instance, healthy young adults exhibit higher entropy at larger scales than elderly subjects or those with CHF, highlighting progressive deterioration in long-range correlations.

In electroencephalography (EEG) signals, SampEn and MSE provide insights into neural complexity alterations in neurological disorders. For Alzheimer's disease (AD), MSE reveals decreased entropy at short temporal scales (e.g., scales 1–5) in frontal brain regions, reflecting disrupted local neural interactions, while longer scales show compensatory increases that correlate negatively with cognitive impairment as measured by Mini-Mental State Examination scores. During anesthesia, SampEn of EEG signals decreases progressively with deepening levels of sedation under agents like sevoflurane and isoflurane, effectively tracking the transition to unconsciousness and detecting burst suppression patterns with high prediction probability (Pk ≈ 0.8).

Applications extend to respiratory signals, where SampEn of inter-breath intervals aids in detecting obstructive sleep apnea (OSA) by identifying reduced variability during apneic events compared to normal breathing patterns. Post-2010 studies have utilized SampEn trends in physiological signals for sepsis prognosis; for example, lower sample entropy in oxygen saturation (SpO₂) time series is associated with increased mortality risk in critically ill septic patients, serving as an independent predictor of survival when combined with clinical scores like SOFA (AUC ≈ 0.70). Similarly, MSE applied to heart rate and blood pressure dynamics enables early sepsis detection up to four hours prior to clinical onset, with entropy features enhancing predictive models (AUROC up to 0.78).

Clinically, SampEn has contributed to HRV analysis tools for diagnostic purposes, assessing autonomic function in cardiovascular risk stratification. In the 2020s, SampEn from HRV has emerged as a biomarker in wearable technologies for real-time stress monitoring, leveraging photoplethysmography-derived signals to quantify acute vagal responses and emotional states with machine learning integration.

Engineering and Physical Systems

Sample entropy has been applied to vibration signals in mechanical systems for early fault detection in rolling bearings, where increasing entropy values indicate growing irregularity as damage progresses toward failure. In analyses of bearing vibration data, sample entropy effectively distinguishes healthy from faulty states by quantifying the loss of predictability in the signals, with higher entropy observed in deteriorating components prior to complete breakdown. For instance, studies on roller bearings demonstrate that sample entropy features extracted from vibration signals enable accurate classification of fault severities, outperforming traditional statistical methods in noisy environments. Multiscale sample entropy extends this to turbine monitoring, capturing multi-frequency noise components to detect subtle degradations in rotating machinery like wind turbines, where coarse-graining reveals entropy changes across scales indicative of emerging faults.

In financial time series analysis, sample entropy serves as a measure of market irregularity and volatility, with elevated values signaling periods of heightened unpredictability. Applications to stock exchange indices show that sample entropy increases during crisis periods, reflecting greater randomness in price fluctuations compared to stable market phases. This entropy-based approach has been used to model and predict volatility in international markets, providing insights into the complexity of economic signals beyond simple variance metrics.

In physical systems, modified multiscale sample entropy has been employed in laser speckle contrast imaging to assess signal complexity related to dynamic processes like fluid flow, with 2015 studies adapting the algorithm for two-dimensional image analysis to improve perfusion monitoring accuracy. For acoustic signal classification, hierarchical sample entropy variants analyze ship-radiated noise, enabling discrimination between vessel types and operational states through entropy-based feature extraction that captures hierarchical structures in underwater sound patterns, as demonstrated in post-2020 classification frameworks.

Emerging applications include chaos detection in control systems, where sample entropy quantifies the transition from periodic to chaotic regimes in nonlinear dynamics, aiding stability analysis in engineering feedback loops like semiconductor lasers with optical feedback. In climate data processing, sample entropy evaluates the complexity of weather patterns post-2010, such as in global temperature and rainfall time series, revealing decreases in entropy that suggest shifts toward more ordered, potentially extreme regimes in El Niño forecasting and radiative balance studies.

Practical Implementation

Numerical Algorithms

The standard algorithm for computing sample entropy involves forming overlapping embedding vectors of length m and m+1 from a time series of length N, calculating the maximum (Chebyshev) distance between all pairs of these vectors, and counting the number of pairs within a tolerance r to estimate the quantities A and B, yielding SampEn(m, r, N) = -\ln(A/B). This pairwise distance computation results in a time complexity of O(N^2), as it requires evaluating up to N(N-1)/2 distances, making it suitable for time series with N < 5000 on standard hardware without excessive runtime.

To address the quadratic complexity for longer series, optimizations leverage data structures and efficient implementations. Algorithms using k-d trees for nearest-neighbor searches reduce the time complexity to O(N \log N) by organizing embedding vectors in a balanced tree and pruning unnecessary distance calculations during matching. Vectorized implementations in languages like MATLAB and Python exploit array operations to parallelize distance computations within the CPU, achieving speedups of 10-100x over naive loops for moderate N, particularly when using libraries such as NumPy for broadcasting pairwise differences. Sliding window techniques, inherent to the overlapping embedding process, can be further optimized for incremental updates in streaming or windowed analyses, though they retain O(N^2) per window unless combined with approximate matching.

Parallelization enhances scalability for large datasets. Multithreaded approaches divide the similarity matching across CPU cores, using dynamic task scheduling to balance workloads and achieve speedups of up to 6.7x on 12 threads for signals up to N = 10^5, while preserving the underlying O(N^2) complexity but reducing wall-clock time through concurrency. GPU acceleration via OpenCL or CUDA parallelizes the distance and counting operations across thousands of threads, yielding 10-50x speedups for N > 10^4 compared to single-threaded CPU execution, ideal for high-throughput applications. For multiscale sample entropy, efficiency is improved by precomputing the coarse-grained series at each scale factor (averaging every \tau points to reduce the effective length to N/\tau) before applying the entropy calculation, avoiding redundant embeddings across scales.

Numerical stability requires careful handling of edge cases. When r approaches zero, few or no vector pairs match, leading to A = B = 0 and an undefined \ln(A/B); thus, r should be at least 0.1 times the signal standard deviation to ensure 10-20 matches on average for reliable estimation. Floating-point precision issues arise in the logarithm of ratios near 1 (low entropy) or 0 (high entropy), potentially amplifying errors in double-precision arithmetic; mitigation involves using higher-precision libraries or clamping extreme ratios, though standard 64-bit floats suffice for most physiological signals with N \geq 100.
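As a concrete instance of the vectorized approach, the following NumPy sketch computes both match counts from a single broadcasted distance matrix, reusing the length-m matches to obtain the length-(m+1) matches as described in the algorithmic steps. It is an illustrative, unbenchmarked implementation with O(N^2) time and memory.

import numpy as np

def sampen_vectorized(x, m=2, r=None):
    """SampEn via NumPy broadcasting; no explicit Python loops over pairs."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if r is None:
        r = 0.2 * x.std()
    M = N - m                                    # start points usable at both lengths
    idx = np.arange(M)[:, None] + np.arange(m)   # (M, m) matrix of template indices
    t = x[idx]
    # Pairwise Chebyshev distances between all length-m templates.
    match_m = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2) <= r
    np.fill_diagonal(match_m, False)             # exclude self-matches
    # Length-(m+1) templates match iff the length-m prefixes match AND the
    # extension points x[i+m], x[j+m] are within r of each other.
    ext = x[m:N]                                 # extension point for each template
    match_m1 = match_m & (np.abs(ext[:, None] - ext[None, :]) <= r)
    B, A = match_m.sum(), match_m1.sum()
    if B == 0:
        return float("nan")
    if A == 0:
        return float("inf")
    return -np.log(A / B)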

Available Software Tools

Several software tools and libraries facilitate the computation of sample entropy (SampEn) and its multiscale variants, catering to researchers in biomedical signal processing and physiological time-series analysis. These implementations vary in accessibility, from integrated functions in popular numerical environments to open-source packages on repositories like GitHub, enabling efficient calculation without deriving algorithms from scratch.

In MATLAB, the open-source EntropyHub toolbox provides comprehensive functions for multiscale sample entropy (MSE), including refined composite MSE, with support for univariate and multivariate data; it requires the Signal Processing Toolbox for full functionality and was updated to version 2.0 in 2024, incorporating optimizations for larger datasets. Additional File Exchange contributions, such as the Multiscale Sample Entropy implementation, offer coarse-graining methods based on seminal MSE formulations.

Python users can leverage libraries like nolds, a NumPy-based package that implements sample entropy for one-dimensional time series, emphasizing educational resources alongside computation. The AntroPy package provides time-efficient algorithms for SampEn and approximate entropy, suitable for EEG and physiological time-series analysis. EntropyHub's Python interface extends these to multiscale and cross-entropy variants, with open-source code available on GitHub for customization and examples. NeuroKit2 includes MSE functions, supporting composite and refined variants for biomedical applications.

R packages such as nonlinearTseries offer the sampleEntropy function for univariate time series, returning entropy values across embedding dimensions and tolerances, with integration for nonlinear time-series analysis. The fractal package computes SampEn as part of fractal dimension and complexity measures, while a 2019 comparative study highlights pracma and tseriesChaos as alternatives for multiscale extensions, noting their handling of parameter sensitivity.

For embedded systems, C implementations like the PhysioNet Sample Entropy Estimation toolkit provide portable code for real-time computation on resource-constrained devices, originally developed in 2004 and verified against reference implementations. Java adaptations are less common but can be derived from such cores for cross-platform use. Online tools remain limited for SampEn, with PhysioNet offering downloadable executables rather than web-based calculators; as of version 2.0 (2024), EntropyHub also includes expanded multivariate MSE functions, enhancing accessibility for multi-channel recordings.
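As a hedged illustration of typical usage, the snippet below calls two of the Python packages mentioned above. Package APIs evolve, so the argument names shown (emb_dim in nolds, order in AntroPy) should be checked against each project's current documentation.

import numpy as np

x = np.random.default_rng(0).standard_normal(1000)

# nolds: emb_dim plays the role of the embedding dimension m.
import nolds
print(nolds.sampen(x, emb_dim=2))

# AntroPy: order plays the role of m; distances use the Chebyshev metric.
import antropy
print(antropy.sample_entropy(x, order=2))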

References

  1. Physiological time-series analysis using approximate entropy and sample entropy. Joshua S. Richman and J. Randall Moorman.
  2. Approximate Entropy and Sample Entropy: A Comprehensive Tutorial.
  3. Approximate entropy as a measure of system complexity. PNAS, March 15, 1991.
  4. Influence of Parameter Selection in Fixed Sample Entropy of Surface ... (study of embedding dimension m, tolerance r, and moving-window size).
  5. The appropriate use of approximate entropy and sample ... NIH.
  6. On the use of approximate entropy and sample entropy with centre ... December 12, 2018.
  7. Bootstrapping Sample Entropy for Short Time Series (1-Hour ECG Segments).
  8. Sample Entropy [PDF]. WaveMetrics.
  9. The Multiscale Entropy Algorithm and Its Variants: A Review. MDPI.
  10. Multiscale entropy analysis of complex physiologic time series.
  11. Multiscale entropy: A tool for understanding the complexity of ... (postural CoP time series).
  12. Sample Entropy Computation on Signals with Missing Values. MDPI, August 19, 2024.
  13. Evaluation of physiologic complexity in time series using ... October 17, 2012.
  14. Fast computation of sample entropy and approximate entropy in ...
  15. Permutation Entropy and Its Main Biomedical and Econophysics ...
  16. Characterization of Heart Rate Variability loss with aging and heart ...
  17. Assessment of EEG dynamical complexity in Alzheimer's disease ...
  18. EEG entropy measures in anesthesia. Frontiers.
  19. Development and validation of a sample entropy-based method to ... August 17, 2020.
  20. FDA Clears the Inmedix® CloudHRV™ System for Accurate Heart ... January 23, 2025.
  21. Photoplethysmography-based HRV analysis and machine learning for real-time stress quantification in mental health applications. April 3, 2025.
  22. Multiscale Entropies. EntropyHub 2.0 documentation.
  23. EntropyHub. MATLAB Central File Exchange, MathWorks.
  24. Multiscale Sample Entropy. MATLAB Central File Exchange.
  25. nolds module. Nolds 0.6.2 documentation, GitHub Pages.
  26. AntroPy: entropy and complexity of (EEG) time-series in Python.
  27. Python Functions. EntropyHub 2.0 documentation.
  28. EntropyHub. PyPI.
  29. Source code for neurokit2.complexity.entropy_multiscale.
  30. sampleEntropy function. RDocumentation.
  31. sampleEntropy: Sample Entropy (also known as Kolgomorov-Sinai ...).
  32. A comprehensive comparison and overview of R packages for ... NIH.
  33. Sample Entropy Estimation v1.0.0. PhysioNet, November 8, 2004.
  34. Sample Entropy estimation using sampen. PhysioNet.
  35. Latest Updates. EntropyHub 2.0 documentation.