
CUSUM

The Cumulative Sum (CUSUM) chart is a sequential statistical technique used to monitor processes and detect small, persistent shifts in their parameters, such as the mean, by accumulating deviations from a target value over time. Developed by E. S. Page in 1954 as a method for continuous inspection schemes in quality control, it addresses limitations of earlier Shewhart control charts by providing greater sensitivity to gradual changes rather than abrupt ones. CUSUM operates by computing two cumulative sums, one for upward shifts (the upper CUSUM) and one for downward shifts (the lower CUSUM), using the recursions S_i^+ = \max(0, S_{i-1}^+ + ( \bar{x}_i - \mu_0 - k )) and S_i^- = \max(0, S_{i-1}^- + ( \mu_0 - k - \bar{x}_i )), where \bar{x}_i is the sample mean, \mu_0 is the target mean, and k is a reference value typically set to half the shift size to be detected (often 0.5 standard deviations). These sums are plotted against time; if either exceeds a decision interval h (commonly 4 or 5 standard deviations), the chart signals an out-of-control condition. The method can be implemented in tabular form for precise monitoring or via a V-mask overlay, a visual aid introduced by G. A. Barnard in 1959 to interpret drifts. Originally applied in industrial quality control to improve detection of shifts as small as 1 standard deviation, where Shewhart charts require larger samples, CUSUM has since expanded to diverse fields including healthcare, finance, and environmental monitoring. In surgical quality monitoring, for instance, risk-adjusted CUSUM charts track cumulative outcomes like complication rates or mortality, rewarding acceptable performance and penalizing deviations to enable intervention without fixed sample sizes. Its advantages include enhanced power for small shifts (e.g., average run length reductions from hundreds to tens of samples) and adaptability to various data distributions through modifications like standardized or weighted variants.

Fundamentals

Definition and Purpose

The Cumulative Sum (CUSUM) chart is a sequential statistical technique designed to monitor a process by calculating and plotting the cumulative sum of deviations between observed values and a target value. This accumulation allows for the detection of gradual or small shifts in the process mean over time, making it particularly useful in applications where maintaining consistent performance is critical. The primary purpose of the CUSUM chart is to enable early identification of persistent changes in a process, which might otherwise go unnoticed in standard monitoring methods focused on individual outliers. By emphasizing sustained deviations rather than isolated anomalies, it supports proactive adjustments to prevent defects and ensure product quality, especially in high-volume production environments. Developed by E. S. Page in 1954 as part of continuous inspection schemes, the CUSUM method draws inspiration from the sequential probability ratio test (SPRT), adapting its principles for ongoing data analysis. Beyond manufacturing, it serves as a tool for change point detection in diverse areas, including environmental monitoring and financial time series analysis. CUSUM monitoring can be configured as one-sided to target shifts in a single direction, such as increases or decreases, or two-sided to detect changes in either direction without altering the core accumulation process.

Historical Development

The cumulative sum (CUSUM) technique was first developed by E. S. Page in his 1954 paper "Continuous Inspection Schemes," published in Biometrika, where he proposed it as a method for monitoring process shifts in industrial quality control. Page's work was directly inspired by Abraham Wald's sequential probability ratio test, which had been formulated during World War II to optimize decision-making under uncertainty in military applications, such as quality inspection of munitions. This foundational contribution shifted focus from traditional Shewhart charts, which were less sensitive to small changes, toward cumulative approaches that accumulate evidence over time for more efficient detection. In 1959, G. A. Barnard introduced the V-mask as a graphical aid for interpreting CUSUM charts, allowing practitioners to visualize decision boundaries for signaling process deviations by overlaying a V-shaped mask on the cumulative sum plot. Barnard's innovation, detailed in his paper "Control Charts and Stochastic Processes" in the Journal of the Royal Statistical Society, Series B, enhanced the practicality of CUSUM for both upward and downward shifts, making it more accessible for routine industrial use. By the 1960s, CUSUM gained widespread adoption in quality control practices, as evidenced by its integration into statistical process monitoring discussions and applications in industries such as chemicals, building on early implementations in manufacturing processes. Extensions and standardization efforts in the late 1970s and 1980s included its recognition in national standards guidelines, with further formalization in international standards such as ISO 7870-4 (2011) for cumulative sum charts. Post-2000 developments have emphasized computational integration for real-time monitoring, with CUSUM algorithms embedded in statistical software such as R and Minitab to enable automated charting and analysis in dynamic environments. Recent adaptations in the 2020s include hybrids combining CUSUM with machine learning techniques, such as machine-learning-adjusted sequential CUSUM models for enhanced monitoring of high-dimensional data, though these build directly on the core framework without altering its foundational principles.

Mathematical Foundations

The CUSUM Statistic

The cumulative sum (CUSUM) statistic forms the basis of the CUSUM chart, accumulating sequential deviations from a target process mean to enhance sensitivity to small, sustained shifts. Introduced by Page as a sequential testing procedure, it computes two one-sided statistics: an upper statistic to detect increases in the mean and a lower statistic to detect decreases. The upper CUSUM statistic is given by S_i^{+} = \max\left(0, S_{i-1}^{+} + (\bar{x}_i - \mu_0) - k \right), where \bar{x}_i denotes the i-th sample mean from the process (or individual observation x_i when monitoring individuals), \mu_0 is the target (in-control) mean, and k > 0 is the reference value that determines the magnitude of shift to which the chart is tuned. The reference value is commonly chosen as k = \frac{\delta \sigma}{2}, with \delta representing the standardized shift size to detect (often around 1 or 2) and \sigma the standard deviation of the sample mean (or process standard deviation for individuals). Symmetrically, the lower CUSUM statistic is S_i^{-} = \max\left(0, S_{i-1}^{-} + (\mu_0 - \bar{x}_i) - k \right), which can be rewritten as S_i^{-} = \max\left(0, S_{i-1}^{-} - (\bar{x}_i - \mu_0) - k \right). This formulation ensures that deviations below the target accumulate only when they exceed the reference value k in magnitude, mirroring the upper statistic's accumulation of positive deviations. Both statistics are initialized at S_0^{+} = S_0^{-} = 0. The CUSUM statistic assumes that the sample means \bar{x}_i (or observations x_i) are independent and identically distributed according to N(\mu, \sigma^2) with known variance \sigma^2. For processes with unknown variance, the data can be standardized by dividing the deviations by an estimate of \sigma, allowing the same recursive formulas to apply in standardized units where k and shifts are expressed relative to \sigma.
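The recursions above translate directly into a short computation. The following Python sketch is a minimal illustration rather than any particular library's API; the function name and arguments are chosen here for exposition, and it assumes the target mean, the reference value k, and the decision interval h are given in the same units as the data.

```python
import numpy as np

def cusum(x, mu0, k, h):
    """Tabular CUSUM: returns the upper/lower sums and the indices of any signals."""
    x = np.asarray(x, dtype=float)
    s_plus = np.zeros(len(x))   # upper statistic S_i^+
    s_minus = np.zeros(len(x))  # lower statistic S_i^-
    signals = []
    sp, sm = 0.0, 0.0
    for i, xi in enumerate(x):
        sp = max(0.0, sp + (xi - mu0) - k)   # accumulate upward deviations beyond k
        sm = max(0.0, sm + (mu0 - xi) - k)   # accumulate downward deviations beyond k
        s_plus[i], s_minus[i] = sp, sm
        if sp > h or sm > h:                 # decision interval exceeded
            signals.append(i)
    return s_plus, s_minus, signals

# Example: in-control target 10, sigma = 1, tuned to a 1-sigma shift (k = 0.5, h = 4)
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10, 1, 8), rng.normal(11, 1, 12)])
sp, sm, sig = cusum(data, mu0=10.0, k=0.5, h=4.0)
print("first signal at index:", sig[0] if sig else None)
```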

Performance Measures

The primary performance measure for the CUSUM control chart is the average run length (ARL), which quantifies the expected number of samples taken before the chart signals an out-of-control condition. The in-control ARL (ARL_0) is the average run length under stable process conditions and is typically targeted at approximately 370 for standard parameters such as a reference value k = 0.5 and decision interval h \approx 4.77, providing a false alarm rate comparable to that of a Shewhart \bar{X} chart with 3\sigma limits. This value ensures the chart maintains a low rate of unnecessary signals while monitoring ongoing processes. The out-of-control ARL (ARL_1) assesses the chart's ability to detect process shifts of standardized size \delta = |\mu - \mu_0| / \sigma; smaller ARL_1 values indicate quicker detection, and ARL_1 decreases as \delta increases. For example, with ARL_0 \approx 370, the CUSUM's ARL_1 is roughly 10 for a 1\sigma shift and about 4 for a 2\sigma shift, compared with approximately 44 and 6.3, respectively, for a Shewhart chart. Exact ARL values are computed using Markov chain methods, which model the CUSUM statistic's state transitions as a discrete-state absorbing Markov process and solve for the fundamental matrix to obtain the expected time to absorption, or via integral equations that account for the continuous distribution of increments in the CUSUM statistic. A simple approximation for ARL_1 is h / (\delta - k) for \delta > k, with more precise approximations (such as Siegmund's) adjusting for overshoot. The operating characteristic (OC) curve for the CUSUM chart depicts the probability of signaling (1 - \beta, where \beta is the probability of not signaling) as a function of the shift size \delta, illustrating the chart's detection sensitivity across varying magnitudes of process changes; steeper curves indicate better discrimination between in-control and out-of-control states. This curve is derived from ARL computations and helps evaluate trade-offs in parameter selection for specific shift detection goals. Factors affecting CUSUM performance include the reference value k, which tunes sensitivity to the targeted shift size (typically k = \delta^*/2 for a shift of interest \delta^*), and the decision interval h, which balances ARL_0 against detection speed. The chart excels in detecting small shifts (\delta < 2\sigma), where it outperforms Shewhart charts by accumulating subtle deviations more effectively, but it can be slower for large shifts (\delta > 2\sigma) due to the cumulative nature of the statistic.
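As a concrete illustration of the overshoot-corrected approximation mentioned above, the sketch below implements Siegmund's formula for the ARL of a one-sided CUSUM in standardized units; it is an assumed computational choice for illustration, not the only route, and exact values would come from Markov chain or integral equation methods. For k = 0.5 and h = 4.77 it gives a one-sided in-control ARL near 740, i.e. roughly 370 for the combined two-sided scheme.

```python
import math

def siegmund_arl(delta, k, h):
    """Approximate one-sided CUSUM ARL (standardized units) via Siegmund's formula.

    delta: true standardized mean shift, k: reference value, h: decision interval.
    """
    d = delta - k          # drift of the CUSUM increments
    b = h + 1.166          # decision interval corrected for overshoot
    if abs(d) < 1e-9:      # zero-drift limiting case
        return b * b
    return (math.exp(-2.0 * d * b) + 2.0 * d * b - 1.0) / (2.0 * d * d)

def two_sided_arl(delta, k, h):
    """Combine the upper and lower one-sided ARLs: 1/ARL = 1/ARL+ + 1/ARL-."""
    upper = siegmund_arl(delta, k, h)
    lower = siegmund_arl(-delta, k, h)
    return 1.0 / (1.0 / upper + 1.0 / lower)

print(round(two_sided_arl(0.0, 0.5, 4.77)))     # about 370 (in-control, two-sided)
print(round(siegmund_arl(1.0, 0.5, 4.77), 1))   # about 10 (1-sigma shift, upper chart)
```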

Implementation

Parameter Selection

The selection of parameters for a CUSUM chart is crucial for achieving desired detection performance, primarily involving the reference value k and the decision interval h. These parameters are chosen to optimize the chart's sensitivity to specific process shifts while controlling the false alarm rate, often guided by the in-control average run length (ARL_0). The reference value k is typically set to half the magnitude of the targeted shift in the process mean, expressed in standardized units. For a shift from the in-control mean \mu_0 to an out-of-control mean \mu_1, where the shift size is \delta \sigma with \delta = |\mu_1 - \mu_0| / \sigma and \sigma the process standard deviation, k = \delta / 2. This choice minimizes the out-of-control average run length (ARL_1) for the specified shift \delta, making the chart most sensitive to that level of change; a common default is k = 0.5 for detecting a 1\sigma shift. The decision interval h determines the threshold for signaling an out-of-control condition and is selected to achieve a target in-control ARL_0, such as 370, which corresponds to a Type I error rate comparable to a 3\sigma Shewhart chart. For k = 0.5, h \approx 4.77 yields ARL_0 \approx 370, while h = 5 yields ARL_0 \approx 465 in common two-sided implementations. More precise values are obtained from ARL tables, software simulations, or approximations such as Markov chain methods. Key considerations include estimating the process standard deviation \sigma from historical in-control data, often using the sample standard deviation from a Phase I sample of at least 20–30 subgroups to ensure stability. The desired shift sensitivity influences k, with smaller k values enhancing detection of minor shifts but increasing false alarms if h is not adjusted accordingly. For non-normal data, where the normality assumption may not hold, the sampling distribution of the statistic can be approximated via bootstrap resampling to derive the decision limits. An alternative to the tabular CUSUM is the V-mask representation, in which the half-angle \theta of the mask arms and the lead distance d relate to h and k through \tan\theta = k/A (with A the horizontal scale factor of the plot) and d = h/k, but the tabular form is generally preferred for computational accuracy and ease of implementation in modern software.
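In practice, h is often found numerically so that the chart attains a chosen ARL_0. The sketch below is a hedged illustration of that idea: it reuses Siegmund's approximation (an assumption, as in the previous section) and bisects on h for a given k; published ARL tables or simulation would be used when more accuracy is required.

```python
import math

def one_sided_arl(delta, k, h):
    # Siegmund's approximation for a one-sided CUSUM in standardized units.
    d, b = delta - k, h + 1.166
    return b * b if abs(d) < 1e-9 else (math.exp(-2 * d * b) + 2 * d * b - 1) / (2 * d * d)

def solve_h(k, target_arl0, two_sided=True, lo=0.1, hi=20.0):
    """Bisect on the decision interval h until the in-control ARL reaches the target."""
    def arl0(h):
        a = one_sided_arl(0.0, k, h)
        return a / 2 if two_sided else a   # two symmetric one-sided charts combine as ARL/2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if arl0(mid) < target_arl0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(solve_h(0.5, 370.0), 2))  # roughly 4.77 for k = 0.5
```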

Chart Construction and Interpretation

To construct a CUSUM chart, the cumulative sum statistic S_i^+ (for detecting increases) or S_i^- (for decreases) is calculated for each sample or individual observation and plotted against time or sample number i. When the process is in control, the plotted points exhibit random variation around zero; persistent upward drift in S_i^+ signals an increase in the process mean, while upward drift in S_i^- indicates a decrease. The decision interval H, typically scaled as H = h \sigma where h is a standardized reference value and \sigma is the process standard deviation, establishes the control limit; an out-of-control signal occurs if S_i^+ > H or S_i^- > H. An alternative to tabular limits is the V-mask method, originally proposed by G. A. Barnard, which overlays a V-shaped mask on the cumulative sum plot. The mask's arms have a slope related to the reference value k (e.g., k = \frac{\delta \sigma}{2} for a shift size \delta) and a height tied to the decision parameter h; a signal is triggered if the path of subsequent points crosses either arm of the mask. This geometric approach allows visual assessment of shifts, with the mask slid backward along the plot to pinpoint the onset of the change. Interpreting a CUSUM signal involves assessing the direction and magnitude of the drift: an upper signal (S_i^+ > H) suggests the process mean has increased, prompting downward adjustments to recenter the process, while a lower signal (S_i^- > H) indicates a decrease, requiring upward adjustments. Upon signaling, the cumulative sum is typically reset to zero to restart monitoring from the in-control state, enabling ongoing detection of new shifts. For practical implementation, CUSUM charts can be automated using statistical software, such as the qcc package in R, which provides functions for computing and plotting the statistics, or relevant modules in Python's statsmodels library for residual-based CUSUM analysis.
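To make the reset-after-signal convention concrete, the following Python sketch (illustrative only; the function and variable names are chosen here and are not taken from any particular package) monitors a stream of observations, records each signal with its direction, and restarts both sums at zero so that subsequent shifts can still be detected.

```python
import numpy as np

def monitor_with_reset(x, mu0, sigma, k=0.5, h=4.0):
    """One pass over the data: tabular CUSUM with reset to zero after each signal.

    k and h are given in standardized units and scaled by sigma internally.
    """
    K, H = k * sigma, h * sigma
    sp = sm = 0.0
    events = []
    for i, xi in enumerate(x):
        sp = max(0.0, sp + (xi - mu0) - K)
        sm = max(0.0, sm + (mu0 - xi) - K)
        if sp > H:
            events.append((i, "mean increase"))
            sp = sm = 0.0   # restart monitoring from the in-control state
        elif sm > H:
            events.append((i, "mean decrease"))
            sp = sm = 0.0
    return events

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(10, 1, 30), rng.normal(11, 1, 30)])
print(monitor_with_reset(stream, mu0=10.0, sigma=1.0))
```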

Examples

Numerical Example

To illustrate the application of the CUSUM control chart for individual observations, consider a synthetic dataset of 20 measurements from a normally distributed process with in-control mean \mu_0 = 10 and standard deviation \sigma = 1. The chart is designed to detect an upward shift of \delta = 1 (i.e., to a new mean of 11), using the reference value k = 0.5 and decision interval h = 4. The first eight observations are generated from N(10, 1), followed by a shift beginning at the ninth observation, with the remaining values generated from N(11, 1). For simplicity, the values are taken as exactly 10 for i = 1 to 8 and exactly 11 for i = 9 to 20, a simplified but representative case for demonstrating the computations. The upper CUSUM statistic for detecting an upward shift is computed as S_i^+ = \max\left(0, S_{i-1}^+ + x_i - (\mu_0 + k)\right), with S_0^+ = 0. Similarly, the lower CUSUM statistic for a downward shift is S_i^- = \max\left(0, S_{i-1}^- + (\mu_0 - k) - x_i\right), with S_0^- = 0. An out-of-control signal occurs if S_i^+ > h or S_i^- > h. For each x_i, first compute the deviation x_i - \mu_0 - k = x_i - 10.5 for the upper arm and (\mu_0 - k) - x_i = 9.5 - x_i for the lower arm. The cumulative sums are then updated iteratively, resetting to zero if the increment would make them negative. The following table presents the observations, deviations for the upper arm, and the resulting S_i^+ and S_i^- values:
  i    x_i    x_i - 10.5    S_i^+    S_i^-
  1    10.0      -0.5        0.0      0.0
  2    10.0      -0.5        0.0      0.0
  3    10.0      -0.5        0.0      0.0
  4    10.0      -0.5        0.0      0.0
  5    10.0      -0.5        0.0      0.0
  6    10.0      -0.5        0.0      0.0
  7    10.0      -0.5        0.0      0.0
  8    10.0      -0.5        0.0      0.0
  9    11.0       0.5        0.5      0.0
 10    11.0       0.5        1.0      0.0
 11    11.0       0.5        1.5      0.0
 12    11.0       0.5        2.0      0.0
 13    11.0       0.5        2.5      0.0
 14    11.0       0.5        3.0      0.0
 15    11.0       0.5        3.5      0.0
 16    11.0       0.5        4.0      0.0
 17    11.0       0.5        4.5      0.0
 18    11.0       0.5        5.0      0.0
 19    11.0       0.5        5.5      0.0
 20    11.0       0.5        6.0      0.0
In this example, the upper CUSUM signals an out-of-control condition at i=17 when S_{17}^+ = 4.5 > 4, indicating detection of the upward shift. The lower CUSUM remains at zero throughout, as expected for an upward shift. For these parameters (k=0.5, h=4), the in-control average run length (ARL_0) is approximately 336, and the out-of-control ARL (ARL_1) for a \delta=1 shift is approximately 8.4, meaning the shift is detected on average after about 8-9 points post-shift—consistent with the signal here after 9 points.
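The table can be reproduced with a few lines of code. The following Python sketch is an illustrative check using the same idealized data as the example; it recomputes S_i^+ and S_i^- and reports the first index at which the decision interval h = 4 is exceeded.

```python
mu0, k, h = 10.0, 0.5, 4.0
data = [10.0] * 8 + [11.0] * 12   # idealized observations from the example

sp = sm = 0.0
for i, x in enumerate(data, start=1):
    sp = max(0.0, sp + x - (mu0 + k))   # upper arm: deviations above 10.5
    sm = max(0.0, sm + (mu0 - k) - x)   # lower arm: deviations below 9.5
    if sp > h or sm > h:
        print(f"signal at i = {i}: S+ = {sp:.1f}, S- = {sm:.1f}")
        break
# Prints: signal at i = 17: S+ = 4.5, S- = 0.0
```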

Real-World Application

In pharmaceutical manufacturing, the cumulative sum (CUSUM) chart has been effectively applied to monitor tablet weight uniformity, ensuring consistent dosage and product quality. A case study from a Turkish pharmaceutical company in 2015 illustrates its practical impact in the compression stage of tablet production, where assignable causes such as equipment wear or inadequate mixing can lead to shifts in weight. The process targeted a tablet weight of 250.8 mg with a historical standard deviation σ = 2.68 mg, using data from multiple batches involving hourly sampling of 10–15 tablets over compression runs. Parameters were selected via economic design with R software (edcc package), setting the sample size n = 15, sampling interval h ≈ 5.34 hours, decision interval H = 0.653, and reference shift δ = 1.4σ (approximately k = 0.7 in standardized terms) to balance detection speed and costs, with a low false alarm rate (FAR ≈ 0.00014). This setup, based on normally distributed weight data verified by the Kolmogorov-Smirnov test, enabled monitoring across extended production periods equivalent to over 100 batches. The CUSUM proved sensitive to small mean shifts, such as a 0.5σ deviation (about 1.34 mg, or roughly 0.5% of the target weight), detecting out-of-control conditions while maintaining a low false alarm rate. Compared to traditional Shewhart X-bar charts, the CUSUM approach excelled in early detection of subtle shifts, signaling faster for small changes that Shewhart might miss until larger deviations occur, thus reducing nonconforming tablets and associated waste through timely interventions like machine adjustments. Implementation challenges included the need for precise parameter tuning to avoid excessive signaling leading to unnecessary adjustments, and integration with existing quality systems, though the low FAR minimized operational disruptions. In the broader pharmaceutical supply chain, CUSUM methods have been extended to monitoring for change points, aiding resilience against disruptions like those from the COVID-19 pandemic by identifying shifts in inventory or delivery patterns.

Comparisons and Variants

Comparison to Other Control Charts

The cumulative sum (CUSUM) control chart offers distinct advantages over the Shewhart chart, particularly in detecting small process shifts. While both charts maintain similar in-control average run lengths (ARL_0 ≈ 370), CUSUM is more effective for small shifts (δ < 1σ), where Shewhart's lack of memory results in slower detection, often requiring ARL_1 > 150 for δ = 0.5. In contrast, Shewhart excels at identifying large shifts (δ > 2σ), with quicker signals due to its reliance on current observations alone. CUSUM's cumulative nature provides "memory" of past deviations, enhancing sensitivity to gradual or sustained changes that Shewhart might overlook. Compared to the exponentially weighted moving average (EWMA) chart, CUSUM and EWMA both demonstrate strong performance for small shifts, with comparable ARL properties; however, CUSUM provides sharper detection for sustained step shifts at or near the designed size, while EWMA offers smoother responses in noisy environments due to its exponential down-weighting of older data. EWMA may detect transitory or very small shifts (δ < 0.8σ) slightly faster in some cases, but CUSUM's cumulative structure makes it preferable for persistent changes in stable processes. CUSUM is typically chosen for monitoring step shifts in processes with low variability, where quick detection of small to moderate changes (δ = 0.5–2σ) is critical. The following table illustrates representative zero-state ARL_1 comparisons (ARL_0 ≈ 370, n = 4, uncorrelated observations) for Shewhart, CUSUM (k = 0.5), and EWMA (λ = 0.2) charts, highlighting CUSUM's superiority for δ ≤ 1.5σ.
Shift (δ)    Shewhart ARL_1    CUSUM ARL_1    EWMA ARL_1
   0.5             155               36             33
   1.0              44               10             10
   1.5              15                5              5
   2.0               6                3              4
Despite these strengths, CUSUM has limitations, including slower recovery after a signal (requiring the cumulative sum to drift back to zero unless it is explicitly reset) and more complex parameter selection and interpretation compared to Shewhart or EWMA charts.

Common Variants

The two-sided CUSUM chart extends the standard one-sided version by simultaneously monitoring for both upward and downward shifts in the process mean through parallel upper (C_i^+) and lower (C_i^-) cumulative sums. The upper CUSUM is updated as C_i^+ = \max(0, \bar{x}_i - (\mu_0 + k) + C_{i-1}^+), and the lower as C_i^- = \max(0, (\mu_0 - k) - \bar{x}_i + C_{i-1}^-), where \bar{x}_i is the sample mean, \mu_0 is the target mean, k is the reference value (often 0.5\sigma), and an out-of-control signal occurs if either exceeds the decision interval h. This dual monitoring enables detection of shifts in either direction, with the in-control average run length (ARL_0) approximated as \text{ARL}_0 \approx \frac{1}{\frac{1}{\text{ARL}_0^+} + \frac{1}{\text{ARL}_0^-}}, ensuring balanced performance for symmetric applications. The Fast Initial Response (FIR) variant modifies the standard CUSUM by initializing the cumulative sum with a nonzero head start, typically S_0 = h/2, to accelerate detection of shifts occurring early in monitoring. This head start biases the statistic toward the target direction of interest, reducing the initial ARL delay that plagues conventional CUSUMs when a shift begins near the process start, while the effect diminishes over time as subsequent observations accumulate. FIR improves out-of-control ARL by roughly 30–40% for small shifts without substantially inflating ARL_0, making it suitable for scenarios where rapid startup sensitivity is critical, such as new production lines. Self-starting CUSUM addresses situations where the in-control parameters \mu_0 and \sigma are unknown by recursively estimating them from incoming Phase I data to initialize and update the statistic. The method employs running estimates of the mean and standard deviation across all observations to standardize residuals, enabling the CUSUM to self-calibrate without a separate reference sample and detect location shifts as the estimates stabilize. This approach converges to performance comparable to known-parameter CUSUMs after sufficient data, avoiding the need for offline parameter estimation and supporting deployment in data-scarce environments. Other variants adapt CUSUM for variance or non-normal data. For variance monitoring, the chart applies a logarithmic transformation to the sample variance, \log(S_i^2), to stabilize the scale and construct one- or two-sided CUSUMs that detect increases or decreases in process dispersion with near-optimal ARL performance, outperforming direct S_i^2-based schemes under normality. For non-normal distributions such as Poisson counts, CUSUM uses standardized residuals or likelihood ratio scores in place of raw deviations, enabling effective monitoring of the rate parameter \lambda for defect counts.
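As an illustration of the FIR variant, the sketch below is an assumed minimal implementation (not drawn from a specific package): it starts both one-sided sums at h/2 instead of zero, and everything else matches the standard tabular recursion.

```python
import numpy as np

def fir_cusum(x, mu0, k, h, headstart=None):
    """Tabular CUSUM with a Fast Initial Response head start (default h/2)."""
    s0 = h / 2.0 if headstart is None else headstart
    sp, sm = s0, s0                      # nonzero initialization is the only FIR change
    for i, xi in enumerate(np.asarray(x, dtype=float)):
        sp = max(0.0, sp + (xi - mu0) - k)
        sm = max(0.0, sm + (mu0 - xi) - k)
        if sp > h or sm > h:
            return i                     # index of the first signal
    return None

# A shift present from the very first observation tends to be caught sooner with the head start.
rng = np.random.default_rng(2)
shifted = rng.normal(11, 1, 50)          # process already off target (mu0 = 10)
print("FIR signal index:", fir_cusum(shifted, 10.0, 0.5, 4.0))
print("standard signal index:", fir_cusum(shifted, 10.0, 0.5, 4.0, headstart=0.0))
```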
