
Control chart

A control chart, also known as a Shewhart chart, is a graphical tool used in statistical process control (SPC) to monitor, control, and improve process performance by plotting data points over time against predefined upper and lower control limits, with a central line representing the process average. These charts distinguish between common cause variation—random fluctuations inherent to the process—and special cause variation—unusual events signaling instability or the need for corrective action. Developed by physicist Walter A. Shewhart in 1924 while working at Bell Telephone Laboratories, the control chart emerged as a response to manufacturing inconsistencies observed during early telephone equipment production, marking the foundation of modern quality control practices. Shewhart's innovation was first documented in an internal memo on May 16, 1924, where he proposed using probability-based limits (typically set at three standard deviations from the mean) to detect deviations that could indicate assignable causes of variation. This approach revolutionized industrial statistics by shifting focus from inspection to prevention, influencing subsequent methodologies such as Six Sigma and total quality management. Control charts are categorized primarily into two types based on the nature of the data: those for variables (continuous measurements, such as dimensions or weights) and those for attributes (discrete counts, such as defects or nonconformities). Common variable charts include the X-bar chart for subgroup means and the R chart for subgroup ranges, while attribute charts encompass the p-chart for proportions defective and the c-chart for total defects. Selection of the appropriate chart depends on data type, subgroup size, and process characteristics, ensuring accurate detection of shifts, trends, or instability. Widely applied across industries including manufacturing, healthcare, and services, control charts enable real-time monitoring to maintain process stability, reduce waste, and enhance quality, with ongoing advancements incorporating software for automated analysis and integration with machine learning for predictive insights.

Introduction

Definition and Purpose

A control chart is a graphical tool that displays a time-sequenced plot of points from a process, accompanied by a centerline representing the average and upper and lower control limits derived from statistical measures of variability, enabling the assessment of process performance over time. This visualization allows practitioners to observe patterns in the data and determine whether the process remains stable or exhibits signals of change. The primary purpose of a control chart is to detect shifts or trends in the process mean or variability, facilitating timely interventions to prevent defects and maintain consistent quality output. Within the framework of statistical process control (SPC), which employs statistical methods to monitor, control, and improve process performance, control charts play a central role by distinguishing between common cause variation—random, inherent fluctuations expected in a stable process—and special cause variation arising from identifiable external factors. This differentiation supports proactive decision-making to sustain process stability without overreacting to normal fluctuations. For instance, in manufacturing, a control chart might track the dimensions of machined parts collected at regular intervals, alerting operators to potential issues when points exceed the control limits, thereby ensuring product conformity and reducing waste.

Basic Components

A control chart is a graphical display that shows data points representing measurements of a quality characteristic plotted sequentially over time or sample number. The x-axis typically denotes time or the order of observation, while the y-axis shows the measured values, providing a visual record of process behavior. At the core of the chart is the centerline, which represents the average value of the process when it is in a state of statistical control. This centerline is calculated as the mean of the plotted data points, given by the formula \bar{x} = \frac{\sum x_i}{n}, where x_i are the individual measurements and n is the number of points. Parallel to this centerline are two horizontal lines: the upper control limit (UCL) and the lower control limit (LCL), which define the boundaries within which process variation is expected under stable conditions. The space between the UCL and LCL is often divided into zones to facilitate interpretation, with the region between the centerline and each limit further subdivided for assessing patterns in the data. These components together enable the chart to monitor process stability by highlighting deviations from expected behavior. In contrast to run charts, which simply plot data over time with a reference line such as the median but lack statistically derived limits, control charts incorporate the UCL and LCL to differentiate between normal process variation and unusual shifts.
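As a minimal illustration of these components, the centerline and naive 3σ limits for a series of individual measurements can be computed directly from the data. This sketch uses the overall sample standard deviation purely for simplicity; dedicated charts estimate σ from subgroup ranges or moving ranges, as described under "Estimating Process Variability". The data values and function name are illustrative, not taken from any standard:

```python
# Minimal illustration (not production SPC): centerline and naive 3-sigma
# limits for individual measurements, using the overall sample standard
# deviation as a stand-in for the proper range-based sigma estimate.

def chart_lines(data):
    n = len(data)
    center = sum(data) / n                           # centerline: x-bar
    var = sum((x - center) ** 2 for x in data) / (n - 1)
    sigma = var ** 0.5                               # sample standard deviation
    return center, center + 3 * sigma, center - 3 * sigma

measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
cl, ucl, lcl = chart_lines(measurements)
```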

Historical Development

Origins with Shewhart

The origins of the control chart trace back to the work of Walter A. Shewhart at Bell Telephone Laboratories in the early 1920s, where he sought to apply statistical methods to monitor and improve manufacturing processes. On May 16, 1924, Shewhart issued an internal memorandum to his supervisor, George D. Edwards, proposing the use of charts to plot sample averages over time as a means to distinguish between common and special causes of variation in production. This memo, often regarded as the first documented prototype of a control chart, emerged amid post-World War I challenges in the telephone industry, including increased demand for reliable equipment and persistent quality inconsistencies in manufacturing components like vacuum tubes and switches. Shewhart's early concepts centered on the idea of statistical control, where process data would be plotted against statistically calculated limits to detect deviations signaling assignable causes of variation. A pivotal innovation was the introduction of three-sigma control limits, derived from the assumption of a normal distribution for process measurements, which would encompass approximately 99.7% of observations from a stable process and flag outliers as potential issues requiring intervention. These limits provided a rational, economically grounded criterion for intervention, balancing the costs of over-detection against the risks of undetected defects. Shewhart continued refining these ideas through the late 1920s, collaborating with colleagues at Bell Labs to test them on real manufacturing data. His comprehensive theoretical framework was first published in 1931 with the book Economic Control of Quality of Manufactured Product, which formalized control charts as tools for achieving economic efficiency in manufacturing by integrating statistical theory with practical application. This work laid the groundwork for statistical process control, emphasizing the distinction between inherent process variability and external disruptions.

Post-War Adoption and Evolution

Following World War II, W. Edwards Deming played a pivotal role in disseminating control chart methodologies internationally, particularly in Japan. Invited by the Union of Japanese Scientists and Engineers (JUSE) in 1950, Deming delivered lectures on statistical quality control, emphasizing Shewhart control charts to distinguish common from special causes of variation and foster continuous process improvement. His efforts during the U.S. occupation contributed to Japan's post-war industrial revival, igniting a quality revolution that transformed manufacturing sectors like automotive and electronics by integrating control charts into everyday operations. This influence culminated in the establishment of the Deming Prize in 1951, an annual award by JUSE to recognize excellence in quality control practices, which further institutionalized the use of control charts nationwide. In the United States, control charts gained formal traction through military procurement and standardization efforts. The U.S. Department of Defense issued MIL-STD-105A in 1950, incorporating attribute-based sampling procedures derived from statistical principles, including elements aligned with control chart methodology for process inspection during wartime production transitions to peacetime. This standard facilitated the broader adoption of control charts in defense contracting and manufacturing, ensuring consistent quality oversight. Building on this, the American National Standards Institute (ANSI) and the American Society for Quality (ASQ) developed ANSI/ASQ Z1.4 in 1971, providing guidelines for attribute inspection sampling that complemented control chart applications in industry, promoting their use beyond military contexts for ongoing process monitoring. The post-war period also saw refinements to attribute control charts, building on their initial development in the 1930s and 1940s at Bell Laboratories, where p-charts and np-charts were introduced for monitoring defect rates in production.
During the 1950s and 1960s, these charts evolved through practical applications in diverse industries, with enhancements in limit calculations and sensitivity to small shifts, driven by wartime lessons and peacetime efficiency demands; for instance, adaptations for batch processes improved detection of non-conformities in high-volume manufacturing. International standardization accelerated in the 1990s with the ISO 7870 series, offering comprehensive guidelines for control chart implementation. First published in 1993 as a general guide (ISO 7870:1993), the series provided unified procedures for establishing limits, selecting chart types, and interpreting signals, facilitating global adoption in quality management systems. Subsequent revisions, such as ISO 7870-1:2007, expanded on philosophical underpinnings and chart varieties, emphasizing their role in proactive process control, while ISO 7870-2:2013 specifically addressed Shewhart control charts. Recent milestones include the 2020 update to ISO 22514-3, which integrates control chart principles into machine performance studies for discrete parts, supporting modern applications like automated data collection in digital environments while referencing ISO 7870 for chart construction and validation.

Fundamental Principles

Statistical Foundations

Control charts are grounded in the principles of probability theory, particularly the normal distribution, which underpins the determination of control limits. Walter Shewhart developed the foundational approach in 1924, establishing limits at three standard deviations (3σ) from the process mean, as this encompasses approximately 99.73% of data points in a stable process assuming normality. This empirical rule balances the risk of false alarms (Type I errors) against the detection of significant shifts, ensuring economic efficiency in process monitoring. The 3σ criterion was chosen not solely for probabilistic purity but for its practical effectiveness in distinguishing random fluctuations from assignable causes of variation. The application of control charts parallels hypothesis testing in statistical inference, where the null hypothesis posits a stable process under common-cause variation, and out-of-control signals represent rejection of this hypothesis in favor of special-cause variation. Each plotted point or pattern triggers a test against the null, with control limits defining the critical region (e.g., beyond 3σ corresponding to a low probability, about 0.27%, of false rejection under normality). This framework allows sequential monitoring without predefined sample sizes, adapting to ongoing data collection while controlling overall error rates through the rarity of signals in stable conditions. Rational subgrouping forms a critical sampling strategy to isolate within-subgroup variation, which primarily reflects common causes, while between-subgroup differences highlight potential special causes. Shewhart advocated forming subgroups from consecutive units produced under uniform conditions to minimize external influences and maximize sensitivity to process shifts. For instance, in variables charts, subgroups of size n (typically 4–5) are selected to estimate short-term variability accurately, ensuring control limits reflect true process capability rather than sampling artifacts.
While control charts traditionally assume normality for precise probabilistic interpretation, this requirement is often relaxed due to the central limit theorem (CLT), which states that the distribution of sample means (or subgroup statistics) approaches normality as subgroup size increases, even if individual observations are non-normal. For small subgroups (n ≥ 2), the CLT provides approximate normality, making 3σ limits robust for averages and ranges in many practical scenarios. However, severe skewness or outliers may still inflate false alarms, underscoring the need for data transformation or non-parametric alternatives when CLT approximations falter.
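The 99.73% coverage and 0.27% false alarm figures quoted above follow directly from the standard normal distribution, and can be verified numerically via the error function; a quick sketch:

```python
import math

def normal_cdf(z):
    # Standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability a stable (normal) process yields a point within +/- 3 sigma
coverage = normal_cdf(3.0) - normal_cdf(-3.0)   # ~0.9973
false_alarm = 1.0 - coverage                    # ~0.0027
```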

Types of Process Variation

In the framework of statistical process control, process variation is categorized into two primary types: common cause variation and special cause variation. This dichotomy, originally introduced by Walter Shewhart as "chance causes" and "assignable causes" of variation, forms the foundational principle for interpreting control charts. Later refined by W. Edwards Deming into the terms "common" and "special," it distinguishes between predictable, inherent fluctuations and unpredictable, external disruptions in a manufacturing or production process. Common cause variation refers to the random, inherent fluctuations that are an intrinsic part of any stable process, arising from numerous small, unavoidable factors within the system itself. These variations are predictable in aggregate, as they follow a consistent pattern over time and affect all outputs similarly, contributing to the natural "noise" in the data. In a stable system, common cause variation alone indicates statistical control, where the output remains within expected limits without external intervention, though it may still lead to defects if the variation is too wide relative to specifications. For example, gradual machine wear that causes minor, consistent shifts in product dimensions exemplifies common cause variation, as it stems from the normal operation of the equipment. Special cause variation, in contrast, involves non-random, assignable shifts due to specific, identifiable external factors that disrupt process stability. These variations are unpredictable and irregular, often resulting in outliers or trends that signal an unstable system requiring immediate corrective action to restore stability. Unlike common causes, special causes are not inherent to the process and can be traced to particular events, making them amenable to targeted removal or correction. An example is tool breakage during operation, which suddenly alters output quality and introduces abrupt deviations beyond normal limits.
Shewhart's dichotomy underpins control chart signals, where points within limits reflect common cause variation (indicating stability and predictability), while excursions beyond limits or non-random patterns alert to special causes (demanding investigation and correction to prevent ongoing instability). This classification enables practitioners to focus improvement efforts appropriately: reducing common cause variation requires systemic changes to narrow the process spread, whereas addressing special causes involves eliminating transient anomalies to achieve and maintain stability.

Construction of Control Charts

Establishing Control Limits

Control limits in a control chart define the boundaries within which process variation is expected to occur under stable conditions, typically set symmetrically around the centerline to encompass common cause variation while flagging potential special causes. These limits are statistically derived to minimize false alarms while ensuring timely detection of process shifts. The standard approach, pioneered by Walter Shewhart, uses three standard deviations (3-sigma) from the process mean, providing a balance between sensitivity and reliability. For an individuals control chart, which monitors single measurements without subgroups, the upper control limit (UCL) and lower control limit (LCL) are calculated as follows: UCL = \bar{x} + 3\sigma, LCL = \bar{x} - 3\sigma, where \bar{x} is the mean of the individual observations, and \sigma is the estimated process standard deviation. This assumes \sigma is known or reliably estimated from baseline data, ensuring the limits reflect the inherent process variability. In subgroup-based charts, such as the X-bar and R chart for monitoring averages and ranges, control limits incorporate subgroup size to account for reduced variability in averages. The UCL and LCL for the X-bar chart are given by: UCL = \bar{\bar{x}} + A_2 \bar{R}, LCL = \bar{\bar{x}} - A_2 \bar{R}, where \bar{\bar{x}} is the grand average of subgroup means, \bar{R} is the average subgroup range, and A_2 is a constant from standard tables that adjusts for subgroup size n (e.g., A_2 = 0.729 for n=4), derived as A_2 = 3 / (d_2 \sqrt{n}) with d_2 being the expected range factor for normal distributions. These factors ensure the limits equate to approximately 3-sigma equivalents for the subgroup means. The 3-sigma criterion for control limits is grounded in the normal distribution, where approximately 99.73% of observations fall within \pm 3\sigma of the mean, yielding a low false alarm rate of about 0.27% (or 0.0027 probability) for points exceeding the limits when the process is in statistical control.
Shewhart selected this threshold empirically to limit unnecessary interventions while capturing significant deviations, as points beyond these limits occur roughly once every 370 samples on average in a stable process. Establishing control limits often begins with trial limits computed from an initial set of baseline data, typically 20–25 subgroups, to assess process stability. If out-of-control signals are detected and assignable causes are addressed, the data points associated with those signals are removed, and revised limits are recalculated from the remaining in-control data to better represent the stable process state. Thereafter, limits should be updated only with substantial new evidence, such as after 30 or more additional points or a major process change, to avoid instability in ongoing monitoring.
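The trial-limit calculation described above can be sketched as follows, assuming subgroups of size 4 and the tabulated factor values quoted in the text (A₂ = 0.729, D₃ = 0, D₄ = 2.282 for n = 4); the baseline data are fabricated for illustration:

```python
# Sketch: X-bar / R trial limits from baseline subgroups of size n = 4.
A2, D3, D4 = 0.729, 0.0, 2.282   # standard table factors for n = 4

def xbar_r_limits(subgroups):
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbarbar = sum(xbars) / len(xbars)    # grand average of subgroup means
    rbar = sum(ranges) / len(ranges)     # average subgroup range
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),  # LCL, CL, UCL
        "r":    (D3 * rbar, rbar, D4 * rbar),
    }

baseline = [[5.02, 4.98, 5.01, 4.99],
            [5.00, 5.03, 4.97, 5.00],
            [4.99, 5.01, 5.02, 4.98],
            [5.01, 5.00, 4.99, 5.02]]
limits = xbar_r_limits(baseline)
```

In practice the baseline would contain 20–25 subgroups rather than four; points flagged as out of control would be investigated and, if assignable causes are found, removed before recomputing.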

Estimating Process Variability

Estimating process variability is a critical step in constructing control charts, as it provides the measure of dispersion, typically the standard deviation \sigma, needed to set limits that reflect common cause variation. This estimation relies on sample data from the process, assuming normality and process stability unless otherwise addressed. Various methods are employed depending on the subgroup size and data structure, ensuring the estimate is unbiased and representative of the underlying process variability. For data consisting of individual measurements (subgroup size n=1), where direct subgroup ranges or standard deviations cannot be computed, the sample standard deviation from a historical set of individual observations serves as an estimator for \sigma. This is calculated using the formula for the sample standard deviation: \hat{\sigma} = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}, where x_i are the individual observations, \bar{x} is the sample mean, and n is the number of observations. This approach is particularly useful when a large historical dataset is available, allowing for a direct assessment of overall process dispersion without relying on paired differences. When data are collected in subgroups of size greater than one, the range method offers a simple and efficient way to estimate \sigma, especially for smaller subgroup sizes (n \leq 10). The average range \bar{R} is first computed as the mean of the ranges within each subgroup, where the range for a subgroup is the difference between its maximum and minimum values. The process standard deviation is then estimated as \hat{\sigma} = \bar{R} / d_2, with d_2 being an unbiasing constant derived from the distribution of the range for a normal distribution, tabulated by subgroup size (for example, d_2 = 2.059 for n=4). This method leverages the range's sensitivity to variation while correcting for bias through the constant. For larger subgroup sizes (n > 10), the standard deviation method, based on the average subgroup standard deviation, provides a more precise estimate by incorporating all data points rather than just extremes.
Here, the average subgroup standard deviation \bar{s} is calculated as the mean of the sample standard deviations from each subgroup, and \hat{\sigma} = \bar{s} / c_4, where c_4 is another unbiasing constant from statistical tables (dependent on n). This approach reduces the influence of outliers compared to the range method and is preferred when computational resources allow full variance calculation within subgroups. In cases where process data deviate from normality, direct application of these estimators may lead to inaccurate control limits, as the constants d_2 and c_4 assume a normal distribution. To address this, one common strategy involves using a moving range of two consecutive observations to estimate variability for individual data, treating pairs as subgroups of size 2, or applying data transformations (such as the Box-Cox transformation) to approximate normality before charting. These adjustments help maintain the chart's sensitivity to special causes without altering the core framework.
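The three estimators described above (range method, standard-deviation method, and moving range) can be sketched as follows; the d₂ and c₄ constants are the standard tabulated values for the sample sizes shown, and the function names are illustrative:

```python
import statistics

# Standard bias-correction constants for the sample sizes used below.
D2_N2, D2_N4 = 1.128, 2.059   # d2 for n = 2 and n = 4
C4_N5 = 0.9400                # c4 for n = 5

def sigma_from_ranges(subgroups, d2):
    # Range method: sigma-hat = R-bar / d2
    rbar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    return rbar / d2

def sigma_from_stddevs(subgroups, c4):
    # Standard deviation method: sigma-hat = s-bar / c4
    sbar = sum(statistics.stdev(g) for g in subgroups) / len(subgroups)
    return sbar / c4

def sigma_from_moving_range(values, d2=D2_N2):
    # Moving range of two consecutive points: sigma-hat = MR-bar / d2 (n = 2)
    mrbar = sum(abs(b - a) for a, b in zip(values, values[1:])) / (len(values) - 1)
    return mrbar / d2

sigma_hat = sigma_from_moving_range([10.2, 10.5, 10.1, 10.4, 10.3])
```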

Practical Setup Procedures

Implementing a control chart requires a systematic approach starting with the selection of the chart type and subgroup size. For variables data, such as continuous measurements, an X-bar and R chart is often appropriate, with subgroup sizes of 4 to 5 observations commonly used in initial studies to efficiently estimate process variability while maintaining chart sensitivity. Larger subgroups, such as 10 or more, may be employed for standard deviation-based charts when higher precision is needed, but smaller sizes suffice for most range-based applications. For attribute data, p or np charts are used for proportions or numbers of nonconforming units, while c or u charts are used for defect counts, with subgroup sizes based on sample availability (often 20 to 50 units for proportion-based charts to ensure reliable estimates). Data collection follows, focusing on rational subgrouping to ensure the chart accurately distinguishes common from special cause variation. Rational subgroups are formed by sampling consecutive items produced under identical conditions, such as from the same machine and shift, to minimize within-subgroup variation and maximize potential between-subgroup differences that could signal process shifts. This contrasts with convenience sampling, which groups data for logistical ease and may inflate within-subgroup variation, leading to wider control limits and reduced detection power. A baseline period of 20 to 30 subgroups, collected in time order over a period representing normal operations (e.g., several hours or days depending on process speed), is recommended to establish initial limits from stable process behavior. Once collected, plot the provisional control chart using the subgroup averages (or proportions) and ranges (or counts) against time or subgroup number, with the centerline as the grand average and limits estimated from the data's variability.
If out-of-control signals appear during this initial plotting, investigate and eliminate assignable causes before revising the chart with the remaining in-control points to refine the limits. Software tools facilitate automation of these steps, reducing manual calculation errors. Spreadsheet software such as Microsoft Excel offers basic templates for plotting and limit computation, suitable for simple implementations. More advanced statistical packages such as Minitab provide built-in functions for subgroup analysis, rational subgroup verification, and dynamic updating, enabling efficient handling of larger datasets and integration with variability estimation methods. The baseline period's stability is crucial, as limits derived from non-stable data can mask true process issues; thus, confirm process steadiness through preliminary checks before finalizing the chart.

Interpretation and Signal Detection

Rules for Out-of-Control Signals

Control charts employ standardized rules to detect out-of-control signals, which indicate non-random patterns suggestive of special cause variation rather than common cause variation. These rules enhance the ability to identify process shifts, trends, or other anomalies beyond simple exceedance of control limits. The foundational Shewhart rules, introduced by Walter Shewhart in the early development of control charts, focus on basic indicators of instability. The primary rule signals an out-of-control condition if any point falls outside the upper or lower control limits, typically set at three standard deviations from the centerline. Later refinements identified runs of points as potential signals; for instance, seven or more consecutive points on one side of the centerline suggest a process shift. These rules prioritize simplicity to distinguish assignable causes from chance variation. In the 1950s, the Western Electric Company expanded upon Shewhart's approach with a set of eight sensitizing rules outlined in their Statistical Quality Control Handbook, aimed at detecting subtler non-random patterns while maintaining practical applicability in industrial settings. These include: one point beyond three standard deviations from the centerline (Zone A); two out of three consecutive points beyond two standard deviations (in Zone A or beyond) on the same side; four out of five consecutive points beyond one standard deviation (in Zone B or beyond) on the same side; eight consecutive points on the same side of the centerline; six consecutive points steadily increasing or decreasing; fifteen consecutive points within Zone C (the central third, ±1σ from centerline, indicating stratification); fourteen consecutive points alternating up and down; and any other unusual or non-random pattern. The Western Electric expansions, such as trends of six points and shifts of eight points, were designed to flag gradual changes or level shifts that the basic rules might miss. Lloyd S.
Nelson further refined these concepts in 1984 by proposing eight sensitizing rules for Shewhart control charts, building on prior work to increase detection sensitivity across various pattern types. Notable among Nelson's rules are: one point beyond three standard deviations; nine points in a row on the same side of the centerline; six points in a row steadily increasing or decreasing; fourteen points in a row alternating up and down; two out of three points beyond two standard deviations (Zone A or beyond) on the same side; four out of five points beyond one standard deviation (Zone B or beyond) on the same side; fifteen points in a row within the central third (Zone C), indicating stratification; and eight points in a row on both sides of the centerline with none in the central third, indicating a mixture pattern. These eight rules encompass and extend the Western Electric set, offering comprehensive coverage for diverse out-of-control scenarios. In practice, these rules—Shewhart's foundational ones, Western Electric's expansions, and Nelson's detailed set—are applied sequentially to charts, starting with the most straightforward signals and progressing to more complex patterns. This sequential evaluation boosts the chart's sensitivity to real process issues while minimizing false alarms from random variation, ensuring timely intervention without overreaction.
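Two of these rules lend themselves to a straightforward implementation. The sketch below checks rule 1 (one point beyond 3σ) and Nelson's nine-in-a-row rule against a list of z-scores, i.e., points expressed in σ units from the centerline; function names are illustrative, not from any standard library:

```python
# Sketch of two detection rules on standardized points (sigma units
# from the centerline). Returns the indices at which each rule fires.

def rule1_beyond_3sigma(z):
    # Shewhart / Nelson rule 1: any point beyond +/- 3 sigma
    return [i for i, v in enumerate(z) if abs(v) > 3]

def rule2_nine_one_side(z, run=9):
    # Nelson rule 2: `run` consecutive points on the same side of the centerline
    signals, count, side = [], 0, 0
    for i, v in enumerate(z):
        s = 1 if v > 0 else (-1 if v < 0 else 0)
        count = count + 1 if s == side and s != 0 else (1 if s != 0 else 0)
        side = s
        if count >= run:
            signals.append(i)
    return signals

alerts = rule1_beyond_3sigma([0.2, -1.1, 3.4, 0.8])  # flags index 2
```

The remaining rules (trends, alternation, zone counts) follow the same pattern: scan a sliding window of recent points and flag indices where the window satisfies the rule's condition.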

Assessing Run Length and Sensitivity

The average run length (ARL) serves as a primary probabilistic measure for evaluating the performance of control charts, defined as the expected number of samples required to detect an out-of-control condition. It quantifies both the chart's stability under in-control conditions and its responsiveness to process shifts. The in-control ARL (ARL0), which indicates the average time between false alarms, is approximately 370 for a standard Shewhart chart using 3-sigma limits, reflecting a low false alarm rate of about 0.27%. The ARL is computed using the formula ARL = 1 / p, where p represents the probability of generating a signal at any given sampling point under the specified conditions. For out-of-control scenarios, the out-of-control ARL (ARL1) measures detection speed and diminishes as the magnitude of the process shift grows; for instance, a 1-sigma shift in the process mean on a Shewhart individuals chart results in an ARL1 of approximately 44, while larger shifts reduce it further toward 1. This metric provides a benchmark for comparing chart effectiveness across shift sizes. Assessing sensitivity involves analyzing trade-offs in design parameters to balance detection promptness against operational costs. Tighter limits, such as 2-sigma rather than 3-sigma, shorten detection time but also decrease ARL0, elevating the false alarm rate and potentially disrupting stable processes unnecessarily. Conversely, incorporating combinations of sensitizing rules can lower ARL1 for small shifts while maintaining an acceptable ARL0, optimizing overall utility without excessive over-adjustment. To obtain exact ARL values, especially for charts employing multiple rules or non-standard setups, Markov chain models or Monte Carlo simulation are employed, representing the chart's recent history as a sequence of states to derive the run length distribution and its expectation.
This approach enables precise computation of ARL under complex conditions where analytical solutions are intractable, ensuring reliable performance evaluation.
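For the simplest case, a Shewhart individuals chart with signals defined only by a point beyond the limits (no supplementary rules), the ARL = 1/p relationship can be verified numerically:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def arl_individuals(shift_sigma, limit=3.0):
    # Probability that a single point falls outside +/- limit-sigma control
    # limits when the process mean has shifted by shift_sigma (in sigma units),
    # then ARL = 1 / p.
    p = normal_cdf(-limit - shift_sigma) + (1.0 - normal_cdf(limit - shift_sigma))
    return 1.0 / p

arl0 = arl_individuals(0.0)   # in-control: ~370
arl1 = arl_individuals(1.0)   # 1-sigma shift: ~44
```

Charts that add run rules no longer have geometrically distributed run lengths, which is why Markov chain or simulation methods are needed in those cases.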

Classification of Control Charts

Charts for Variables Data

Control charts for variables data are designed to monitor continuous measurements from a production process, such as dimensions, weights, or temperatures, by tracking both the process mean and variability over time. These charts assume the measurements follow a normal distribution and are particularly useful for detecting shifts in the mean or dispersion of the process. The primary types include paired charts for subgrouped data and charts for individual observations, with control limits typically set at three standard deviations from the centerline to achieve an average run length of approximately 370 for in-control processes under normality. The X-bar and R chart pair is a foundational approach for monitoring subgroup means and ranges, commonly applied when subgroup sizes are small (typically n ≤ 10). The X-bar chart plots the mean of each subgroup, with its centerline at the grand mean \bar{\bar{x}}, upper control limit (UCL) given by \bar{\bar{x}} + A_2 \bar{R}, and lower control limit (LCL) by \bar{\bar{x}} - A_2 \bar{R}, where \bar{R} is the average range and A_2 is a constant depending on subgroup size n. The accompanying R chart monitors variability by plotting subgroup ranges, with centerline \bar{R}, UCL D_4 \bar{R}, and LCL D_3 \bar{R}, using constants D_3 and D_4. These factors, derived from the expected distribution of the range under normality, ensure unbiased estimates of the process standard deviation \sigma \approx \bar{R}/d_2, where d_2 is another tabulated constant. For example, with n=5, A_2 = 0.577, D_3 = 0, and D_4 = 2.114. This combination effectively signals special cause variation when points exceed limits or exhibit non-random patterns. For larger subgroups (n > 10), the X-bar and s chart provides a more efficient alternative by using sample standard deviations instead of ranges for variability estimation, as s offers better precision for bigger samples. The X-bar chart uses the same centerline, with UCL \bar{\bar{x}} + A_3 \bar{s} and LCL \bar{\bar{x}} - A_3 \bar{s}, where \bar{s} is the average sample standard deviation and A_3 is the size-dependent constant.
The s chart plots sample standard deviations, with centerline \bar{s}, UCL B_4 \bar{s}, and LCL B_3 \bar{s} (noting B_3 = 0 for n ≤ 5). These constants are based on the sampling distribution of s under normality, yielding \sigma \approx \bar{s}/c_4, with c_4 another bias-correction constant. For n=5, A_3 = 1.427, B_3 = 0, and B_4 = 2.089. This approach reduces sensitivity to outliers in range calculations and is preferred in modern applications with automated data collection. When rational subgrouping is impractical, such as in low-volume production, the individuals (I) and moving range (MR) chart monitors single measurements and pairwise differences. The I chart plots individual values, with centerline at the overall average \bar{x}, UCL \bar{x} + 2.66 \bar{MR}, and LCL \bar{x} - 2.66 \bar{MR}, where \bar{MR} is the average moving range (typically over n=2 consecutive points) and 2.66 = 3 / d_2 with d_2 = 1.128 for n=2. The MR chart tracks variability via these differences, with centerline \bar{MR}, UCL 3.268 \bar{MR}, and LCL at 0. This method estimates \sigma \approx \bar{MR}/1.128, though it is less sensitive to small shifts compared to subgroup charts due to the lack of within-subgroup averaging.
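The individuals and moving range limits described above reduce to a few lines of arithmetic; a sketch with fabricated data, using the standard constants 2.66 (= 3/d₂) and 3.268 (= D₄) for moving ranges of size 2:

```python
# Sketch: individuals (I) and moving range (MR) chart limits.

def imr_limits(values):
    xbar = sum(values) / len(values)                      # overall average
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]  # moving ranges, n = 2
    mrbar = sum(mrs) / len(mrs)                           # average moving range
    return {
        "i":  (xbar - 2.66 * mrbar, xbar, xbar + 2.66 * mrbar),  # LCL, CL, UCL
        "mr": (0.0, mrbar, 3.268 * mrbar),
    }

data = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]
lims = imr_limits(data)
```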
Subgroup size (n)   A₂      D₃     D₄      A₃      B₃     B₄
2                   1.880   0      3.267   2.659   0      3.267
3                   1.023   0      2.574   1.954   0      2.568
4                   0.729   0      2.282   1.628   0      2.266
5                   0.577   0      2.114   1.427   0      2.089
These factors, standardized in statistical tables, facilitate consistent limit calculation across applications.

Charts for Attribute Data

Control charts for attribute data are designed to monitor processes where quality characteristics are evaluated through classification or counting rather than precise measurement, such as determining whether items are defective or nonconforming, or tallying the number of defects per unit. These charts are particularly useful where data is discrete and binary (conforming/nonconforming) or count-based, in contrast with charts for variables data, which handle continuous measurements. Developed as part of Walter Shewhart's foundational work on statistical process control, attribute charts assume underlying probability distributions, the binomial for proportions and the Poisson for counts, enabling the detection of shifts in process performance.

The p-chart monitors the proportion of defective or nonconforming units in a sample, making it suitable for processes where each item is inspected for acceptability. It is based on the binomial distribution, treating each inspected unit as an independent Bernoulli trial with probability p of being nonconforming. The center line is the average proportion \bar{p}, calculated as the total number of defectives divided by the total sample size across subgroups. Control limits are established at three standard deviations from the center line to account for common-cause variation: \text{UCL} = \bar{p} + 3 \sqrt{\frac{\bar{p}(1 - \bar{p})}{n}}, \quad \text{LCL} = \bar{p} - 3 \sqrt{\frac{\bar{p}(1 - \bar{p})}{n}} where n is the subgroup sample size, which may vary between subgroups. If the lower control limit falls below zero, it is set to zero. This chart handles varying sample sizes and helps identify increases in defect rates, such as in assembly line inspections.

The np-chart, closely related to the p-chart, tracks the actual number of defective units rather than the proportion, assuming a constant sample size across subgroups. It is also grounded in the binomial distribution, with the center line at n \bar{p}, where n is the fixed subgroup size and \bar{p} is the average proportion from historical data.
The control limits are: \text{UCL} = n \bar{p} + 3 \sqrt{n \bar{p} (1 - \bar{p})}, \quad \text{LCL} = n \bar{p} - 3 \sqrt{n \bar{p} (1 - \bar{p})} This chart simplifies interpretation when sample sizes are uniform, as points represent integer counts of defectives, and it is often preferred in fixed-inspection scenarios like batch testing, where the number of nonconformances directly indicates process issues.

For processes involving countable defects or nonconformities per inspection unit, such as scratches on a surface or errors in a document, the c-chart monitors the total number of defects in samples of constant size. Assuming a Poisson distribution for rare events, the center line is the average number of defects \bar{c}, and the three-sigma control limits are symmetric around it: \text{UCL} = \bar{c} + 3 \sqrt{\bar{c}}, \quad \text{LCL} = \bar{c} - 3 \sqrt{\bar{c}} The lower limit is set to zero if negative. This chart is ideal for fixed-unit inspections, like counting flaws in a fixed-length wire, and signals out-of-control conditions when defect counts deviate significantly, indicating special causes like tool wear.

When sample sizes vary or the focus is on defects per unit rather than total defects, the u-chart standardizes the data by plotting the average number of defects per unit. It follows a Poisson-based approach similar to the c-chart but adjusts for varying subgroup sizes n_i. The center line is \bar{u}, the overall average defects per unit, and the control limits for each subgroup are: \text{UCL}_i = \bar{u} + 3 \sqrt{\frac{\bar{u}}{n_i}}, \quad \text{LCL}_i = \bar{u} - 3 \sqrt{\frac{\bar{u}}{n_i}} with the lower limit floored at zero. This chart accommodates irregular sampling, such as in service processes where inspection units differ (e.g., errors per page in variable-length reports), providing a normalized view of defect rates over time.
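The p-chart and c-chart formulas above can be computed directly. A minimal sketch using hypothetical inspection counts, with the lower limit floored at zero as described:

```python
# Sketch of p-chart and c-chart limit calculations (hypothetical data).
import math

def p_chart_limits(defectives, sample_sizes):
    """Per-subgroup (LCL, CL, UCL) for the proportion nonconforming."""
    p_bar = sum(defectives) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), p_bar, p_bar + 3 * sigma))
    return limits

def c_chart_limits(defect_counts):
    """(LCL, CL, UCL) for total defects per fixed-size sample (Poisson)."""
    c_bar = sum(defect_counts) / len(defect_counts)
    spread = 3 * math.sqrt(c_bar)
    return (max(0.0, c_bar - spread), c_bar, c_bar + spread)

# Example: 3 samples of 100 units with 2, 3, and 1 defectives.
p_limits = p_chart_limits([2, 3, 1], [100, 100, 100])
# Example: defect counts of 4, 6, and 5 per inspected unit.
c_limits = c_chart_limits([4, 6, 5])
```

Note how small \bar{p} or \bar{c} values drive the computed lower limit below zero, triggering the floor at zero described in the text.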

Advanced and Specialized Charts

Advanced control charts extend traditional methods by incorporating cumulative or weighted historical data to detect small, persistent shifts in process parameters, offering greater sensitivity for complex monitoring scenarios where standard Shewhart charts may lag. These charts are particularly useful in processes exhibiting gradual drifts, multivariate interactions, or infrequent events, enabling earlier intervention to maintain quality.

The cumulative sum (CUSUM) chart accumulates deviations from a target value to signal subtle changes in the process mean, making it effective for detecting small shifts that might otherwise go unnoticed. Developed by E.S. Page in 1954, the CUSUM chart computes two one-sided statistics, an upper sum for increases and a lower sum for decreases, each truncated at zero. Standard parameters include a reference value k = 0.5\sigma, which targets shifts of one standard deviation, and a decision interval h = 5\sigma, balancing false alarms and detection speed with an in-control average run length around 370.

Similarly, the exponentially weighted moving average (EWMA) chart applies a geometrically decaying weighting to past observations, emphasizing recent data while retaining influence from earlier points to uncover small to moderate mean shifts. Introduced by S.W. Roberts in 1959, the EWMA statistic is updated as Z_t = \lambda x_t + (1 - \lambda) Z_{t-1}, where \lambda controls the weighting; values between 0.1 and 0.3 are typical, with lower \lambda enhancing sensitivity to smaller shifts over longer periods. Control limits are set at three standard deviations of the EWMA statistic from the target, providing quicker signals than Shewhart charts for drifts as small as 0.5 to 2 standard deviations.

For multivariate processes, the Hotelling's T^2 chart monitors multiple correlated variables simultaneously by measuring the squared statistical distance of each observation from the process mean vector, accounting for the covariance structure to detect shifts in any direction.
Applied to individual observations, it uses the statistic T^2 = (\mathbf{x} - \bar{\mathbf{x}})' \mathbf{S}^{-1} (\mathbf{x} - \bar{\mathbf{x}}), where \bar{\mathbf{x}} and \mathbf{S} are the mean vector and covariance matrix estimated from initial data, with upper control limits derived from the F distribution scaled by degrees of freedom. This approach, originating from Harold Hotelling's 1931 work on multivariate analysis, excels in settings like semiconductor manufacturing where variables such as thickness and defect rates interact.

Specialized charts address rare events, such as defects occurring less than once per subgroup on average. The G-chart, based on the geometric distribution, tracks the number of opportunities or time units between occurrences, plotting these intervals to signal increases in event frequency. Control limits are calculated using the reciprocal of the estimated event rate, making it suitable for healthcare monitoring, like days between infections, without requiring constant subgroup sizes.

Modern extensions include self-starting charts, which eliminate the need for a separate Phase I estimation by recursively updating parameters from incoming observations, ensuring monitoring can begin with the first data points. Douglas M. Hawkins proposed self-starting variants in 1987, using studentized residuals to adapt limits dynamically while maintaining statistical properties like an in-control average run length near 370; these have been extended to multivariate and EWMA frameworks for deployment in high-variability environments.
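The EWMA recursion described above can be sketched compactly; this illustration uses the exact time-varying variance of Z_t under independence, \text{Var}(Z_t) = \sigma^2 \frac{\lambda}{2-\lambda}\left(1 - (1-\lambda)^{2t}\right), with hypothetical target, sigma, and \lambda values:

```python
# Sketch of an EWMA chart: Z_t = lam*x_t + (1-lam)*Z_{t-1},
# with 3-sigma limits from the exact time-varying EWMA variance.
import math

def ewma_chart(xs, target, sigma, lam=0.2):
    """Return a list of (Z_t, LCL_t, UCL_t) for observations xs."""
    z = target  # conventional starting value Z_0 = target
    out = []
    for t, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        var = (sigma ** 2) * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t))
        half_width = 3 * math.sqrt(var)
        out.append((z, target - half_width, target + half_width))
    return out

# Hypothetical in-control data around target 0 with unit sigma.
points = ewma_chart([0.5, -0.2, 0.3, 0.8], target=0.0, sigma=1.0, lam=0.2)
```

As t grows, the limits widen toward the steady-state value \pm 3\sigma\sqrt{\lambda/(2-\lambda)}, which is the asymptotic form often quoted in texts.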

Applications and Performance Evaluation

Industrial and Modern Applications

Control charts have been extensively applied in traditional manufacturing sectors, particularly on automotive production lines, where they monitor critical dimensions and process variation to ensure part conformance and reduce defects. For instance, in automotive production, Shewhart control charts are used to track variables such as component tolerances during stamping and related forming operations, enabling early detection of special causes like machine wear or material inconsistencies.

In healthcare, control charts are employed to manage patient wait times by plotting metrics like average wait durations over time, helping identify systematic delays due to staffing or workflow issues rather than random variation. This approach has been implemented to stabilize wait times and improve service delivery, as demonstrated in monitoring studies in Brazilian healthcare facilities.

In modern applications, control charts integrate with Internet of Things (IoT) technologies for real-time monitoring in dynamic environments, allowing continuous data collection from sensors to update charts instantaneously and facilitate proactive adjustments. Post-2020 trends show machine learning enhancements improving anomaly detection in manufacturing, where learned models complement traditional limits by identifying subtle deviations in multivariate data streams, such as vibration or temperature readings from industrial equipment. For example, autoencoders combined with control charts have been applied to injection molding processes to predict failures earlier than conventional methods, reducing unplanned downtime in predictive maintenance settings.

Software tools play a key role in implementing control charts across these applications, with Minitab providing real-time SPC modules for automated chart generation and dashboards in cloud environments. JMP offers interactive visualization for exploratory analysis, supporting rapid prototyping of charts in quality improvement projects.
In open-source options, Python's statsmodels library enables customizable control chart construction, integrating seamlessly with data pipelines for large-scale industrial use. Case studies illustrate the impact of control charts in industrial implementations, such as a manufacturing firm that used X-bar and R charts to monitor process variation, resulting in over $150,000 in annual savings through reduced waste and improved consistency in quality checks. In Industry 4.0 contexts within smart factories, control charts are embedded in cyber-physical systems for digital process control, aligning with guidelines for smart manufacturing that emphasize integration with existing standards to enhance overall factory efficiency.

Metrics of Chart Effectiveness

The effectiveness of control charts is often quantified using operating characteristic (OC) curves, which plot the probability of detecting a shift (the power, or 1 − β) against the size of the shift in the process parameter, typically expressed in standard deviation units. These curves help evaluate a chart's sensitivity, showing that larger shifts are detected with higher probability, while small shifts may require larger sample sizes for adequate power. For example, in Shewhart charts, the OC curve indicates low power for shifts smaller than 2 standard deviations, necessitating supplementary run rules or alternative charts for improved detection.

Average run length (ARL) serves as a key metric for comparing chart performance, with in-control ARL (ARL₀) measuring false alarm frequency and out-of-control ARL (ARL₁) assessing detection speed. Standard 3-sigma Shewhart charts yield an ARL₀ of approximately 370 under normality assumptions, meaning a false alarm occurs about every 370 samples on average. In contrast, cumulative sum (CUSUM) charts excel at detecting small shifts; for a 1-sigma shift, CUSUM achieves an ARL₁ of around 10, far outperforming Shewhart's ARL₁ of about 44, due to its cumulative nature that amplifies subtle deviations over time. These ARL comparisons highlight CUSUM's superiority for shifts between 0.5 and 2 standard deviations, though Shewhart remains preferable for abrupt, large changes.

Control charts' power can be degraded by deviations from assumptions, particularly non-normality and autocorrelation, which inflate false alarm rates and distort ARL estimates. For non-normal distributions, such as skewed or heavy-tailed data, Shewhart charts like the individuals chart exhibit heightened sensitivity, producing false alarm rates 4 to 5 times higher than under normality, as the fixed limits fail to account for asymmetric tails. Autocorrelation, common in continuous processes, further biases performance by violating the independence assumption, leading to decreased ARL₀ (more frequent false alarms under positive autocorrelation) and potentially slower detection of true shifts unless residual-based adjustments are applied.
These effects underscore the need for robust modifications, like nonparametric charts, to maintain power in real-world scenarios. Empirical studies, such as those benchmarked in Montgomery's work, confirm these metrics while emphasizing practical adjustments for implementation. For instance, the 3-sigma Shewhart ARL₀ of 370 assumes ideal conditions, but simulations reveal that real-world factors like estimation error in Phase I reduce it to 200–300, requiring wider limits or larger reference samples for reliable performance. Montgomery's benchmarks also illustrate CUSUM's edge in ARL for small shifts across industries, yet advocate combined approaches (e.g., Shewhart-supplemented CUSUM) to balance sensitivity and simplicity, with empirical ARL validations showing 20–30% faster detection in real-world datasets after corrections.
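The ARL₀ figure of roughly 370 follows from the 0.27% two-sided tail probability of a 3-sigma limit under normality, and it can be checked by simulation. A minimal Monte Carlo sketch, assuming i.i.d. standard normal data and an individuals chart:

```python
# Monte Carlo sketch estimating the in-control ARL of a 3-sigma
# Shewhart individuals chart under i.i.d. N(0,1) data.
# Theoretical value: 1 / 0.0027 ≈ 370.
import random

def estimate_arl0(n_runs=2000, limit=3.0, seed=42):
    """Average number of points until the first false alarm, over n_runs trials."""
    rng = random.Random(seed)
    run_lengths = []
    for _ in range(n_runs):
        t = 0
        while True:
            t += 1
            if abs(rng.gauss(0.0, 1.0)) > limit:  # point outside +/- 3 sigma
                run_lengths.append(t)
                break
    return sum(run_lengths) / len(run_lengths)
```

With a few thousand trials the estimate settles near 370; repeating the experiment with heavy-tailed data in place of rng.gauss illustrates the inflated false alarm rates discussed above.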

Alternatives and Critical Perspectives

Alternative Statistical Methods

Process capability indices provide a static assessment of a process's ability to meet specification limits, serving as a complement to dynamic monitoring tools by quantifying potential and actual performance without relying on sequential charting. The index C_p measures potential capability assuming the process is centered, defined as C_p = \frac{USL - LSL}{6\sigma}, where USL and LSL are the upper and lower specification limits, and \sigma is the process standard deviation. In contrast, C_{pk} accounts for process centering and is calculated as C_{pk} = \min\left[ \frac{USL - \mu}{3\sigma}, \frac{\mu - LSL}{3\sigma} \right], where \mu is the process mean, offering a more realistic evaluation of short-term performance against specifications. These indices are particularly useful in initial process design and validation phases, enabling engineers to predict defect rates; for instance, a C_{pk} > 1.33 typically indicates adequate process capability in manufacturing.

Time series models, such as autoregressive integrated moving average (ARIMA) models, address limitations in traditional control charts when data exhibit autocorrelation, modeling temporal dependencies to forecast and monitor processes more accurately. ARIMA(p,d,q) combines autoregressive (p), differencing (d), and moving average (q) components to stationarize non-stationary series, allowing residuals to be analyzed for deviations indicative of process shifts. In SPC, ARIMA is applied to autocorrelated data by fitting the model to historical observations and using the resulting residuals in control procedures, which improves detection of out-of-control conditions compared to assuming independence. For example, in chemical processes with inherent lag effects, ARIMA-based monitoring has demonstrated superior sensitivity to mean shifts in simulated AR(1) data with correlation coefficients up to 0.8.
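The capability indices defined above reduce to two short formulas. A minimal sketch with hypothetical specification limits and process estimates:

```python
# Sketch of the C_p and C_pk capability indices (hypothetical values).

def cp(usl, lsl, sigma):
    """Potential capability, assuming the process is centered."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Actual capability, penalizing off-center processes."""
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Centered process: Cp equals Cpk.
centered = cpk(10.0, 4.0, 7.0, 0.5)      # equals cp(10.0, 4.0, 0.5)
# Off-center process: Cpk falls below Cp.
off_center = cpk(10.0, 4.0, 8.5, 0.5)
```

In the centered example both indices equal 2.0, comfortably above the 1.33 threshold cited in the text; shifting the mean to 8.5 drops C_{pk} to 1.0 while C_p is unchanged, showing why C_{pk} gives the more realistic assessment.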
Machine learning techniques for anomaly detection have emerged as powerful alternatives, particularly for complex, high-dimensional datasets where traditional methods falter, with adoption accelerating after 2020 due to advances in computational efficiency. Isolation forests, an ensemble method that isolates anomalies by randomly partitioning data points, excel in unsupervised settings by requiring no labeled training data and achieving near-linear time complexity, making them suitable for real-time process monitoring. In manufacturing applications, isolation forests have been integrated to detect subtle faults in multivariate data, outperforming classical charts in scenarios with non-normal distributions or rare events. Neural networks, including autoencoders and recurrent variants like LSTMs, further enhance detection by learning nonlinear patterns in sequential data; for instance, convolutional neural networks applied to sensor streams in fabrication processes have identified anomalies in post-2020 industrial trials. These methods are gaining traction in Industry 4.0 environments for their adaptability to streaming data and ability to handle high-dimensional inputs.

Hybrid approaches blend traditional and advanced techniques to monitor specialized data structures, such as spatial or profile variations, providing nuanced insights beyond univariate charts. Funnel charts, which display data points against varying control limits that narrow with increasing sample size, facilitate the comparison of performance across units or subgroups, commonly used in healthcare and public-service sectors to identify outliers in rates like infection incidences. Profile monitoring extends this by treating functional or spatial data as curves or surfaces, applying parametric or nonparametric models to detect shifts in shape or location; for example, in monitoring machined parts, phase II EWMA charts on model parameters have effectively signaled deviations in nonlinear relationships with average run lengths under 50 for small shifts.
For two-dimensional spatial count data, such as defect maps on wafers, spatial-structure models combined with multivariate charts account for spatial correlation, improving shift detection in high-density profiles. These hybrids are especially valuable in modern applications like semiconductor wafer fabrication, where spatial dependencies dominate.

Limitations and Criticisms

Control charts, while foundational to statistical process control, have been criticized for violating the likelihood principle through their reliance on frequentist p-value-like thresholds for signaling out-of-control conditions, which emphasize hypothetical repetitions over the evidential strength of the observed data. This approach can lead to decisions that depend on unobserved outcomes, undermining the sufficiency of the observed data for inference. Additionally, the average run length (ARL) metric, commonly used to evaluate chart performance, assumes data independence, but this fails under autocorrelation, where observations are serially correlated, resulting in inflated false alarms or reduced sensitivity to shifts. For instance, in processes with moderate positive autocorrelation, standard Shewhart charts exhibit ARLs that deviate significantly from nominal values, compromising reliability.

Practically, control charts overemphasize numeric or attribute data, often overlooking qualitative factors such as operator judgment, environmental conditions, or procedural nuances that influence variation but are not easily quantified or plotted. The rigid application of 3-sigma limits further ignores contextual specifics, like non-normal distributions or varying risk tolerances, potentially leading to inappropriate signaling in real-world settings. Prior to 2020, traditional control charts struggled with high-volume data streams due to computational demands and assumptions of stationarity, limiting their applicability in big-data environments; recent calls advocate machine learning augmentation, such as learned models for anomaly detection, to handle dynamic, high-dimensional streams more effectively.

Philosophically, W. Edwards Deming cautioned against over-reliance on control chart signals, arguing that misinterpreting common cause variation as special causes prompts tampering: unnecessary adjustments that amplify overall variation and destabilize processes.

References

  1. [1]
    Control Chart - Statistical Process Control Charts | ASQ
    ### Summary of Control Chart Setup Procedures (ASQ)
  2. [2]
    Understanding Control Charts and Concepts of Variation - JMP
    A control chart is a tool to determine whether a process is stable (ie in control) or out of control and in need of attention.
  3. [3]
    Walter A Shewhart, 1924, and the Hawthorne factory - PubMed Central
    Walter Shewhart described the first control chart which launched statistical process control and quality improvement.
  4. [4]
    The First Control Chart - The W. Edwards Deming Institute
    May 19, 2021 · Dr. Walter Shewhart developed the control chart in the 1920s while working for Bell Telephone. The text of the memo that including the image of the control ...
  5. [5]
    Walter Shewhart and the History of the Control Chart - 6sigma
    Nov 17, 2017 · The control chart is a tool developed by Walter Shewhart and extended significantly by various others through time which is meant to show whether processes in ...
  6. [6]
    Control Charts - Six Sigma Study Guide
    Four types of control charts exist for attribute data. P charts plot the proportion of defective items, and np charts are for the number of defectives. U charts ...
  7. [7]
    [PDF] Introduction to Statistical Process Control Charts - SAS Support
    Statistical Process Control Charts. (also referred to as Shewhart charts, or SPC charts) are a simple-to-use visual presentation of performance over time. While ...<|control11|><|separator|>
  8. [8]
    Statistical Process Control (SPC) Charts: Ultimate Guide [2025]
    Jul 12, 2024 · SPC charts, also known as control charts or Shewhart charts, are powerful graphical tools that enable us to study how a process changes over time.
  9. [9]
    6.3.1. What are Control Charts? - Information Technology Laboratory
    Control charts are used to routinely monitor quality. Depending on the number of process characteristics to be monitored, there are two basic types of control ...
  10. [10]
  11. [11]
  12. [12]
    Economic Control Of Quality Of Manufactured Product
    Jan 25, 2017 · Economic Control Of Quality Of Manufactured Product. by: Shewhart, W. A.. Publication date: 1923. Topics: North. Collection: digitallibraryindia ...
  13. [13]
    [PDF] A guide to creating and interpreting run and control charts
    Elements of a run chart. A run chart shows a measurement on the y-axis plotted over time (on the x-axis). A centre line (CL) is drawn at the median.
  14. [14]
    6.1.1. How did Statistical Quality Control Begin?
    He issued a memorandum on May 16, 1924 that featured a sketch of a modern control chart. Shewhart kept improving and working on this scheme, and in 1931 he ...
  15. [15]
    Full article: The 100th anniversary of the control chart
    Jan 17, 2024 · In May 1924, Walter A. Shewhart wrote a short technical memorandum for his supervisor at Bell Telephone Laboratories, George Edwards. This ...
  16. [16]
    6.3.2.1. Shewhart X-bar and R and S Control Charts
    This chart controls the process variability since the sample range is related to the process standard deviation.
  17. [17]
    Three Sigma Limits Statistical Calculation With Example - Investopedia
    Shewhart set three standard deviation (3-sigma) limits as a rational and economic guide to minimum economic loss. Around 99.7% of a controlled process occurs ...What Is a Three Sigma Limit? · Understanding Three Sigma...
  18. [18]
    Economic Control of Quality of Manufactured Product - Google Books
    Mar 9, 2018 · Author, Walter Andrew Shewhart ; Edition, reprint ; Publisher, D. Van Nostrand Company, Incorporated, 1931 ; Original from, the University of ...Missing: publication | Show results with:publication
  19. [19]
  20. [20]
    [PDF] What Happened in Japan? - The W. Edwards Deming Institute
    The purpose of this article is to offer some obser- vations on the causes of success in Japan, from the viewpoint of the statistical control of quality, with.Missing: revolution | Show results with:revolution
  21. [21]
    How was the Deming Prize Established
    A prize to commemorate Dr. Deming's contribution and friendship in a lasting way and to promote the continued development of quality control in Japan.Missing: charts revolution 1951
  22. [22]
    Using MIL-STD-105 As a Process Control Procedure | Quality Digest
    Oct 8, 2018 · The difference in philosophy highlights the point that MIL-STD-105 is actually a “process control” specification that integrates process ...
  23. [23]
  24. [24]
    Walter A. Shewhart and the Evolution of the Control Chart, 1917–1954
    Aug 6, 2025 · This study analyzes the factors that shaped Walter Shewhart's 1924 development of the control chart at Bell Telephone Laboratories.
  25. [25]
    ISO 22514-3:2020 - Statistical methods in process management
    2–5 day deliveryISO 22514-3:2020 describes short-term machine performance studies for repeatability, but not for tool wear or large data sets.Missing: 22514-3:2021 control charts digital
  26. [26]
    6.3.2. What are Variables Control Charts?
    Variables control charts, proposed by Shewhart, monitor a process's continuously varying quality, using control limits to determine if the process is in  ...
  27. [27]
    Three Sigma Limits and Control Charts - SPC for Excel
    Walter Shewhart is regarded as the “father of statistical quality control.” He developed the control chart almost 100 years ago. Control charts were described ...Introduction · Probability and Control Charts · Shewhart and The Origin of...Missing: source | Show results with:source
  28. [28]
    15.1 Control Charts – Introduction to Statistics – Second Edition
    Statistical process control procedures are similar to hypothesis testing. The null hypothesis is that the process is in-control and the alternative ...Missing: analogy | Show results with:analogy
  29. [29]
    Remember Rational Subgrouping? - SPC for Excel
    Rational subgrouping applies to all control charts. The within subgroup variation is used to filter out the normal variation on the subgroup averages.
  30. [30]
    Control Charts and the Central Limit Theorem - SPC for Excel
    This month's publication examines the relationship between control charts, the central limit theorem, three sigma limits, and the shape of the distribution.
  31. [31]
    [PDF] shewhart1.pdf - PQM-online
    Page 1. Page 2. Page 3. Page 4. Page 5. Page 6. Page 7. Page 8. Page 9. Page 10. Page 11. Page 12. Page 13. Page 14. Page 15. Page 16. Page 17. Page 18 ...
  32. [32]
    Common and Special Causes of Variation - Quality America
    Shewhart used the term chance cause, Dr. W. Edwards Deming coined the term common cause to describe the same phenomenon. Both terms are encountered in practice.
  33. [33]
    6.3.2.2. Individuals Control Charts
    Individuals control charts use individual measurements and a moving range to measure process variability. The moving range is the absolute difference between ...
  34. [34]
    3.4. Shewhart charts — Process Improvement using Data
    The defining characteristics of a Shewhart chart are: a target, upper and lower control limits (UCL and LCL). These action limits are defined so that no ...
  35. [35]
    6.3.1.2. Individuals Control Charts
    Another way to construct the individual chart is by using a standard deviation based on a historical data set (if available). Then we can obtain the chart from ...
  36. [36]
    Methods and formulas for estimating sigma for R Chart - Minitab
    Thus, if r is the range of a sample of N observations from a normal distribution with standard deviation = σ, then E(r) = d 2(N)σ. d 3(N) is the standard ...
  37. [37]
    [PDF] 6. Process or Product Monitoring and Control
    Jun 27, 2012 · Statistical Process Control (SPC). Typical process control techniques ... Before the control chart parameters are defined there is one.<|control11|><|separator|>
  38. [38]
    Characteristics of Shewhart Charts - SAS Help Center
    Dec 21, 2018 · Shewhart (1931) advocated selecting rational subgroups so that variation within subgroups ... subgroups, the control limits are typically ...Missing: original | Show results with:original
  39. [39]
    Statistical Process Control Charts: Sampling Frequency, Subgroups ...
    Correlated data produces too many false alarms. Of important note on the Shewhart +/- 3-sigma SPC chart is that 99.73% of the data is contained within this ...
  40. [40]
    Data Analysis Software | Statistical Software Package - Minitab
    Minitab Statistical Software can look at current and past data to discover trends, find and predict patterns, uncover hidden relationships between variables.Free Trial · Minitab Solution Center · What's New · Features
  41. [41]
    [PDF] Statistical Quality Control Handbook
    This Handbook, as indicated in the "Foreword," was prepared to assist Western Electric people in performing Western Electric Com- pany work.
  42. [42]
    The Shewhart Control Chart—Tests for Special Causes
    Feb 22, 2018 · (1984). The Shewhart Control Chart—Tests for Special Causes. Journal of Quality Technology: Vol. 16, No. 4, pp. 237-239.Missing: rules | Show results with:rules
  43. [43]
    Average Run Length (ARL) Tables - SigmaXL
    In a Shewhart Individuals Control Chart, ARL0 = 1/α = 1/(0.00135*2) = 370.4. On average, we will see a false alarm once every 370 observations. Note that this ...
  44. [44]
    [PDF] Exact Results for Shewhart Control Charts With Supplementary ...
    These rules may be stated in the following form: An out-of- control signal is given if k of the last m standardized sample means fall in the interval (a, b), ...
  45. [45]
    6.3.3. What are Attributes Control Charts?
    The Shewhart control chart plots quality characteristics that can be measured and expressed numerically. We measure weight, height, position, thickness, etc. If ...
  46. [46]
    Economic Control of Quality of Manufactured Product - Google Books
    Bibliographic information ; Author, Walter A. Shewhart ; Edition, reprint ; Publisher, American Society for Quality Control, 1980 ; ISBN, 0873890760, 9780873890762.
  47. [47]
    6.3.2.3. Cusum Control Charts
    In particular, analyzing ARL's for CUSUM control charts shows that they are better than Shewhart control charts when it is desired to detect shifts in the mean ...
  48. [48]
    Cumulative Sum Charts: Technometrics - Taylor & Francis Online
    Apr 30, 2012 · This paper, presented orally to the Gordon Research Conference on Statistics in Chemistry in July 1960, traces the development of process inspection schemes.
  49. [49]
    [PDF] Cumulative Sum (CUSUM) Charts - NCSS
    The control limits are chosen as plus or minus h. The usual choice for k is 0.5 (for detecting one-sigma shifts in the mean) and h is typically set to 5. 4 ...
  50. [50]
    6.3.2.4. EWMA Control Charts - Information Technology Laboratory
    The Exponentially Weighted Moving Average (EWMA) is a statistic for monitoring the process that averages the data in a way that gives less and less weight to ...Missing: paper HV
  51. [51]
    The Exponentially Weighted Moving Average Procedure for ... - NIH
    The EWMA procedure (Roberts, 1959) was proposed to detect mean changes across time. It monitors a real-time running estimate of the average in a control chart ...
  52. [52]
    6.3.4.1. Hotelling Control Charts - Information Technology Laboratory
    The Hotelling T 2 distance is a measure that accounts for the covariance structure of a multivariate normal distribution. It was proposed by Harold Hotelling in ...
  53. [53]
    Multivariate Control Charts: The Hotelling T2 Control Chart
    The T2 control chart is used to detect shifts in the mean of more than one interrelated variable. The data can be in subgroups (like the X-R control chart) or ...Introduction to the T Control... · Constructing a T Control Chart · Out of Control Points
  54. [54]
    Overview for G Chart - Minitab - Support
    A G chart monitors the number of opportunities or days between rare events, like infections or medication errors, without needing large amounts of data.
  55. [55]
    Self‐Starting Cusum Charts for Location and Scale - Hawkins - 1987
    Using some theoretical properties of independence of residuals, two pairs of cusums are set up: one testing for constancy of location of the process, and the ...
  56. [56]
    Self-Starting Multivariate Control Charts for Location and Scale
    Nov 21, 2017 · Dr. Hawkins is a Professor in the School of Statistics. He is an ASQ Fellow. His email address is dhawkins@umn.edu.
  57. [57]
    Application of Statistical Process Control in Automotive Manufacturing
    Sep 6, 2024 · IV.​​ Control charts are useful in the automobile industry for tracking the dimensions of important parts and making sure that deviations are ...
  58. [58]
    [PDF] Control Chart for Automotive Stamped Parts: a Case Study
    This study deploys the multivariate control charting scheme to monitor the quality of a manufactured part in a Malaysian- based automotive body and parts ...
  59. [59]
    The Contribution of Variable Control Charts to Quality Improvement ...
    Sep 10, 2021 · Variable control charts contribute to quality improvement in healthcare by enabling visualization and monitoring of variations and changes in healthcare ...
  60. [60]
    application of control charts for monitoring the wait ing time in a ...
    Aug 7, 2025 · The patient waiting time was monitored with Control Chart for Individual Measurements and Moving Range and then it was determined the capability ...
  61. [61]
    Integration of IoT-based electromechanical control charts for real ...
    Sep 9, 2025 · This study introduces an IoT-enabled intelligent monitoring framework that integrates Shewhart and CUSUM control charts to enhance quality ...
  62. [62]
    Autoencoder with Statistical Process Control Chart for Anomaly ...
    Applied to predictive maintenance of injection molds, their approach, combining a new training algorithm with isolation forest anomaly detection, demonstrates ...
  63. [63]
    [PDF] A Review of Artificial Intelligence Impacting Statistical Process ...
    Mar 4, 2025 · While these rules enhance process anomaly detection, increasing the number of detection rules raised the risk of false alarms, making operator ...
  64. [64]
    Real-Time SPC | Statistical Process Control Software - Minitab
    Execute immediate process control monitoring with Real-Time SPC Powered by Minitab. Use the statistical process control software made for delivering value.
  65. [65]
    Minitab vs. JMP: A Better Way to Analyze, Predict, and Optimize
    From predictive analytics to SPC tools, Minitab delivers a powerful, user-friendly platform trusted by engineers, analysts, and quality professionals worldwide.
  66. [66]
    Exploring Statistical Software: Features, Costs, and Flexibility
    Oct 3, 2025 · Below, we'll explore how seven major platforms—Minitab, JMP, SigmaMagic, SigmaXL, SPSS, Statgraphics, R with RStudio, and Python with SciPy/ ...
  67. [67]
    Control Charts and Employee Engagement Helped This Company ...
    Oct 29, 2023 · As in most Six Sigma training classes, the teaching of control charts ...
  68. [68]
    (PDF) Industry 4.0 and Smart Systems in Manufacturing: Guidelines ...
    Mar 12, 2024 · This study aims to provide a framework for the integration of smart statistical digital systems into existing manufacturing control systems, ...
  69. [69]
    Average Run Lengths and Operating Characteristic Curves
    Control charts are typically used in establishing whether a process is in a state of statistical control or not. The performance of any control chart is an ...
  70. [70]
    [PDF] Determining Sample Size for a Control Chart - DigitalCommons@USU
    The central line, upper control limit, lower control limit, initial sample size and subgroup size are important parts in computing control charts. In most ...
  71. [71]
    CUSUM ARL - SigmaXL
    The ARL1 for a small 1 sigma shift in mean is 6.35, which is faster than the 10.38 for FIR = 0. Now we will specify the CUSUM k parameter = 0.5 with a Shewhart ...
  72. [72]
    Cusum Charts Compared with Shewhart Charts - SAS Help Center
    A cusum chart is more efficient for detecting small shifts in the process mean, in particular, shifts of 0.5 to 2 standard deviations from the target mean ...
  73. [73]
    [PDF] Variables Control Charts - Support - Minitab
    Our simulation showed that the I chart is sensitive to nonnormal data. When the data are nonnormal, the I chart produces a false alarm rate that is 4 to 5 times ...
  74. [74]
    Robustness to non-normality and autocorrelation of individuals ...
    This paper studies the effects of non-normality and autocorrelation on the performances of various individuals control charts for monitoring the process mean ...
  75. [75]
    Robustness to non-normality and autocorrelation of individuals ...
    Aug 6, 2025 · This paper studies the effects of non-normality and autocorrelation on the performances of various individuals control charts for monitoring ...
  76. [76]
    6.1.6. What is Process Capability?
    Definitions of various process capability indices. C p = USL − LSL 6 σ C p k = min [ USL − μ 3 σ , μ − LSL 3 σ ]
  77. [77]
  78. [78]
    Applications of Control Charts Arima for Autocorrelated Data
    When a process follows an adaptable model, or when the process is a deterministic function, the data will be autocorrelated. Drawing the process of data is ...
  79. [79]
    [PDF] An Application of Filtering to Statistical Process Control
    In the first method, the basic idea is to model the autocorrelation structure in the original process using an autoregressive integrated moving average (ARIMA) ...
  80. [80]
    The GLRT for statistical process control of autocorrelated processes
    A wide range of ARIMA models are considered, with the conclusion that the best residual-based test to use depends on the particular ARIMA model used to describe ...
  81. [81]
    [PDF] A Review of Artificial Intelligence Impacting Statistical Process ...
    Mar 4, 2025 · Anomaly detection, tracing back to the 1960s (Grubbs 1969), identifies deviations from normal operations, including unknown failures. Unlike ...
  82. [82]
    Control chart-integrated machine learning models for incipient fault ...
    Jul 15, 2025 · The anomaly detection models proposed for this study are the one-class support vector machine and the isolation forest model. While the ...
  83. [83]
    Monitoring non-parametric profiles using adaptive EWMA control chart
    Aug 22, 2022 · This paper designs a novel control chart to monitor not only the regression parameters but also the variation of the profiles in Phase II applications using an ...
  84. [84]
    Control charts for monitoring two-dimensional spatial count data with ...
    Some studies on profile monitoring have considered autocorrelation within the profile. Maleki, Amiri, and Taheriyoun (2017) developed control charts by ...
  85. [85]
    [PDF] Statistical process control for monitoring nonlinear profiles - K-REx
    Three general approaches often used to implement SPC on profiles as quality characteristics include: the use of process parameters, the use of projected space, ...
  86. [86]
    Foundations of Statistical Quality Control - jstor
    (e.g. Nelson, 1982) violates the likelihood principle. Basu (1988) has ... The QMP chart is a control chart for analyzing defect rates. Quality rating ...
  87. [87]
    [PDF] The Likelihood Principle - Error Statistics Philosophy
    The Likelihood Principle. Author(s): James O. Berger, Robert L. Wolpert, M. J. Bayarri, M. H. DeGroot, Bruce M. Hill,. David A. Lane and Lucien LeCam. Source ...
  88. [88]
    Effects of autocorrelation on control chart performance
    Autocorrelation makes standard control chart limits inappropriate, as they are based on independence assumptions, and has a serious impact on quality.
  89. [89]
    Control charts for monitoring processes with autocorrelated data
    Performance criteria which are related to ARL measures are insufficient and inappropriate in the case of concurrent pattern identification. Finally, this ...
  90. [90]
    Statistical Process Control 101: The Problem with Tampering
    The funnel represents the process, the marble drop location is the feature being produced, and the target is the customer specification.
  91. [91]
    (PDF) Run Length Variability and Three Sigma Control Limits
    Aug 7, 2025 · This article provides an argument against the use of three sigma limits for control charts. An alternative proposal to use control limits ...
  92. [92]
    Impact of Process Tampering on Variation (Experiment)
    Dr. Deming demonstrated the impact of tampering using his well-known funnel experiment. Examples of tampering abound: from continuously adjusting machine ...