
Laboratory quality control

Laboratory quality control (QC) refers to the set of procedures and practices within a laboratory's quality management system designed to monitor the accuracy, precision, and reliability of test results, particularly during the analytical phase of testing, in order to detect, evaluate, and correct errors before patient or client reports are issued. It ensures that laboratory outputs meet established standards for timeliness and validity, thereby supporting informed decision-making in healthcare, research, and industrial applications.

A core element of laboratory QC is internal quality control (IQC), which involves the routine use of control samples with known analyte values to verify the performance of analytical instruments, reagents, and operators on a daily basis. IQC employs statistical tools such as Levey-Jennings control charts and Westgard rules to identify systematic or random errors, allowing laboratories to maintain process stability and take corrective actions promptly. Complementing IQC is external quality assessment (EQA), also known as proficiency testing, in which laboratories receive blind samples from an external provider and compare their results against peer laboratories or reference values to assess overall comparability and identify biases. These combined approaches form the backbone of QC, with EQA providing an independent benchmark for laboratory competence.

The implementation of laboratory QC is mandated by international and national regulations to safeguard patient safety and ensure reliable results, for example under the Clinical Laboratory Improvement Amendments (CLIA) in the United States, which require ongoing monitoring of testing accuracy and precision. Similarly, the ISO 15189 standard treats QC as integral to quality management, requiring laboratories to define strategies for control materials, frequency of testing, and continuous evaluation to minimize the risk of erroneous results that could affect diagnosis, treatment, or patient outcomes. Effective QC programs also incorporate staff training, equipment maintenance, and documentation practices to foster a culture of quality, ultimately enhancing trust in laboratory services across clinical, pharmaceutical, and environmental testing domains.

Fundamentals

Definition and Scope

Laboratory quality control encompasses the systematic processes and procedures implemented in laboratories to monitor, evaluate, and maintain the reliability and accuracy of analytical results throughout the testing process. This involves the use of statistical methods and control materials to detect and mitigate errors or variations in laboratory operations before results are released, ensuring that outputs meet predefined standards of quality.

The scope of laboratory quality control applies broadly across diverse settings: clinical laboratories, where it supports diagnostic testing such as assays for patient blood analysis; analytical laboratories focused on environmental or chemical testing, where it verifies regulatory compliance; research laboratories developing and validating new analytical methods; and industrial laboratories ensuring the consistency of product testing in manufacturing processes. In clinical settings, for instance, quality control safeguards patient care by minimizing diagnostic errors, while in environmental testing it upholds standards for pollutant detection to protect public health.

Historically, laboratory quality control emerged in the mid-20th century, with its foundational principles rooted in the application of statistical process control to biological data during the 1950s; a seminal contribution was the 1950 introduction of control charts by Levey and Jennings, which enabled ongoing monitoring of analytical performance. This evolution built on earlier industrial quality practices but adapted them specifically for laboratory environments to address variability in biological and chemical analyses.

At its core, laboratory quality control comprises four key components: planning, which involves establishing quality objectives and selecting appropriate control measures; implementation, in which standardized procedures and materials are applied in daily operations; monitoring, through which performance is continuously assessed using tools like control charts; and corrective action, taken to investigate and resolve identified deviations and restore system reliability.

Importance in Laboratory Operations

Laboratory quality control (QC) plays a pivotal role in safeguarding patient safety by minimizing errors that could lead to misdiagnosis and subsequent harm. Inadequate QC practices contribute significantly to diagnostic inaccuracies: pre-analytical errors, such as improper sample collection or handling, account for 60-70% of all laboratory errors and often result in delayed or incorrect treatments that exacerbate patient conditions. False positives in screening tests, sometimes arising from laboratory contamination or processing mistakes, can trigger unnecessary biopsies, emotional distress, and invasive procedures for patients who do not have the disease. Such errors not only endanger lives but also incur substantial financial losses; inappropriate laboratory testing alone was estimated to generate between $1.95 billion and $3.28 billion in excess expenses in 2019 through repeat tests and unwarranted interventions. Furthermore, QC failures expose laboratories to legal liability, including lawsuits when diagnostic errors lead to adverse outcomes, as in cases where mislabeled specimens result in wrongful treatments.

Beyond immediate risks, robust QC enhances operational efficiency and fosters the trust in laboratory results that is essential for clinical decision-making. Effective QC reduces error rates, leading to faster turnaround times for test results, which improves service quality and enables timely medical interventions. Error reduction also yields cost savings by avoiding retesting and mitigating downstream healthcare expenditures, thereby optimizing resource use in laboratory workflows. By ensuring reliable data, QC bolsters clinician confidence in using laboratory outputs for diagnosis and treatment planning, ultimately contributing to better outcomes and reduced variability in care delivery.

QC is integral to total laboratory quality management systems (LQMS), which prioritize prevention over reactive correction to maintain consistent performance across all testing phases. LQMS frameworks integrate QC into broader processes, including personnel training and equipment management, to proactively identify and mitigate risks before they affect results. This preventive approach aligns with established quality strategies that emphasize ongoing monitoring and validation to sustain high standards, distinguishing modern operations from error-prone, purely corrective practices.

Quality Control Procedures

Internal Quality Control

Internal quality control (IQC) refers to the systematic processes implemented within a laboratory to monitor the ongoing accuracy and precision of analytical measurements by incorporating control samples with established target values into routine testing workflows. These materials, typically representing low, normal, and high concentration levels relevant to medical decision points, are analyzed concurrently with patient specimens to detect deviations in performance that could affect result reliability. The primary components of IQC include the selection of appropriate control materials, a defined measurement frequency, statistical evaluation rules, and predefined thresholds for acceptability, all designed to ensure consistent analytical quality without unnecessarily interrupting testing operations.

IQC protocols generally require running controls at least once per analytical run, or daily for high-volume tests, with laboratories tailoring frequency to instrument stability, test complexity, and risk assessment in order to balance detection sensitivity and operational efficiency. Acceptance criteria are evaluated using multirule systems such as the Westgard rules, which apply statistical patterns to flag potential errors; for instance, a single control value exceeding the mean by 3 standard deviations (1_{3s}), or two consecutive values exceeding 2 standard deviations on the same side of the mean (2_{2s}), triggers rejection of the run to prevent erroneous reporting. These rules, originally developed for efficient error detection in quantitative assays, combine warning and rejection limits to minimize false alarms while identifying both random errors and systematic shifts.

Control materials for IQC fall into two main categories: commercial products, which are pre-formulated by manufacturers or third-party vendors with assigned values and stabilizers for broad coverage, and in-house preparations, such as pooled specimen aliquots customized to specific needs but requiring validation for stability and commutability. Commercial controls offer convenience and consistency but may introduce matrix effects that differ from routine samples, whereas in-house options can better mimic patient specimens at lower cost, though they demand rigorous characterization to establish target values and stability. Laboratories must select controls that span the analytical measurement range, ideally including at least two levels per run, to monitor performance effectively across clinically relevant concentrations.

Shifts, trends, or drifts in IQC data, which indicate gradual changes in instrument calibration or performance, are identified through sequential plotting and statistical analysis, prompting adjustments such as recalculating control limits from at least 20 consecutive measurements once stability is restored. For example, a persistent upward trend might signal reagent deterioration, while abrupt shifts could arise from environmental factors; both require immediate investigation to differentiate random variation from true out-of-control events. Visualization tools like Levey-Jennings charts facilitate this detection by graphing control results against mean and standard deviation limits.

When IQC fails, corrective actions prioritize result integrity by halting reporting of affected results, followed by targeted troubleshooting such as instrument recalibration, reagent replacement, or preventive maintenance to address the root cause. Laboratories then verify the fix through repeat QC runs until acceptability is confirmed, and may retrospectively retest specimens analyzed since the last acceptable run to mitigate any impact on clinical decisions. Documentation of all actions, including the failure investigation and its resolution, is essential for compliance and continuous improvement, ensuring traceability and informing future preventive strategies.
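The two rejection rules described above can be expressed compactly in code. The following is a minimal sketch, not a production QC system: the function name, the example analyte, and the mean/SD figures are illustrative assumptions, and only the 1_{3s} and 2_{2s} rules from the text are implemented.

```python
"""Minimal sketch of two Westgard rules (1_3s, 2_2s) described above.
Assumes the control mean and SD were established from >= 20 stable runs;
names and example values are illustrative, not from any specific system."""

def westgard_flags(values, mean, sd):
    """Return (index, rule) violations for a sequence of control results."""
    z = [(v - mean) / sd for v in values]
    flags = []
    for i, zi in enumerate(z):
        # 1_3s: a single control value beyond +/-3 SD -> reject run
        if abs(zi) > 3:
            flags.append((i, "1_3s reject"))
        # 2_2s: two consecutive values beyond 2 SD on the same side -> reject
        if i > 0 and abs(zi) > 2 and abs(z[i - 1]) > 2 and zi * z[i - 1] > 0:
            flags.append((i, "2_2s reject"))
    return flags

# Hypothetical glucose control with mean 100 mg/dL, SD 2 mg/dL
results = [101.5, 99.8, 104.3, 104.8, 100.2, 107.1]
for idx, rule in westgard_flags(results, mean=100.0, sd=2.0):
    print(f"run {idx}: {rule}")  # flags a 2_2s pair and a 1_3s outlier
```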

External Quality Assessment

External quality assessment (EQA), also known as proficiency testing (PT), is a systematic process in which laboratories receive blinded samples from an external provider to evaluate their testing performance against peer laboratories or reference standards. This approach enables inter-laboratory comparison to ensure the accuracy and reliability of results. Organizations such as the College of American Pathologists (CAP) and the World Health Organization (WHO) administer these programs, distributing samples that mimic routine specimens to detect systematic errors or biases not identifiable through internal processes alone.

The process begins with the preparation and distribution of stable, commutable samples (often lyophilized human or bovine sera) by the EQA provider, formulated to reflect real specimen matrices without identifiable characteristics. Laboratories treat these samples as routine specimens, performing analyses with standard methods and personnel, then submit results within specified deadlines, typically via online portals. Results are evaluated against target values derived from peer-group consensus or assigned references, using statistical metrics such as z-scores, where a participant's deviation is calculated as z = \frac{x - \mu}{\sigma} (with x the participant's result, \mu the target mean, and \sigma the standard deviation); scores outside ±2 are flagged as potential issues. Peer-group comparisons follow, with feedback reports detailing individual performance, trends, and educational insights to guide corrective actions.

EQA programs offer significant benefits by benchmarking laboratory performance, identifying method-specific biases, and supporting continuous improvement, which enhances result reliability and comparability across laboratory networks. They complement internal quality control by revealing inter-laboratory variation and promoting best practices through educational resources. However, limitations include dependence on the accuracy of participant submissions, potential matrix effects from processed EQA materials that may not fully replicate routine testing scenarios, and a focus primarily on the analytical phase, overlooking pre- and post-analytical errors. Additionally, while EQA maintains quality at established levels, it may not drive substantial improvement beyond those thresholds without integrated internal monitoring.

Participation in EQA is typically scheduled three to four times per year to provide regular feedback, with more frequent cycles possible for high-risk analytes. For accreditation under standards such as ISO 15189 or CLIA, enrollment is mandatory, requiring successful participation in proficiency testing events as defined by the applicable standards.
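As a quick illustration of the z-score evaluation defined above, the short sketch below computes z = (x − μ)/σ and flags |z| > 2 for review; the analyte and target values are invented for demonstration.

```python
"""Sketch of the EQA z-score evaluation described above, flagging |z| > 2.
The cholesterol target values are hypothetical."""

def eqa_z_score(result, target_mean, target_sd):
    return (result - target_mean) / target_sd

# Hypothetical cholesterol EQA sample: peer target 5.2 mmol/L, SD 0.2 mmol/L
z = eqa_z_score(result=5.7, target_mean=5.2, target_sd=0.2)
print(f"z = {z:.2f}", "-> review" if abs(z) > 2 else "-> acceptable")  # z = 2.50
```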

Monitoring Tools

Control Charts Overview

Control charts are graphical statistical tools employed in laboratory quality control to monitor the stability of analytical processes over time by plotting data against established limits, enabling the differentiation between common cause variation (random, inherent fluctuations expected in a stable process) and special cause variation, which signals assignable, non-random shifts requiring corrective action. Developed by Walter A. Shewhart in 1924 at Bell Laboratories, these charts provide a visual means to assess whether an analytical system remains in a state of statistical control, thereby supporting timely detection of process deviations that could compromise result reliability. Their primary purpose in laboratories is to ensure consistent performance of analytical methods, reducing the risk of erroneous patient results through proactive identification of instability.

Several types of control charts are used in laboratory settings, each suited to detecting specific patterns of variation. Shewhart control charts, the foundational type named after their inventor, are effective for monitoring individual measurements or small subgroups and detect larger shifts in process means or variances. For smaller, gradual shifts, cumulative sum (CUSUM) charts accumulate deviations from the target mean, offering greater sensitivity to sustained changes than Shewhart charts. Exponentially weighted moving average (EWMA) charts, in contrast, emphasize recent data points through a weighting factor, making them particularly useful for identifying trends or small drifts in laboratory processes.

The basic construction of a control chart involves plotting sequential data points with time or run order on the x-axis. A central line represents the process mean, calculated from an initial stable period of data, while upper and lower control limits are typically set at ±3 standard deviations from the mean to encompass approximately 99.7% of expected variation under stable conditions. These limits are derived empirically from the laboratory's own control material data rather than theoretical specifications, ensuring relevance to the specific analytical system. Points falling outside the limits, or exhibiting non-random patterns such as runs or trends, indicate potential special causes warranting investigation.

In laboratory applications, control charts are routinely used to track the stability of measurements, such as glucose concentrations in control samples or hemoglobin levels in hematology assays, allowing technicians to verify instrument and reagent performance before reporting patient results. This ongoing surveillance helps maintain the precision and accuracy of diagnostic testing, with charts updated daily or per run to reflect real-time process behavior. In clinical laboratories, variants like the Levey-Jennings chart adapt these principles specifically for internal quality control data.
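To make the EWMA idea concrete, here is a minimal sketch of the recursion e_i = λ·x_i + (1 − λ)·e_{i−1} that gives recent points the most weight; the weighting factor λ = 0.2, the starting value, and the drifting control series are assumptions for demonstration, not recommended settings.

```python
"""Illustrative EWMA statistic for small-drift detection, as described above.
The weighting factor and data are demonstration assumptions."""

def ewma(series, lam=0.2, start=None):
    e = series[0] if start is None else start
    out = []
    for x in series:
        e = lam * x + (1 - lam) * e  # recent points weighted most heavily
        out.append(e)
    return out

controls = [100.1, 99.8, 100.4, 100.9, 101.2, 101.6, 101.9]  # slow upward drift
for i, e in enumerate(ewma(controls, start=100.0)):
    print(f"run {i}: EWMA = {e:.2f}")  # statistic climbs steadily with the drift
```

In a full EWMA chart, the statistic would be compared against control limits narrower than the Shewhart ±3 SD bounds, which is what gives the chart its sensitivity to sustained small shifts.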

Levey-Jennings Chart

The Levey-Jennings chart, a graphical tool for monitoring quality control results, was developed in 1950 by Stanley Levey and E.R. Jennings specifically for clinical laboratory applications. The chart adapts Shewhart control chart principles to plot individual quality control (QC) measurements from reference samples, enabling laboratories to detect variations in analytical performance over time. It remains a foundational method in clinical laboratories for ensuring the reliability of test results.

Construction of a Levey-Jennings chart involves plotting daily or per-run QC values on the y-axis against sequential runs or dates on the x-axis. The chart features a central horizontal line representing the target mean, calculated from an initial set of at least 20-30 stable QC measurements, along with parallel lines at ±1 standard deviation (SD), ±2 SD, and ±3 SD from the mean. These SD lines are derived from the same initial data set and provide visual boundaries for acceptable variation based on the normal distribution of QC results.

Interpretation of the Levey-Jennings chart typically employs the Westgard multirule system, introduced in 1981, which combines multiple statistical rules to distinguish between random and systematic errors while minimizing false rejections. Key rules include 1_{3s} (a single QC value exceeding ±3 SD, indicating a possible random error such as an outlier), 2_{2s} (two consecutive values exceeding ±2 SD on the same side of the mean, suggesting a systematic shift), and R_{4s} (the range between consecutive values exceeding 4 SD, flagging potential random error in paired controls). For example, a single point beyond the ±3 SD line might signal random error from instrument noise, prompting immediate investigation, whereas a series of points trending above the mean could indicate systematic error due to reagent deterioration, requiring corrective action before patient testing resumes.
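The construction steps above translate directly into a short plotting script. This is a minimal sketch assuming matplotlib is available; the baseline of 20 stable values, the subsequent QC series, and the styling are all illustrative.

```python
"""Minimal Levey-Jennings chart sketch per the construction described above:
QC values vs. run number, with mean and +/-1, 2, 3 SD lines from a baseline
set of >= 20 measurements. Data and styling are illustrative."""
import statistics
import matplotlib.pyplot as plt

baseline = [4.0, 4.1, 3.9, 4.2, 4.0, 3.8, 4.1, 4.0, 3.9, 4.1,
            4.0, 4.2, 3.9, 4.0, 4.1, 3.8, 4.0, 4.1, 3.9, 4.0]  # 20 stable runs
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

qc = [4.0, 4.1, 4.3, 3.9, 4.4, 4.5, 4.2]  # subsequent daily QC values
plt.plot(range(1, len(qc) + 1), qc, marker="o")
plt.axhline(mean, color="green", label="mean")
for k, color in ((1, "gold"), (2, "orange"), (3, "red")):
    plt.axhline(mean + k * sd, color=color, linestyle="--")
    plt.axhline(mean - k * sd, color=color, linestyle="--")
plt.xlabel("Run number")
plt.ylabel("Control value")
plt.title("Levey-Jennings chart (illustrative)")
plt.legend()
plt.show()
```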

Validation and Verification

Method Validation

Method validation in laboratory quality control is the systematic process of confirming that a new or modified analytical method is suitable for its intended purpose prior to routine implementation, encompassing assessments of key performance characteristics such as accuracy, precision, linearity, and reportable range. According to CLSI guidelines, this ensures the method meets predefined criteria for reliability and robustness in clinical or research settings. Similarly, ISO 15189:2022 specifies that laboratories must validate examination procedures, particularly when developing or significantly altering methods, producing objective evidence that performance is fit for use. This one-time comprehensive study distinguishes validation from ongoing quality control, focusing on establishing baseline method reliability.

Key parameters evaluated during method validation include precision, assessed via within-run (repeatability) and between-run (intermediate precision) coefficients of variation (CV%), with typical targets below 5-10% depending on the analyte and medical decision point. Accuracy is examined through recovery studies, in which known amounts of analyte are added to samples and the percentage recovered is measured, ideally 95-105% for most assays. Interference testing screens for potential disruption by substances such as hemoglobin, lipids, or drugs, using protocols such as CLSI EP07 to quantify effects via paired-difference analysis. Linearity confirms a proportional response across the analytical measurement range, while the reportable range defines the clinically relevant span, often verified with dilutions or spikes.

The validation process begins with experimental planning, outlining sample types, concentrations (e.g., low, normal, high), number of replicates (typically 20-40 per level), and acceptance criteria, as recommended by the CLSI EP series documents. Data collection spans multiple days and operators to capture variability, with at least 5-20 runs for precision estimates per CLSI EP05-A3. Statistical analysis follows, including calculation of means, standard deviations, and CV%, regression for linearity, and t-tests or regression-based comparison for bias assessment against a reference method per CLSI EP09-A3.

Acceptance criteria require that the total analytical error (TE = |bias| + 1.96 × imprecision) not exceed the allowable total error (TEa) derived from biological variation data, such as those in the EFLM database; TEa for an analyte like glucose might be set at 7.1% based on within-subject variation. For example, if biological variation yields a desirable imprecision goal of CV < 2.5% and a bias goal of < 2.6%, the method passes if total error remains within these limits across tested conditions. Failure prompts method refinement or rejection, ensuring patient safety and diagnostic utility.
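The acceptance check above amounts to a one-line calculation. The sketch below works through it with the TEa of 7.1% quoted for glucose; the bias and CV figures are invented inputs consistent with the text's example goals.

```python
"""Worked example of the acceptance check described above:
TE = |bias| + 1.96 * CV must not exceed the allowable total error (TEa).
The bias and CV inputs are illustrative."""

def total_error(bias_pct, cv_pct):
    return abs(bias_pct) + 1.96 * cv_pct

bias, cv, tea = 1.0, 2.0, 7.1  # percent; TEa per the glucose example above
te = total_error(bias, cv)
print(f"TE = {te:.2f}% vs TEa = {tea}% ->", "pass" if te <= tea else "fail")
# TE = 4.92% <= 7.1% -> the method passes this criterion
```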

Method Verification

Method verification serves as an abbreviated validation process for laboratories implementing quantitative measurement procedures that have already been validated by the manufacturer, focusing on confirming that the method performs acceptably under the laboratory's specific conditions. The process is commonly guided by CLSI EP15-A3, which provides a protocol to verify precision and estimate bias (a measure of trueness) in a single experiment, typically completed over five working days using two or three quality control materials at different concentrations. The protocol involves running multiple replicates (e.g., five per day) to assess within-run and between-run imprecision, ensuring the laboratory's results align with the manufacturer's claims, such as coefficient of variation limits.

Key parameters evaluated include precision, trueness through spiking experiments, and comparability to a reference method. Precision is determined by calculating repeatability and reproducibility from replicate measurements, while trueness is assessed by comparing results from spiked samples or certified reference materials against expected values to estimate bias. For method comparison, patient samples are analyzed on both the new method and a reference procedure, with linear regression used to evaluate agreement; the regression line's slope and intercept indicate proportional and constant bias, respectively. Unlike the full method validation required for new or modified methods (described above), verification emphasizes these targeted checks for transferred assays.

Verification is conducted at initial implementation of the method, following major changes such as new reagent lots, instrument upgrades, or significant environmental shifts, and periodically, such as annually, to confirm sustained performance. Statistical acceptance criteria for method comparison studies typically require a slope between 0.95 and 1.05 (indicating minimal proportional error) and an intercept near zero (indicating negligible constant bias), with confidence intervals confirming the method's equivalence to the reference. These criteria help laboratories confirm that systematic errors remain within clinically acceptable limits, supporting reliable routine use.
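The slope/intercept check above can be sketched as follows. This uses ordinary least squares for brevity; real comparison studies often use Deming or Passing-Bablok regression with confidence intervals, and the paired results and intercept cutoff here are invented for illustration.

```python
"""Sketch of the method-comparison acceptance check described above:
slope in [0.95, 1.05] and intercept near zero. OLS is used for simplicity;
the paired data and the intercept cutoff are assumptions."""

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

reference = [2.1, 3.4, 5.0, 6.8, 8.9, 10.2]   # comparative method results
candidate = [2.2, 3.5, 5.1, 6.9, 9.0, 10.4]   # new method results
slope, intercept = ols(reference, candidate)
ok = 0.95 <= slope <= 1.05 and abs(intercept) < 0.2  # illustrative cutoff
print(f"slope = {slope:.3f}, intercept = {intercept:.3f} ->",
      "accept" if ok else "investigate")
```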

Performance Metrics

Analytical Sensitivity and Specificity

Analytical sensitivity in laboratory testing refers to the ability of an assay to detect low concentrations of the target analyte, often quantified as the limit of detection (LoD), the lowest concentration reliably distinguishable from background noise. Under CLSI EP17, the LoD is calculated from the limit of blank (LoB) plus a multiple of the standard deviation of low-concentration samples, establishing the assay's capability to identify true positives at minimal levels. Analytical specificity, conversely, measures the assay's ability to selectively detect the intended analyte without interference from similar substances or cross-reacting materials, and is evaluated through interference and cross-reactivity studies.

In laboratory quality control, these analytical performance characteristics are distinct from diagnostic metrics. For qualitative laboratory tests, diagnostic sensitivity and specificity assess the test's ability to correctly identify diseased and non-diseased individuals, respectively, using a 2x2 contingency table that categorizes results by true disease status and test outcome. Diagnostic sensitivity, also known as the true positive rate, is calculated as the proportion of true positives (TP) among all actual positives: \text{Sensitivity} = \frac{\text{TP}}{\text{TP} + \text{FN}} where FN represents false negatives. Diagnostic specificity, or the true negative rate, is the proportion of true negatives (TN) among all actual negatives: \text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}} with FP denoting false positives. These metrics characterize a test's discriminatory power independent of disease prevalence.
|               | Disease Present | Disease Absent |
|---------------|-----------------|----------------|
| Test Positive | TP              | FP             |
| Test Negative | FN              | TN             |
For example, consider a diagnostic assay for a pathogen where 100 samples from infected individuals yield 90 positives (TP = 90, FN = 10), and 100 samples from uninfected individuals yield 95 negatives (TN = 95, FP = 5); the sensitivity is 90% and the specificity 95%. Calculations of this kind, derived from the contingency table, guide verification of qualitative test performance under protocols such as CLSI EP12.

Several factors influence diagnostic sensitivity and specificity, including the selection of decision thresholds or cutoffs, which create trade-offs between detecting true positives and avoiding false positives. Disease prevalence in the tested population does not alter these intrinsic metrics but does affect predictive values, underscoring their role in quality control rather than clinical interpretation. Additionally, analytical sensitivity is closely tied to the limit of detection: suboptimal assay design or matrix effects can elevate the LoD, reducing sensitivity for low-abundance analytes.

To optimize thresholds, receiver operating characteristic (ROC) curves plot sensitivity against 1 − specificity across varying cutoffs, with the area under the curve (AUC) quantifying overall test performance; an AUC near 1 indicates excellent discrimination. Likelihood ratios further enhance clinical utility by combining sensitivity and specificity: the positive likelihood ratio, LR+ = sensitivity / (1 − specificity), indicates how much a positive result raises the probability of disease, while the negative likelihood ratio, LR− = (1 − sensitivity) / specificity, shows the effect of a negative result. These tools, recommended in CLSI evaluation guidance, aid in selecting assays that balance detection reliability with minimal errors in laboratory settings.
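The following sketch recomputes the worked example above from the 2x2 counts and adds the likelihood ratios just defined; the counts come directly from the example, and the function name is arbitrary.

```python
"""Recomputes the worked example above (TP=90, FN=10, TN=95, FP=5)
and the likelihood ratios defined in the text."""

def diagnostic_metrics(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)          # LR+ = sensitivity / (1 - specificity)
    lr_neg = (1 - sens) / spec          # LR- = (1 - sensitivity) / specificity
    return sens, spec, lr_pos, lr_neg

sens, spec, lrp, lrn = diagnostic_metrics(tp=90, fn=10, tn=95, fp=5)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
print(f"LR+ = {lrp:.1f}, LR- = {lrn:.2f}")  # LR+ = 18.0, LR- = 0.11
```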

Precision and Accuracy

In laboratory quality control, precision refers to the closeness of agreement between independent measurements of the same quantity under specified conditions, reflecting the reproducibility of results. It is inversely related to imprecision, which is quantified using the standard deviation (SD) of replicate measurements or the coefficient of variation (CV), calculated as CV = (SD / mean) × 100%. Imprecision arises from random errors in the analytical process, such as variations in pipetting or instrument noise, and is assessed through repeated analyses of homogeneous samples to ensure consistent performance across runs.

Accuracy, in contrast, measures the closeness of the average measurement to the true or accepted value and is primarily affected by systematic bias, defined as the difference between the observed mean and the true value. Bias can stem from factors such as reagent instability or incomplete reactions and is evaluated using certified reference materials (CRMs), stable substances with certified concentrations traceable to international standards. By comparing laboratory results against CRM values, analysts detect and quantify bias to verify method trueness.

A key metric integrating precision and accuracy is total analytical error (TE), which estimates the maximum expected error in a single measurement by combining bias and imprecision. It is commonly expressed as \text{TE} = |\text{bias}| + 1.96 \times \text{imprecision}, where imprecision is expressed as the CV for percentage-based errors (at 95% confidence). In practice, TE informs sigma metrics, where sigma = (TEa − |bias|) / CV and TEa is the allowable total error based on clinical needs; for example, a sigma value exceeding 4 indicates acceptable performance for analytes such as sodium in clinical laboratories, while values below 3 signal the need for improvement.

To enhance precision and accuracy, laboratories employ instrument calibration, which adjusts instrument responses against reference standards to minimize systematic error and ensure traceability. Regular calibration, often daily or per manufacturer guidelines, reduces deviations and maintains agreement with reference standards.
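The sigma-metric formula above is simple enough to show as a worked calculation; the TEa, bias, and CV figures below are illustrative inputs, not published specifications for any analyte.

```python
"""Sketch of the sigma-metric calculation described above:
sigma = (TEa - |bias|) / CV, all terms in percent. Inputs are illustrative."""

def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

sigma = sigma_metric(tea_pct=5.0, bias_pct=0.5, cv_pct=1.0)
print(f"sigma = {sigma:.1f} ->",
      "acceptable" if sigma >= 4 else "needs improvement")  # sigma = 4.5
```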

Regulatory Standards

Accreditation and Compliance

Laboratory accreditation and compliance ensure that testing facilities meet established standards for quality, competence, and reliability in producing accurate results. The International Organization for Standardization (ISO) provides key frameworks, such as ISO 15189:2022, which specifies requirements for quality and competence tailored specifically to medical laboratories, integrating quality control (QC) processes into the overall management system to support patient safety and result validity. Similarly, ISO/IEC 17025:2017 outlines general requirements for the competence, impartiality, and consistent operation of testing and calibration laboratories, emphasizing the incorporation of QC measures to validate methods and ensure traceable results across sectors including environmental and industrial testing. These standards promote a holistic approach in which QC is not isolated but embedded in laboratory operations, from the pre-analytical to the post-analytical phase, to minimize errors and enhance trust in laboratory outputs.

Prominent accreditation bodies oversee compliance through rigorous evaluation processes. The College of American Pathologists (CAP) offers accreditation programs, including one aligned with ISO 15189:2022, involving peer-based inspections that review QC documentation, such as control charts and proficiency testing records, to verify adherence to standards. Joint Commission International (JCI) provides laboratory accreditation based on its own standards, derived from ISO principles, with on-site surveys assessing QC integration, including documentation of standard operating procedures (SOPs) and staff competency evaluations. These audits typically occur every two to three years, focusing on evidence of QC performance monitoring and corrective actions to maintain accreditation status.

Compliance with these frameworks mandates documented SOPs for all QC activities, ensuring consistency and traceability; comprehensive staff training programs to build competency in QC techniques; and a commitment to continuous improvement through the Plan-Do-Check-Act (PDCA) cycle, which structures quality management by planning QC objectives, implementing procedures, monitoring outcomes, and acting on findings to refine processes. Proficiency testing serves as an external validation tool within these compliance mechanisms, helping laboratories demonstrate ongoing adherence to standards.

Post-2020 revisions to ISO standards have sharpened the focus on risk management and modern tools in laboratory QC. The 2022 edition of ISO 15189 introduces a stronger emphasis on risk-based thinking, requiring laboratories to identify, assess, and mitigate risks to patient outcomes and operational integrity in concert with QC strategies. It also accommodates advancements such as digital QC tools for data management and trend monitoring, supporting efficient compliance in evolving laboratory environments. These updates align with broader quality management principles, fostering resilience against disruptions while upholding core QC integration.

Proficiency Testing Programs

Proficiency testing (PT) programs, also known as external quality assessment (EQA) schemes, involve the distribution of standardized samples to participating laboratories for analysis, enabling objective evaluation of performance against peer groups and facilitating inter-laboratory comparisons to ensure consistent, reliable results. These programs are essential for identifying systematic errors, method-specific biases, and opportunities for improvement in laboratory testing accuracy.

One of the leading PT providers is the College of American Pathologists (CAP), whose Surveys program spans 16 disciplines, including clinical chemistry, hematology, microbiology, and immunology, serving over 20,000 laboratories worldwide. CAP surveys typically distribute stabilized samples such as pooled sera, whole blood, or urine, targeting analytes such as hemoglobin A1c in diabetes monitoring, glucose and electrolytes in clinical chemistry, and bacterial isolates in microbiology challenges. Similarly, the Royal College of Pathologists of Australasia Quality Assurance Programs (RCPAQAP), established in 1968, provides comprehensive EQA for disciplines including chemical pathology, haematology, and point-of-care testing. RCPAQAP distributes samples such as liquid serum for general chemistry and therapeutic drug monitoring, covering analytes including red cell folate, plasma metanephrines, and whole blood HbA1c, alongside point-of-care testing materials.

Evaluation in these programs relies on peer-group comparisons, where results are assessed against the peer-group mean (the average value from laboratories using the same method or instrument) and the standard deviation (SD), which quantifies result variability. A common metric is the z-score, calculated as (laboratory result − peer-group mean) / SD, with performance graded as satisfactory if within ±2 SD (indicating alignment with peers), questionable if between ±2 and ±3 SD, or unsatisfactory if beyond ±3 SD, potentially triggering remedial action. These ratings help laboratories benchmark their performance, with providers such as CAP supplying detailed feedback on acceptability criteria derived from scientific validation.

Challenges in PT include the use of non-commutable samples, which fail to mimic patient specimen behavior across methods due to matrix differences, leading to artificial biases not reflective of routine testing and potentially skewing peer comparisons. For instance, non-commutable EQA materials can introduce method-specific errors in multi-analyte panels, complicating accurate target value assignment and requiring providers like RCPAQAP to balance commutable and non-commutable formats. Failures necessitate root cause analysis, often revealing issues such as calibration errors, instrument instability, or method biases; laboratories must investigate promptly, document corrective actions such as recalibration or staff retraining, and monitor subsequent performance to prevent recurrence. In CAP programs, unsatisfactory ratings for regulated analytes prompt detailed reporting and may link to broader requirements under frameworks such as CLIA.

Globally, PT participation varies, with the United States mandating enrollment for moderate- and high-complexity laboratories under the Clinical Laboratory Improvement Amendments (CLIA) of 1988, covering over 80 regulated analytes to ensure compliance with federal quality standards, effective as of updates in 2024. In contrast, programs like RCPAQAP in Australia and New Zealand are voluntary but integral to national accreditation processes, with thousands of laboratories participating annually to demonstrate competency. This dichotomy highlights regional differences in enforcement, though the international ISO/IEC 17043 standard promotes harmonized PT practices worldwide.
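The grading bands described above lend themselves to a short classification sketch. The thresholds follow the text (satisfactory within ±2 SD, questionable to ±3 SD, unsatisfactory beyond), not any specific provider's published algorithm, and the survey values are hypothetical.

```python
"""Sketch of the peer-group grading bands described above; the HbA1c
survey values and peer statistics are hypothetical."""

def grade_pt_result(result, peer_mean, peer_sd):
    z = (result - peer_mean) / peer_sd
    if abs(z) <= 2:
        return z, "satisfactory"
    elif abs(z) <= 3:
        return z, "questionable"
    return z, "unsatisfactory"

# Hypothetical HbA1c survey result vs. peer-group statistics
z, grade = grade_pt_result(result=7.9, peer_mean=7.0, peer_sd=0.3)
print(f"z = {z:.1f}: {grade}")  # z = 3.0: questionable
```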

    Jul 11, 2022 · This final rule updates proficiency testing (PT) regulations under the Clinical Laboratory Improvement Amendments of 1988 (CLIA) to address current analytes.