
Concordance correlation coefficient

The concordance correlation coefficient (CCC), also known as Lin's concordance correlation coefficient, is a statistical measure that evaluates the agreement between two quantitative methods or observers measuring the same continuous variable, by simultaneously accounting for both precision (reproducibility around the best-fit line) and accuracy (closeness to the line of identity). Introduced in 1989 by Lawrence I.-K. Lin, it addresses limitations of the Pearson correlation coefficient, which assesses linear association but ignores systematic biases in location (means) and scale (variances), making the CCC particularly suitable for applications in reliability assessment, method validation, and reproducibility studies across fields such as medicine, bioassays, and environmental monitoring. The CCC is mathematically defined as \rho_c = \frac{2\rho \sigma_X \sigma_Y}{\sigma_X^2 + \sigma_Y^2 + (\mu_X - \mu_Y)^2}, where \rho is the Pearson correlation between the two measurements X and Y, \sigma_X and \sigma_Y are their standard deviations, and \mu_X and \mu_Y are their means; for a sample of n paired observations, the estimator is \hat{\rho}_c = \frac{2 s_{XY}}{s_X^2 + s_Y^2 + (\bar{X} - \bar{Y})^2}, with s_{XY} as the sample covariance and s_X^2, s_Y^2 as the sample variances. This formula yields a value ranging from -1 (indicating perfect inverse agreement) to +1 (perfect agreement along the 45-degree line through the origin), with 0 representing no agreement beyond random chance; the measure cannot exceed the absolute value of \rho and approaches 1 only when both high precision and minimal bias are present. In practice, the CCC is applied to validate new diagnostic tools against gold standards, evaluate observer consistency in clinical trials, and compare measurement techniques in laboratory settings, often complemented by visualization methods like Bland-Altman plots to assess limits of agreement. Commonly cited thresholds (McBride, 2005) classify values greater than 0.99 as almost perfect concordance, 0.95–0.99 as substantial, 0.90–0.95 as moderate, and below 0.90 as poor, though acceptable levels vary with field-specific standards.
Extensions of the CCC have been developed for repeated measures, multiple raters, and robust estimation under non-normal data, enhancing its utility in complex study designs while maintaining computational simplicity with as few as 10 observations.

Introduction

Overview

The concordance correlation coefficient (CCC) is a statistical index that quantifies the degree of agreement between two sets of continuous measurements by evaluating how closely the paired observations deviate from the 45-degree line through the origin in a scatterplot. Introduced as a measure of reproducibility, it assesses both the precision (closeness to the best-fit line) and accuracy (closeness to the identity line) of the measurements. The CCC is widely applied to evaluate the interchangeability of methods, raters, or instruments, particularly in fields requiring reliable replication of continuous data, such as assay validation and instrument calibration. Unlike the Pearson correlation coefficient, which measures only linear association and ignores systematic differences in location or scale, the CCC incorporates penalties for bias (location shifts) and imprecision (scale differences), offering a more robust indicator of true agreement. In medical research, for instance, the CCC is commonly used to compare a novel diagnostic test against a gold-standard method, ensuring the new approach not only associates with but accurately reproduces the reference values for clinical decision-making.

Historical Development

The concordance correlation coefficient (CCC) was introduced by Lawrence I.-K. Lin in 1989 through his seminal paper titled "A concordance correlation coefficient to evaluate reproducibility," published in the journal Biometrics. This work proposed the CCC as a unified measure of reproducibility for paired measurements, emphasizing its ability to simultaneously capture both precision and accuracy in assessments of agreement. Lin's development was motivated by the shortcomings of the Pearson correlation coefficient in method comparison studies, particularly within biomedical research, where traditional correlations often overlooked deviations from perfect agreement (i.e., points not falling on the 45-degree line through the origin). The CCC addressed this by integrating location and scale shifts into a single index, making it more suitable for evaluating reproducibility in clinical and laboratory settings. Subsequent advancements included extensions to scenarios involving multiple observers, detailed in a 2002 paper by Barnhart, Haber, and Song, which introduced the overall CCC (OCCC) to quantify interobserver agreement while accounting for variability among fixed raters. By the early 2000s, practical implementations of the CCC had emerged in statistical software, such as macros for SAS and functions in R, facilitating its computation for repeated measures and broader applications in statistical analysis. The measure gained widespread adoption throughout the 1990s, becoming a standard tool for agreement studies in fields like assay validation and method comparison. Lin's original 1989 paper has since amassed over 7,900 citations, reflecting its enduring influence and high impact in statistical practice as of 2025.

Mathematical Formulation

Core Formula

The concordance correlation coefficient (CCC), denoted as \rho_c, quantifies the agreement between two continuous variables X and Y, where X and Y represent paired observations from n samples, such as measurements on the same subjects by different methods. The population parameters include the means \mu_X = E[X] and \mu_Y = E[Y], the variances \sigma_X^2 = \mathrm{Var}(X) and \sigma_Y^2 = \mathrm{Var}(Y), and the covariance \sigma_{XY} = \mathrm{Cov}(X, Y). The core formula for the CCC is \rho_c = \frac{2 \sigma_{XY}}{\sigma_X^2 + \sigma_Y^2 + (\mu_X - \mu_Y)^2}. This expression was introduced by Lin (1989) to evaluate reproducibility by measuring both precision and location/scale accuracy in a single metric. The derivation begins with the Pearson correlation coefficient \rho = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}, which assesses linear association but ignores deviations in means and variances. To incorporate accuracy, the CCC adjusts for bias in location via the term (\mu_X - \mu_Y)^2 and for scale via the sum of variances \sigma_X^2 + \sigma_Y^2. Specifically, the numerator 2 \sigma_{XY} scales the covariance to emphasize concordance around the 45-degree line through the origin, while the denominator relates to the expected squared difference E[(X - Y)^2] = (\mu_X - \mu_Y)^2 + \sigma_X^2 + \sigma_Y^2 - 2 \sigma_{XY}, rearranged to isolate the agreement component: \rho_c = \frac{2 \sigma_{XY}}{E[(X - Y)^2] + 2 \sigma_{XY}}. This form shows that \rho_c penalizes both random error (via reduced \sigma_{XY}) and systematic bias (via unequal means or variances). In the special case where \mu_X = \mu_Y and \sigma_X = \sigma_Y, the denominator simplifies to 2 \sigma_X^2, so \rho_c = \frac{2 \sigma_{XY}}{2 \sigma_X^2} = \frac{\sigma_{XY}}{\sigma_X^2} = \rho, reducing the CCC to the Pearson correlation.
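The algebraic identities above can be checked numerically. The following sketch (a small hypothetical dataset, using NumPy with divide-by-n moments so the identities hold exactly) verifies that the denominator equals E[(X - Y)^2] + 2\sigma_{XY}, and that the CCC collapses to the Pearson correlation when the two series share the same mean and variance:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (divide-by-n moments)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=0)[0, 1]
    return 2 * sxy / (np.var(x) + np.var(y) + (x.mean() - y.mean()) ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.2, 2.9, 4.1, 5.0])

# Identity: the denominator equals E[(X - Y)^2] + 2*sigma_XY
sxy = np.cov(x, y, ddof=0)[0, 1]
msd = np.mean((x - y) ** 2)  # mean squared difference E[(X - Y)^2]
assert np.isclose(ccc(x, y), 2 * sxy / (msd + 2 * sxy))

# Special case: equal means and variances -> CCC equals Pearson's rho
y_rev = x[::-1]  # reversed copy: same mean and variance as x
assert np.isclose(ccc(x, y_rev), np.corrcoef(x, y_rev)[0, 1])
```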

Components of Precision and Accuracy

The concordance correlation coefficient (CCC) quantifies agreement between two methods by incorporating both precision, which assesses the scatter of data points about the best-fit line, and accuracy, which evaluates deviations of that line from the line of identity in location and scale. This decomposition allows researchers to identify specific sources of disagreement, such as systematic bias or variability mismatches, facilitating targeted improvements in measurement protocols. The CCC, denoted as \rho_c, can be expressed as the product \rho_c = r \cdot v \cdot C_b, where r is the Pearson correlation coefficient measuring linear association (the precision component), and the product v \cdot C_b forms the accuracy (bias correction) component, with v accounting for similarity in variability and C_b for differences in means. The scale factor is defined as v = \frac{2 \sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2}, which equals 1 when the standard deviations of the two variables are identical and decreases otherwise, reflecting how well the spread of the data aligns between methods. The location factor is given by C_b = \frac{\sigma_x^2 + \sigma_y^2}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2}, where C_b = 1 indicates no shift in means. This form highlights that perfect agreement requires r = 1, v = 1, and C_b = 1 simultaneously. Unlike r alone, which ignores shifts in location and scale, the full product penalizes both imprecision and mismatched variability, providing a more comprehensive view of reproducible agreement: scale bias is captured via v and location bias via C_b, while the combination of r and v governs scatter around the 45-degree line of perfect concordance through the origin.
To illustrate, consider hypothetical paired data with means \mu_x = 10 and \mu_y = 12, variances \sigma_x^2 = 4 and \sigma_y^2 = 5, and covariance \sigma_{xy} = 4.4. Here \sigma_x = 2, \sigma_y \approx 2.236, and r = 4.4 / (2 \times 2.236) \approx 0.984. The scale factor is v = 2 \times 2 \times 2.236 / (4 + 5) \approx 0.994 and the location factor is C_b = 9 / (9 + 4) \approx 0.692. Thus \rho_c \approx 0.984 \times 0.994 \times 0.692 \approx 0.677, matching the direct computation 2 \times 4.4 / 13 \approx 0.677 and demonstrating moderate agreement limited chiefly by the mean shift and slightly by the scale difference.
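The worked example above can be reproduced directly from the stated moments; this short sketch confirms that the product r \cdot v \cdot C_b matches the direct formula:

```python
import math

# Moments from the worked example in the text
mu_x, mu_y = 10.0, 12.0
var_x, var_y = 4.0, 5.0
cov_xy = 4.4

sd_x, sd_y = math.sqrt(var_x), math.sqrt(var_y)
r = cov_xy / (sd_x * sd_y)                     # Pearson correlation (precision)
v = 2 * sd_x * sd_y / (var_x + var_y)          # scale factor
c_b = (var_x + var_y) / (var_x + var_y + (mu_x - mu_y) ** 2)  # location factor

rho_c_product = r * v * c_b
rho_c_direct = 2 * cov_xy / (var_x + var_y + (mu_x - mu_y) ** 2)
assert math.isclose(rho_c_product, rho_c_direct)
print(round(rho_c_direct, 3))  # 0.677
```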

Properties and Interpretation

Range and Scale

The concordance correlation coefficient (CCC) ranges from -1 to 1. A value of 1 signifies perfect agreement between the two measurements, corresponding to all data points lying exactly on the 45° line through the origin; a value of 0 indicates no agreement; and a value of -1 represents perfect inverse disagreement. Guidelines for interpreting the magnitude of the CCC classify values greater than 0.99 as almost perfect agreement, 0.95 to 0.99 as substantial, 0.90 to 0.95 as moderate, and less than 0.90 as poor. These thresholds emphasize the CCC's combined assessment of precision and accuracy, providing a practical scale for evaluating reproducibility in fields like biomedical measurement. The CCC exhibits scale and location invariance in the sense that it is unchanged when the same positive linear transformation aX + b (with a > 0) is applied to both measurements, so it does not depend on the common units of measurement; transforming only one of the two series, by contrast, alters the agreement. However, it is sensitive to nonlinear deviations from the line of equality, highlighting departures from ideal linear concordance. Larger sample sizes improve the precision of the CCC estimate by reducing its sampling variability, though the underlying population CCC value itself is independent of sample size.
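The invariance property is easy to demonstrate with a small sketch (hypothetical data, divide-by-n moments): applying the same positive linear transformation to both series leaves the CCC unchanged, while transforming only one series does not:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (divide-by-n moments)."""
    sxy = np.cov(x, y, ddof=0)[0, 1]
    return 2 * sxy / (np.var(x) + np.var(y) + (np.mean(x) - np.mean(y)) ** 2)

x = np.array([3.1, 4.0, 5.2, 6.1, 7.0, 8.3])
y = np.array([3.0, 4.2, 5.0, 6.3, 7.1, 8.0])

a, b = 2.5, -4.0  # same positive linear transform applied to BOTH series
assert np.isclose(ccc(x, y), ccc(a * x + b, a * y + b))

# Transforming only one series changes the agreement:
assert not np.isclose(ccc(x, y), ccc(x, a * y + b))
```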

Bias Correction

The sample estimator of the concordance correlation coefficient (CCC) tends to underestimate the population value, particularly in small samples (e.g., n < 20), due to downward bias in the Pearson correlation component and variability in the sample means and variances. To address this, bias-improved estimators have been proposed, such as replacing the sample Pearson correlation coefficient r in the CCC formula with an approximately unbiased correction in the spirit of Olkin and Pratt, r_{OP} \approx r \left[ 1 + \frac{1 - r^2}{2(n - 3)} \right], which reduces bias while preserving accuracy. Resampling methods like the jackknife or bootstrap can also provide bias-corrected estimates and confidence intervals. The jackknife procedure computes the CCC n times by omitting one observation each time, then derives pseudo-values as n \hat{\rho}_c - (n-1) \hat{\rho}_{c(-i)}, with the average of the pseudo-values serving as the bias-corrected estimator. These methods enhance reliability in small samples without strong normality assumptions. Such corrections are particularly useful when systematic biases or small sample sizes might otherwise lead to underestimation of agreement in applications like method validation.
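The jackknife procedure described above can be sketched as follows (hypothetical simulated data; the pseudo-value formula matches the one in the text):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (divide-by-n moments)."""
    sxy = np.cov(x, y, ddof=0)[0, 1]
    return 2 * sxy / (np.var(x) + np.var(y) + (np.mean(x) - np.mean(y)) ** 2)

def ccc_jackknife(x, y):
    """Jackknife bias-corrected CCC via pseudo-values n*rho_c - (n-1)*rho_c(-i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    full = ccc(x, y)
    pseudo = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i  # leave observation i out
        pseudo[i] = n * full - (n - 1) * ccc(x[mask], y[mask])
    return pseudo.mean()  # average pseudo-value = bias-corrected estimate

# Small hypothetical sample where the correction matters (n = 15)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 15)
y = x + rng.normal(0.2, 0.5, 15)
print(ccc(x, y), ccc_jackknife(x, y))
```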

Comparisons with Other Measures

Versus Pearson Correlation Coefficient

The Pearson correlation coefficient, denoted as r = \frac{\sigma_{xy}}{\sigma_x \sigma_y}, quantifies the strength and direction of the linear association between two variables but ignores deviations in location (differences in means) and scale (differences in variances). In contrast, the concordance correlation coefficient (CCC) evaluates both association and agreement by integrating precision (closely related to r) with accuracy, which penalizes systematic biases and scale mismatches, thereby providing a more comprehensive measure of how well two methods or measurements align near the line of equality. This distinction is evident in cases of strong linear correlation marred by bias. For example, two thermometers measuring identical temperatures might yield a high Pearson's r of 0.9, suggesting close linear tracking, but if one consistently overreads by an amount equal to the standard deviation (e.g., a 1°C offset with a 1°C standard deviation), the CCC drops to 0.6, revealing inadequate agreement due to the systematic error. Mathematically, |\rho_c| \leq |r|, with equality only when the means are identical and the variances match, ensuring no location or scale bias. For method validation, such as in bioequivalence studies, the CCC is preferred over Pearson's r because it better captures true interchangeability; regulatory guidance such as the ICH M10 bioanalytical method validation guideline, adopted by the FDA, lists the CCC among appropriate methods for assessing agreement between analytical methods.
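The thermometer example reduces to simple arithmetic on the population formula; a minimal sketch with equal variances \sigma^2, \rho = 0.9, and an offset of one standard deviation:

```python
# Population CCC for the thermometer example: equal variances sigma^2,
# Pearson rho = 0.9, and a constant offset equal to one standard deviation.
rho, sigma, offset = 0.9, 1.0, 1.0
cov = rho * sigma * sigma
ccc = 2 * cov / (sigma**2 + sigma**2 + offset**2)
print(ccc)  # 0.6: high linear correlation, but much weaker agreement
```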

Versus Intraclass Correlation Coefficient

The intraclass correlation coefficient (ICC) is a measure of reliability that assesses the consistency of measurements within groups, such as ratings provided by multiple raters evaluating the same subjects, and it exists in various forms depending on the study design, including ICC(1,1) for evaluating the reliability of ratings from a single rater. In contrast, the concordance correlation coefficient (CCC) evaluates absolute agreement between measurements by quantifying deviations from the identity line, emphasizing both precision (closeness to the line) and accuracy (location relative to the line), whereas the ICC primarily captures relative ranking and consistency among raters; additionally, while the ICC is typically bounded between 0 and 1 for interpreting reliability strength, the CCC can assume negative values when both precision and accuracy are poor. For the specific case of two raters under a balanced design without rater-subject interaction, the CCC approximates the ICC(2,1), a two-way random effects model for absolute agreement using single ratings. However, the CCC is often preferred over the ICC in method comparison studies because it more directly incorporates corrections for both location and scale biases without relying on ANOVA assumptions. For example, in inter-rater scoring scenarios, an ICC of 0.8 may indicate strong relative consistency in ordering subjects, but if the raters exhibit a systematic offset in absolute scores (e.g., one consistently rates higher by a fixed amount), the CCC could drop to 0.5, revealing poorer absolute agreement due to the bias.
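The offset-rater scenario can be illustrated with hypothetical scores, using Pearson's r as a simple stand-in for relative consistency (a genuine ICC computation would require an ANOVA variance decomposition, which this sketch omits):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (divide-by-n moments)."""
    sxy = np.cov(x, y, ddof=0)[0, 1]
    return 2 * sxy / (np.var(x) + np.var(y) + (np.mean(x) - np.mean(y)) ** 2)

scores_a = np.arange(1.0, 11.0)   # rater A's scores for 10 subjects
scores_b = scores_a + 3.0         # rater B: identical ordering, +3 offset

# Perfect relative consistency (Pearson r = 1) ...
assert np.isclose(np.corrcoef(scores_a, scores_b)[0, 1], 1.0)
# ... but only moderate absolute agreement once the offset is penalized.
print(round(ccc(scores_a, scores_b), 3))  # 0.647
```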

Estimation and Statistical Inference

Point Estimation Methods

The point estimate of the concordance correlation coefficient (CCC) is computed from a sample of n paired observations (x_i, y_i) for i = 1, \dots, n. First, the sample means are calculated as \hat{\mu}_x = \frac{1}{n} \sum_{i=1}^n x_i and \hat{\mu}_y = \frac{1}{n} \sum_{i=1}^n y_i. Next, the sample variances and covariance are determined using the method-of-moments estimators as in the original formulation: s_x^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \hat{\mu}_x)^2, \quad s_y^2 = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{\mu}_y)^2, \quad s_{xy} = \frac{1}{n} \sum_{i=1}^n (x_i - \hat{\mu}_x)(y_i - \hat{\mu}_y). A bias-corrected variant uses division by n-1 for these terms to adjust for finite-sample bias in the variance components, though the standard estimator follows Lin's original division by n. The sample CCC is then obtained by substituting these statistics into the formula: \hat{\rho}_c = \frac{2 s_{xy}}{s_x^2 + s_y^2 + (\hat{\mu}_x - \hat{\mu}_y)^2}. This estimator combines measures of precision and location shift, with values approaching 1 indicating near-perfect agreement. Software implementations typically use the standard (/n) estimator. In R, the DescTools package's CCC() function computes Lin's sample CCC using division by n.
```r
library(DescTools)
x <- c(1, 2, 3, 4, 5)  # Example data
y <- c(1.1, 2.2, 2.9, 4.1, 5.0)
ccc_result <- CCC(x, y)
print(ccc_result)
```
This outputs the estimate along with precision and accuracy components. In Python, the CCC can be computed using NumPy for the statistics, implementing the standard estimator:
```python
import numpy as np

def concordance_correlation_coefficient(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mu_x = np.mean(x)
    mu_y = np.mean(y)
    s_x2 = np.var(x, ddof=0)  # standard variance (divide by n)
    s_y2 = np.var(y, ddof=0)
    s_xy = np.cov(x, y, ddof=0)[0, 1]  # standard covariance
    rho_c = (2 * s_xy) / (s_x2 + s_y2 + (mu_x - mu_y) ** 2)
    return rho_c

# Example
x = [1, 2, 3, 4, 5]
y = [1.1, 2.2, 2.9, 4.1, 5.0]
print(concordance_correlation_coefficient(x, y))
```
This approach mirrors the manual steps using the conventional definition.

Confidence Intervals and Hypothesis Testing

Confidence intervals for the concordance correlation coefficient (CCC), denoted \hat{\rho}_c, are typically constructed using asymptotic approximations or resampling methods to account for sampling variability. A common approach applies Fisher's z-transformation to \hat{\rho}_c, which stabilizes the variance and facilitates a normal approximation for interval estimation. The transformation is given by \hat{\lambda} = \tanh^{-1}(\hat{\rho}_c) = \frac{1}{2} \ln \left( \frac{1 + \hat{\rho}_c}{1 - \hat{\rho}_c} \right). Lin derived a closed-form asymptotic variance for \hat{\lambda} (corrected in his 2000 note) that depends on the Pearson correlation \rho, the CCC \rho_c, and the standardized location shift u = (\mu_X - \mu_Y)/\sqrt{\sigma_X \sigma_Y}: S_{\hat{\lambda}}^2 = \frac{1}{n-2} \left[ \frac{(1 - \rho^2)\rho_c^2}{(1 - \rho_c^2)\rho^2} + \frac{2 \rho_c^3 (1 - \rho_c) u^2}{\rho (1 - \rho_c^2)^2} - \frac{\rho_c^4 u^4}{2 \rho^2 (1 - \rho_c^2)^2} \right], estimated in practice by its sample analogue; a crude approximation analogous to Pearson's \frac{1}{n-3} is sometimes used, but the full formula is recommended for accuracy. The confidence interval is then \tanh\left(\hat{\lambda} \pm z_{\alpha/2} \sqrt{S_{\hat{\lambda}}^2}\right). This method provides reliable coverage for moderate to large samples. For datasets where normality assumptions may not hold or when the sample size is small, non-parametric bootstrap methods offer a robust alternative for confidence interval construction. The percentile bootstrap involves resampling pairs (X_i, Y_i) with replacement to generate B (e.g., 1000) bootstrap samples, computing \hat{\rho}_c^{(b)} for each, and taking the 2.5th and 97.5th percentiles of the empirical distribution as the 95% confidence interval. This approach is particularly useful for non-normal data or when bias and location shifts are present, as it does not rely on parametric assumptions and captures the full variability of the estimator. Software implementations often include bias-corrected and accelerated (BCa) variants to improve accuracy. Hypothesis testing for the CCC typically focuses on assessing departure from no agreement, with the null hypothesis H_0: \rho_c = 0 against the alternative H_a: \rho_c > 0.
Under H_0, the test statistic based on Fisher's z-transformation is approximately standard normal asymptotically, allowing a one-sided test: z = \hat{\lambda} / \sqrt{S_{\hat{\lambda}}^2}, where rejection occurs if z > z_{\alpha}. For exact or small-sample inference, methods derived from the moments of the CCC can be employed, including one-sided tests of a minimum acceptable agreement level in validation contexts. These procedures build on the point estimate \hat{\rho}_c to evaluate whether observed agreement exceeds chance. Power considerations for CCC inference emphasize adequate sample sizes to achieve reliable intervals and detection of meaningful agreement levels. In biomedical applications, where variability can be high, sample sizes n > 50 are recommended to ensure stable variance estimates and coverage probabilities close to nominal levels, particularly when using asymptotic methods. Smaller samples may lead to wide intervals or low power, especially for \rho_c near 0 or 1.
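The percentile bootstrap described above can be sketched as follows (hypothetical data; B = 1000 resamples; the BCa refinement is omitted for brevity):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (divide-by-n moments)."""
    sxy = np.cov(x, y, ddof=0)[0, 1]
    return 2 * sxy / (np.var(x) + np.var(y) + (np.mean(x) - np.mean(y)) ** 2)

def ccc_bootstrap_ci(x, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample pairs (x_i, y_i) with replacement."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resampled pair indices
        stats[b] = ccc(x[idx], y[idx])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical paired measurements with a small offset and moderate noise
rng = np.random.default_rng(1)
x = rng.normal(50.0, 10.0, 40)
y = x + rng.normal(1.0, 4.0, 40)
lo, hi = ccc_bootstrap_ci(x, y)
print(f"95% percentile bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```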

Applications

In Method Comparison Studies

The concordance correlation coefficient (CCC) is widely applied in method comparison studies to evaluate the agreement between a new measurement method and an established reference standard, particularly when assessing interchangeability in clinical or laboratory settings. For instance, in laboratory assays for biomarkers or diagnostic tests, the CCC quantifies both the precision and accuracy of the new method relative to the gold standard, helping determine if results are sufficiently concordant for practical use. This metric is frequently integrated with Bland–Altman plots, which graphically depict the differences between paired measurements against their means to identify systematic bias or heteroscedasticity, while the CCC condenses the overall level of agreement into a single interpretable value ranging from -1 to 1. In pharmaceutical validation, the CCC has been utilized in bioequivalence studies to assess agreement between generic and reference drug formulations, focusing on individual-level concordance rather than averages. For example, in evaluations of test drugs against references, the CCC is interpreted using general thresholds such as exceeding 0.95 for substantial agreement. Compared to the limits of agreement derived from Bland–Altman analysis, which emphasize the range within which 95% of differences fall, the CCC offers a unified summary statistic that simultaneously corrects for location and scale discrepancies, making it advantageous for providing a holistic measure of concordance without requiring separate bias and variance assessments. A post-2020 application highlights the CCC's utility in validating AI-based tools against human experts, such as in radiation therapy planning for glioblastoma, where AI decision support improved inter-radiologist agreement on tumor volume measurements to a CCC of 0.86 from 0.83 without AI, aiding precise tumor sizing for treatment.

In Reliability Assessment

The concordance correlation coefficient (CCC) is widely applied in inter-rater reliability assessments to evaluate the agreement among multiple observers rating continuous outcomes on shared scales, capturing both precision and accuracy in their judgments. For instance, in pain assessment for individuals with advanced dementia, an electronic tool incorporating automated facial analysis demonstrated excellent interrater reliability with a CCC of 0.92 (95% confidence interval: 0.85–0.96) across 76 evaluations, indicating strong consistency between raters during rest and movement conditions. In biomarker studies, the CCC quantifies the reliability of repeated measures over time, particularly for tracking intra-subject variance in longitudinal designs. This is valuable for assessing test–retest consistency, where the CCC helps determine how well measurements reflect true biological signals rather than measurement error. An example from a longitudinal biomarker study showed a CCC of 0.83 (95% confidence interval: 0.73–0.89) for left hippocampal volume across MRI scanners, highlighting its utility in monitoring structural change over time. In environmental sensing, the CCC evaluates the consistency of sensor data replicates, ensuring reliable monitoring of natural systems. For pH measurements in citrus fruit juice, open-source loggers have been validated against reference meters, achieving post-calibration CCC values of 0.983–0.997, which confirm high reliability for applications like citrus quality assessment. To provide a comprehensive reliability profile, guidelines recommend reporting the CCC in conjunction with variance components analysis, which decomposes total variability into between-subject, within-subject, and error components, enabling a fuller interpretation of measurement precision in repeated or multi-rater settings.

Limitations and Considerations

Key Assumptions

The concordance correlation coefficient (CCC) relies on the assumption that the paired observations exhibit a linear relationship, as it quantifies agreement relative to the 45-degree line through the origin (y = x). This is essential because the CCC decomposes agreement into precision and accuracy components, and deviations from linearity inflate the penalty terms for location and scale, resulting in an underestimation of concordance even when the data show strong overall association. Non-linear patterns violate this assumption and may necessitate alternative measures of agreement. A core assumption is that the paired observations are independent, meaning each pair is drawn i.i.d. from the underlying bivariate distribution with no dependence between pairs. This holds in simple random samples but can be violated in designs involving clustered data, such as multiple measurements per subject or repeated assessments over time, where intra-cluster dependence affects the variance estimates and thus the CCC. Unlike parametric tests of correlation, the CCC does not require the data to follow a bivariate normal distribution for its point estimation, providing robustness to departures from normality in the marginal or joint distributions. This distribution-free character in estimation allows application to skewed or heavy-tailed data without transformation solely for distributional reasons. However, the CCC remains sensitive to outliers, as these can disproportionately inflate the estimated variances and means, skewing both the precision and accuracy components. Finally, the CCC assumes that the two variables are measured on commensurable scales, ensuring that the units are comparable for meaningful assessment of location and scale agreement. If the scales differ substantially, for instance one in logarithmic units and the other linear, direct application can lead to misleading bias estimates; in such cases, appropriate transformations (e.g., logarithmic for positively skewed data) are needed to align the scales prior to computation.
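The outlier sensitivity noted above is easy to demonstrate with hypothetical data: a single aberrant value shifts the estimated mean, variance, and covariance together, and the CCC with them:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (divide-by-n moments)."""
    sxy = np.cov(x, y, ddof=0)[0, 1]
    return 2 * sxy / (np.var(x) + np.var(y) + (np.mean(x) - np.mean(y)) ** 2)

x = np.array([5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
y = np.array([5.1, 5.9, 7.2, 7.9, 9.1, 10.0])
print(round(ccc(x, y), 3))       # near-perfect agreement

# One aberrant reading in y distorts mean, variance, and covariance at once:
y_out = y.copy()
y_out[-1] = 25.0
print(round(ccc(x, y_out), 3))   # agreement drops sharply
```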

Potential Misinterpretations

One common misinterpretation of the concordance correlation coefficient (CCC) arises from treating it interchangeably with the Pearson correlation coefficient, overlooking its distinct emphasis on both precision (reproducibility around the best-fit line) and accuracy (closeness to the line of identity). Unlike the Pearson coefficient, which solely measures linear association without regard to whether observations cluster around the 45-degree line through the origin, the CCC requires high values of both components to indicate strong agreement; for instance, a high Pearson correlation can yield a moderate or low CCC if there is substantial bias. This conflation has led to its frequent misuse as a standalone validation metric in predictive modeling, where users infer strong agreement from correlation alone without dissecting bias contributions. Another pitfall involves overreliance on interpretive thresholds, such as values below 0.90 indicating poor agreement and above 0.99 almost perfect agreement (e.g., McBride, 2005), without considering their contextual variability. These categories serve as guidelines rather than fixed absolutes; under McBride's deliberately stringent criteria, 0.90–0.95 denotes only moderate agreement, and acceptability ultimately depends on the application's stakes, measurement scale, and domain-specific standards, so mechanical use of any one scale can produce inconsistent or erroneous judgments across studies. Estimates of the CCC can also be unstable with small sample sizes, as the sampling variability inherent in its calculation amplifies the influence of limited data, yielding imprecise or overly optimistic point estimates that mislead about agreement strength. To mitigate this, confidence intervals should always accompany reported CCC values, providing a range of plausible values and revealing the estimate's reliability, particularly when sample sizes fall below 50.
A recent critique has highlighted limitations of the CCC in environmental and spatial modeling contexts, such as map and model validation, noting its sensitivity to the variance of the data, which can produce misleadingly high values in heterogeneous datasets and prevent reliable cross-study comparisons; its inability to distinguish systematic from random error contributions; and its insensitivity to the absolute size of errors, yielding false assurances of model validity despite poor predictive accuracy. Originally designed for reproducibility studies, the CCC is therefore insufficient as a standalone metric in such applications.

References

  1. Lin LI. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989;45(1):255-268.
  2. Lin's Concordance Correlation Coefficient. NCSS statistical software documentation (PDF).
  3. User's guide to correlation coefficients. PMC (NIH).
  4. The Concordance Correlation Coefficient for Repeated Measures … (journal article).
  5. A Concordance Correlation Coefficient to Evaluate Reproducibility (assay and instrument validation context).
  6. A concordance correlation coefficient to evaluate reproducibility. Semantic Scholar record of Lin (1989) (PDF).
  7. Overall concordance correlation coefficient for evaluating agreement among multiple fixed observers.
  8. Computer programs for the concordance correlation coefficient.
  10. A Proposal for Strength-of-Agreement Criteria for Lin's Concordance Correlation Coefficient. McBride (PDF).
  11. A bias improved estimator of the concordance correlation coefficient (2020).
  12. A short note on jackknifing the concordance correlation coefficient (2013).
  13. Factors influencing the accuracy of non-invasive blood pressure measurement (2019).
  14. M10 Bioanalytical Method Validation and Study Sample Analysis. FDA (PDF).
  16. Lin's Concordance Correlation Coefficient. NCSS statistical software documentation (PDF).
  17. The Assessment of the Degree of Concordance Between the … (PDF).
  18. Assessing Individual Agreement (PDF).
  19. Assessing Individual Agreement. Taylor & Francis Online.
  20. A novel graphical evaluation of agreement. PMC (2022).
  21. Artificial intelligence (AI)-based decision support improves … (glioblastoma contouring study).
  22. Statistical Methods for the Analysis of Inter-Reader Agreement. NIH.
  23. A novel pain assessment tool incorporating automated facial analysis: interrater reliability in advanced dementia.
  24. Longitudinal changes in molecular, structural and cognitive biomarkers (2018).
  25. Formal Assessment of Agreement and Similarity between an Open … (open-source pH logger validation study).
  27. A Robust Permutation Test for the Concordance Correlation Coefficient (2021).