
Deming regression

Deming regression is a statistical technique for fitting a straight line to two-dimensional data in which both the predictor and response variables are measured with error, serving as an errors-in-variables model that generalizes simple linear regression by accounting for uncertainty in both dimensions. Named after statistician W. Edwards Deming, who popularized the method in his 1943 book Statistical Adjustment of Data, it builds on earlier work by R. J. Adcock, who proposed the general concept in 1878, and C. H. Kummell, who refined it in 1879. The approach assumes a known ratio λ of the error variances in the two variables (typically λ = σ_ε² / σ_δ², where ε and δ denote the measurement errors in the x- and y-directions, respectively) and minimizes a weighted sum of squared distances from the data points to the line, yielding estimators for the slope β₁ and intercept β₀ as b₁ = [λ s_y² - s_x² + √((λ s_y² - s_x²)² + 4 λ r² s_x² s_y²)] / (2 λ r s_x s_y) and b₀ = ȳ - b₁ x̄, where s_x², s_y², and r denote the sample variances of x and y and the sample correlation coefficient. Unlike ordinary least squares regression, which assumes error only in the response variable and can produce biased estimates when the predictor is noisy, Deming regression provides more robust estimates and is particularly valuable for method-comparison studies, such as evaluating agreement between two analytical techniques in clinical chemistry or laboratory assays. It supports both ordinary (unweighted) and weighted variants to handle heteroscedasticity, where error variances may vary with measurement magnitude, and includes diagnostic tools like confidence intervals for the regression line, tests for slope equality to 1 (indicating no proportional bias), and assessments of intercept significance (indicating constant bias). The method's revival in the late 20th century, notably through refinements by Kristian Linnet for proportional errors, has made it a standard in fields requiring precise method comparison and validation, including clinical chemistry, analytical chemistry, and bioinformatics. Despite its advantages, Deming regression requires accurate specification of the error ratio λ, and misspecification can lead to suboptimal performance, prompting extensions such as generalized Deming models and York regression for heteroscedastic or correlated errors.

Introduction

Definition and Purpose

Deming regression is a statistical method for fitting a straight line to paired data where measurement errors affect both the independent variable (x) and the dependent variable (y), unlike ordinary least squares, which assumes errors only in y. The approach minimizes the sum of the squared distances from each point to the fitted line, with distances weighted inversely by the variances of the errors in x and y. The primary purpose of Deming regression is to provide unbiased estimates of the slope and intercept in scenarios where both variables are imprecise, addressing the bias introduced by ordinary least squares when errors in x are ignored, such as attenuation of the slope toward zero. It is particularly valuable in fields requiring accurate calibration despite measurement inaccuracies, including method comparison and validation of analytical procedures. A central parameter in this method is the ratio of the error variances, denoted \lambda = \sigma_x^2 / \sigma_y^2, which determines the weighting of the distances and influences the estimated slope of the line. For instance, in clinical laboratories, Deming regression is applied to compare measurements from two devices, such as a reference method and a test instrument, without assuming one is error-free, thereby yielding reliable assessments of bias and agreement between them.

Historical Development

The origins of Deming regression trace back to the late 19th century, when statisticians began addressing the limitations of ordinary least squares by accounting for measurement errors in both variables. In 1878, Robert J. Adcock introduced the foundational approach in his paper "A Problem in Least Squares," proposing a method to fit a line by minimizing the sum of squared perpendicular distances from data points to the line, under the assumption of equal error variances in both coordinates. This innovation marked the first systematic treatment of what would later be recognized as an errors-in-variables technique, though it initially received limited attention. The following year, C. H. Kummell extended Adcock's work in "Reduction of Observation Equations Which Contain More than One Observed Quantity," generalizing the model to allow for unequal error variances via a known ratio \lambda = \sigma_x^2 / \sigma_y^2 (noting that some historical accounts use the inverse ratio), thereby linking it more explicitly to practical observational data adjustments. These early contributions laid the groundwork for handling errors in both coordinates, but the method saw only sporadic further development, notably by T. C. Koopmans in the 1930s, before fading into relative obscurity amid the dominance of simpler regression techniques. Deming regression gained prominence in the mid-20th century through the efforts of W. Edwards Deming, a pioneering statistician in quality control and survey sampling. Deming detailed and advocated for the method in his 1943 book Statistical Adjustment of Data, where he presented it as a robust adjustment procedure for data with errors in multiple measurements; the approach later became known as Deming regression in his honor. His emphasis on its application in quality control and sampling theory helped popularize it beyond academic circles. Following Deming's influence, the technique was increasingly incorporated into errors-in-variables frameworks during the 1950s and 1960s, particularly in fields such as econometrics and metrology where precise error propagation was essential. By the 1980s, connections to total least squares emerged, with Sabine Van Huffel and Joos Vandewalle advancing computational solutions for more complex cases in their 1991 book The Total Least Squares Problem: Computational Aspects and Analysis, solidifying its role in modern statistical modeling.

Mathematical Model

Core Assumptions

Deming regression relies on several foundational statistical assumptions to ensure the validity of its estimates when both measured variables contain errors. These assumptions underpin the model's ability to account for uncertainties in paired observations, distinguishing it from ordinary least squares by treating both variables symmetrically in the error structure. The primary assumption is linearity, positing that the true underlying relationship between the variables is a straight line, given by y = \beta x + \alpha, with additive errors perturbing the observed values. This requires that the systematic component of the relationship is linear across the range of the data, without curvature or the need for nonlinear transformations. Observations are assumed to be independent, meaning that the errors for each pair do not influence others, and the subjects or samples are selected randomly from a larger population. Within each pair, the errors in the x and y variables are uncorrelated, though the model allows for variation in the true values across pairs as part of the linear relationship. Homoscedasticity is another key requirement, stipulating that the error variances for both x (\sigma_x^2) and y (\sigma_y^2) remain constant across the full range of the data, or at least proportional to the true values in generalized forms. This constant variance ensures that the influence of errors does not vary systematically with the magnitude of the measurements. The model further assumes that the error variance ratio \lambda = \sigma_x^2 / \sigma_y^2 is known or can be reliably estimated, often from replicate measurements on the same samples. If \lambda is unknown or misspecified, the estimates of the slope and intercept become sensitive to the assumed value, potentially leading to biased results, though the method generally remains preferable to ordinary least squares in such cases. Finally, the errors are typically assumed to follow a normal distribution with zero mean, which supports the asymptotic normality of the estimators and facilitates inference such as confidence intervals. However, Deming regression demonstrates robustness to mild deviations from normality, maintaining reasonable performance under non-Gaussian error distributions as long as the other assumptions hold.

Formulation and Error Structure

In the Deming regression model, the observed data points are modeled within an errors-in-variables framework, where the measurements x_i and y_i for i = 1, \dots, n are given by x_i = \xi_i + \varepsilon_i and y_i = \eta_i + \delta_i. Here, (\xi_i, \eta_i) represent the true underlying values that lie exactly on the linear relationship \eta = \alpha + \beta \xi, while \varepsilon_i and \delta_i denote the random measurement errors. The errors are characterized as independent and normally distributed with zero means and constant variances: \varepsilon_i \sim N(0, \sigma_x^2) and \delta_i \sim N(0, \sigma_y^2), with no correlation between them, \operatorname{Cov}(\varepsilon_i, \delta_i) = 0. This structure acknowledges measurement inaccuracies in both variables, distinguishing Deming regression from ordinary least squares, which assumes error only in the dependent variable. The ratio \lambda = \sigma_x^2 / \sigma_y^2 quantifies the relative error variances and determines the weighting in the estimation. To estimate the parameters \alpha and \beta, the model minimizes the sum of squared weighted distances from the observed points to the fitted line: \sum_{i=1}^n \frac{(y_i - \alpha - \beta x_i)^2}{\sigma_y^2 + \beta^2 \sigma_x^2}. This objective function reflects the variance structure of the errors, treating deviations from the line in a manner scaled by the combined error variances. Equivalently, incorporating \lambda, the loss can be expressed (up to a proportionality constant) as L(\alpha, \beta) = \sum_{i=1}^n \frac{(y_i - \alpha - \beta x_i)^2}{1 + \lambda \beta^2}, which simplifies the residual structure for computational purposes; notably, as \lambda \to 0 (no error in x), it recovers the ordinary least squares criterion.
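The weighted objective above can be obtained by profiling the latent true values out of the full errors-in-variables likelihood; the following short derivation uses only the definitions given in this section.

```latex
% Minimize each point's contribution over the latent true value \hat{\xi}_i:
\min_{\hat{\xi}_i}\;\frac{(x_i-\hat{\xi}_i)^2}{\sigma_x^2}
  +\frac{(y_i-\alpha-\beta\hat{\xi}_i)^2}{\sigma_y^2}
\quad\Longrightarrow\quad
\hat{\xi}_i=\frac{\sigma_y^2\,x_i+\beta\sigma_x^2\,(y_i-\alpha)}
                 {\sigma_y^2+\beta^2\sigma_x^2}.
% Substituting \hat{\xi}_i back collapses each term to the weighted residual:
\frac{(x_i-\hat{\xi}_i)^2}{\sigma_x^2}
  +\frac{(y_i-\alpha-\beta\hat{\xi}_i)^2}{\sigma_y^2}
  =\frac{(y_i-\alpha-\beta x_i)^2}{\sigma_y^2+\beta^2\sigma_x^2}.
```

Summing over i yields exactly the objective stated above, so fitting the line and imputing the true values amount to a single joint minimization.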

Estimation Methods

Deming's Solution Procedure

Deming regression parameters are estimated by minimizing an objective function that accounts for errors in both variables, specifically the sum of squared residuals weighted by the error variance ratio λ = σ_x² / σ_y², where σ_x² and σ_y² are the variances of the measurement errors in the x and y variables, respectively. This minimization leads to a quadratic equation in the slope β, with the closed-form solution given by \beta = \frac{\lambda S_{yy} - S_{xx} + \sqrt{(\lambda S_{yy} - S_{xx})^2 + 4\lambda S_{xy}^2}}{2 \lambda S_{xy}}, where S_{xx} = \sum_{i=1}^n (x_i - \bar{x})^2, S_{yy} = \sum_{i=1}^n (y_i - \bar{y})^2, and S_{xy} = \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) are the sums of squares and cross-products of the centered observations; taking the positive square root selects the root of the quadratic whose sign matches that of the covariance S_{xy}. The intercept α is then computed as \alpha = \bar{y} - \beta \bar{x}. To derive this solution, consider the objective function Q(\alpha, \beta, \hat{\xi}_1, \dots, \hat{\xi}_n) = \sum_{i=1}^n \left[ (x_i - \hat{\xi}_i)^2 + \lambda (y_i - \alpha - \beta \hat{\xi}_i)^2 \right], where \hat{\xi}_i is the fitted true x-value for the i-th point. Minimizing over each \hat{\xi}_i reduces Q to (a multiple of) the weighted residual sum of the previous section; setting the partial derivatives with respect to α and β to zero then yields the quadratic \lambda S_{xy} \beta^2 + (S_{xx} - \lambda S_{yy}) \beta - S_{xy} = 0, whose relevant root is the closed form above. When λ is unknown, it can be estimated using replicate measurements: for each of n subjects, obtain replicate determinations of x and of y, pool the within-subject sample variances of the replicates to estimate σ_x² and σ_y², set λ = σ_x² / σ_y², and apply the formulas above to the paired means \bar{x}_i and \bar{y}_i in place of single observations. For confidence intervals on the parameters, especially in small samples, bootstrap resampling of the pairs is recommended, as analytical variance formulas (e.g., via the jackknife) may require additional assumptions or numerical optimization for precision.
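A minimal sketch of this closed-form estimator in Python, using the notation above; the function name and interface are illustrative rather than taken from any particular library:

```python
import numpy as np

def deming_fit(x, y, lam=1.0):
    """Closed-form Deming regression fit.

    lam is the error-variance ratio sigma_x^2 / sigma_y^2 and must be > 0;
    as lam -> 0 the slope approaches the OLS value S_xy / S_xx.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)          # S_xx
    syy = np.sum((y - ybar) ** 2)          # S_yy
    sxy = np.sum((x - xbar) * (y - ybar))  # S_xy
    # Positive root of lam*S_xy*b^2 + (S_xx - lam*S_yy)*b - S_xy = 0;
    # it automatically carries the sign of S_xy.
    disc = np.sqrt((lam * syy - sxx) ** 2 + 4.0 * lam * sxy ** 2)
    slope = (lam * syy - sxx + disc) / (2.0 * lam * sxy)
    intercept = ybar - slope * xbar
    return intercept, slope
```

With lam=1.0 this reproduces the orthogonal-regression fit discussed below.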

Comparison to Ordinary Least Squares

Ordinary least squares (OLS) assumes that the independent variable x is measured without error (\sigma_x = 0), minimizing only the vertical distances from the data points to the fitted line. This assumption leads to attenuation bias when errors are present in both variables, resulting in an underestimation of the absolute value of the slope, particularly for steeper true slopes. The asymptotic attenuation in OLS can be quantified as \beta_{\text{OLS}} \approx \beta_{\text{true}} \cdot \frac{\sigma^2_{\text{true } x}}{\sigma^2_{\text{true } x} + \sigma_x^2}, where \sigma^2_{\text{true } x} is the variance of the true x values; this factor is always less than 1 when \sigma_x > 0, causing the estimated slope to be biased toward zero. In contrast, Deming regression, when the error variance ratio \lambda = \sigma_x^2 / \sigma_y^2 is correctly specified, yields an asymptotically unbiased estimate \beta_D \approx \beta_{\text{true}}, effectively correcting for the errors in x. This makes Deming regression superior for reducing bias in scenarios with comparable error magnitudes in both variables, since OLS underestimates steeper slopes more severely. Consider hypothetical simulated data generated from a true line y = x with equal additive errors (σ_x = σ_y = 0.3) across 100 points whose true x values are drawn from a uniform distribution on [0, 2]. The OLS slope estimate is approximately 0.79, attenuating the true slope of 1.0, while the Deming estimate (with λ = 1) recovers approximately 1.0, demonstrating the correction for symmetric errors. Such simulations highlight how OLS introduces systematic underestimation, whereas Deming regression aligns closely with the true relationship. When \lambda is small (i.e., \sigma_x is much smaller than \sigma_y), OLS approximates the true line well and is computationally simpler; otherwise, Deming regression substantially reduces bias by accounting for errors in both variables. Confidence intervals for Deming slopes are typically wider than those for OLS due to the additional uncertainty in estimating the error structure, but they better reflect the true variability when errors in x are non-negligible, and the estimates are more accurate asymptotically.
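The attenuation can be checked directly with a short simulation mirroring the hypothetical setup above; the random seed and exact printed values are illustrative, and individual runs will vary around the expected values:

```python
import numpy as np

rng = np.random.default_rng(42)         # seed chosen for reproducibility

n = 100
xi = rng.uniform(0.0, 2.0, n)           # true x values, Var(xi) = 1/3
x = xi + rng.normal(0.0, 0.3, n)        # observed x with error
y = xi + rng.normal(0.0, 0.3, n)        # observed y; true line is y = x

xbar, ybar = x.mean(), y.mean()
sxx = np.sum((x - xbar) ** 2)
syy = np.sum((y - ybar) ** 2)
sxy = np.sum((x - xbar) * (y - ybar))

b_ols = sxy / sxx                       # attenuated toward zero
# Deming slope with lam = 1 (equal error variances):
b_dem = (syy - sxx + np.hypot(syy - sxx, 2 * sxy)) / (2 * sxy)

print(f"OLS slope:    {b_ols:.3f}")     # ~0.79 in expectation
print(f"Deming slope: {b_dem:.3f}")     # ~1.0
```

The expected OLS value follows from the attenuation factor (1/3) / (1/3 + 0.09) ≈ 0.79.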

Variants and Extensions

Orthogonal Regression

Orthogonal regression, also known as total least squares, minimizes the sum of squared perpendicular distances from observed data points to the fitted line, treating both variables symmetrically by assuming errors in both with equal variances. This method is particularly useful in scenarios where neither variable is considered error-free, such as in geometric fitting problems. It is the special case of Deming regression in which the error variance ratio λ equals 1, eliminating the need to specify differing error structures. The slope parameter β in orthogonal regression is obtained by setting λ = 1 in Deming's general solution, yielding the explicit formula \beta = \frac{S_{yy} - S_{xx} + \sqrt{(S_{yy} - S_{xx})^2 + 4 S_{xy}^2}}{2 S_{xy}}, where S_{xx}, S_{yy}, and S_{xy} denote the sums of squares and cross-products of the centered observations, as above (the formula is unchanged if sample variances and the sample covariance are used instead, since the common scale factor cancels). The intercept is then computed so that the line passes through the data centroid: \alpha = \bar{y} - \beta \bar{x}. Like the general Deming estimator, this expression is available in closed form, so no iterative procedure is required. Geometrically, orthogonal regression determines the line by projecting data points onto it such that the residuals are orthogonal (perpendicular) to the line, equivalent to rotating the axes until the sum of squared distances in the new coordinate system is minimized. This perpendicular focus distinguishes it from the vertical or horizontal residual minimizations of other regression types. In relation to principal component analysis, the fitted line corresponds to the direction of the first principal component when the centered data are analyzed, capturing maximum variance under isotropic error assumptions. The key difference from general Deming regression is the absence of the λ weighting term, which amounts to assuming equal, isotropic errors rather than allowing for unequal variances. This makes orthogonal regression well suited to applications such as image processing and geometric fitting where symmetric errors prevail, but less flexible for measurement comparisons with known variance disparities. Historically, the method predates Deming's contributions, originating in Adcock's 1878 work on line fitting with symmetric errors, and it has roots in early multivariate statistical techniques.
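The equivalence with the first principal component can be verified numerically; a small sketch in Python, where the simulated data and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.uniform(0.0, 2.0, 200)
x = xi + rng.normal(0.0, 0.2, 200)
y = 0.5 + 0.8 * xi + rng.normal(0.0, 0.2, 200)

# Closed-form orthogonal (lambda = 1) slope
xbar, ybar = x.mean(), y.mean()
sxx = np.sum((x - xbar) ** 2)
syy = np.sum((y - ybar) ** 2)
sxy = np.sum((x - xbar) * (y - ybar))
b_orth = (syy - sxx + np.hypot(syy - sxx, 2 * sxy)) / (2 * sxy)

# First principal component of the centered data gives the same direction
X = np.column_stack([x - xbar, y - ybar])
_, _, vt = np.linalg.svd(X, full_matrices=False)
v = vt[0]                     # leading right singular vector
b_pca = v[1] / v[0]           # slope of the first PC direction

assert np.isclose(b_orth, b_pca)
```

The assertion holds because both computations identify the direction of maximum variance of the centered point cloud.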

York Regression

York regression extends the Deming regression framework to accommodate heteroscedastic measurement errors—where error variances vary across observations—and potential correlations between the errors in the independent and dependent variables. Developed by geophysicist Derek York, the method was first introduced in 1966 for the uncorrelated case and later generalized to handle correlated errors with unequal variances, with unified equations for the slope, intercept, and standard errors presented by York and co-authors in 2004. Unlike the homoscedastic assumptions of the original Deming model, York regression provides a more flexible approach for datasets where measurement precisions differ, such as in analytical techniques with varying sample qualities. The model generalizes the errors-in-variables structure as follows: for each observation i, the measured x_i = \xi_i + \varepsilon_i where \mathrm{Var}(\varepsilon_i) = \sigma_{x_i}^2, and y_i = \eta_i + \delta_i where \mathrm{Var}(\delta_i) = \sigma_{y_i}^2 and \mathrm{Cov}(\varepsilon_i, \delta_i) = \rho_i \sigma_{x_i} \sigma_{y_i}, with \xi_i and \eta_i lying on the true line \eta = \alpha + \beta \xi. This formulation accounts for both the heteroscedasticity in \sigma_{x_i}^2 and \sigma_{y_i}^2 and the correlation coefficient \rho_i, which can be zero if errors are uncorrelated. The estimation proceeds via an iterative maximum likelihood procedure, typically starting with an initial guess for the slope \beta (e.g., from ordinary least squares). Weights are computed as w_i = \frac{1}{\sigma_{y_i}^2 + \beta^2 \sigma_{x_i}^2 - 2 \beta \rho_i \sigma_{x_i} \sigma_{y_i}}, the inverse of the variance of each point's residual about the line. With weighted means \bar{x} = \sum_i w_i x_i / \sum_i w_i and \bar{y} = \sum_i w_i y_i / \sum_i w_i and centered values U_i = x_i - \bar{x}, V_i = y_i - \bar{y}, the slope is updated as \beta = \frac{\sum_i w_i \beta_i V_i}{\sum_i w_i \beta_i U_i}, where \beta_i = w_i \left[ \sigma_{y_i}^2 U_i + \beta \sigma_{x_i}^2 V_i - (\beta U_i + V_i) \rho_i \sigma_{x_i} \sigma_{y_i} \right]. This process repeats, recalculating the weights and \beta until convergence, often within a few iterations. The intercept is then \alpha = \bar{y} - \beta \bar{x}. These steps derive from minimizing the weighted sum of squared distances to the line, yielding the maximum-likelihood estimates under the model assumptions. York regression offers greater flexibility for real-world datasets with unequal measurement precisions and error correlations, outperforming simpler methods in scenarios where error structures are known or estimated. It has been widely adopted in geochronology for fitting isochrons to radiometric data with varying analytical uncertainties, improving age determinations in isotopic studies. In analytical chemistry, it facilitates fitting calibration lines where both concentrations and instrumental responses carry heteroscedastic errors, enhancing accuracy in quantitative analyses such as atomic emission spectrometry or mass spectrometry.
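A compact sketch of the iterative procedure described above, following the unified equations of York et al. (2004); the function name and defaults are illustrative, and input validation and the standard-error formulas are omitted:

```python
import numpy as np

def york_fit(x, y, sx, sy, r=None, tol=1e-12, max_iter=100):
    """Iterative York line fit with per-point errors sx, sy and
    optional per-point error correlations r. Returns (intercept, slope)."""
    x, y, sx, sy = (np.asarray(a, dtype=float) for a in (x, y, sx, sy))
    r = np.zeros_like(x) if r is None else np.asarray(r, dtype=float)
    wx, wy = 1.0 / sx**2, 1.0 / sy**2        # per-point weights omega(X), omega(Y)
    alpha = np.sqrt(wx * wy)
    b = np.polyfit(x, y, 1)[0]               # OLS slope as the starting value
    for _ in range(max_iter):
        # W_i = 1 / (sy_i^2 + b^2 sx_i^2 - 2 b r_i sx_i sy_i)
        W = wx * wy / (wx + b**2 * wy - 2.0 * b * r * alpha)
        xbar = np.sum(W * x) / np.sum(W)     # weighted means
        ybar = np.sum(W * y) / np.sum(W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx - (b * U + V) * r / alpha)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        converged = abs(b_new - b) < tol
        b = b_new
        if converged:
            break
    return ybar - b * xbar, b
```

Setting all sx equal, all sy equal, and r = 0 reduces the iteration to an ordinary Deming fit with λ = sx²/sy².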

Practical Aspects

Applications in Measurement Comparison

Deming regression finds primary application in method comparison studies, where it is employed to assess agreement between two measurement methods, such as laboratory instruments analyzing the same samples, while accounting for errors in both datasets to detect constant and proportional biases. This approach is particularly valuable in laboratory medicine for evaluating the interchangeability of assays, as it provides unbiased estimates of systematic bias and reliable confidence intervals when both methods exhibit measurement variability. In bioinformatics, Deming regression serves as an alternative to Bland-Altman analysis for validating next-generation sequencing (NGS) assays against reference methods, especially for error-prone quantitative data; for instance, it has been applied to compare quantitative NGS outputs with reference measurements, revealing biases that simpler methods might overlook. In clinical laboratory analysis, it is used to construct calibration relationships where instrument errors affect both independent and dependent variables, such as in CD4+ cell count measurements for HIV monitoring, helping to mitigate biases from censored data below the limit of quantitation. In environmental monitoring, Deming regression facilitates the comparison of low-cost pollutant sensors, such as Cairpol and Aeroqual devices, against reference instruments during biomass burning plume studies, quantifying collocated precision and bias for gases such as CO and NO₂. A notable 2017 study compared Bland-Altman, Deming, and simple linear regression for evaluating NGS assay accuracy. The analysis demonstrated Deming regression's advantages in handling error-prone data, as it effectively detected both constant and proportional errors in quantitative NGS values relative to the reference method, providing more robust agreement assessments than the other approaches. In these applications, a slope near 1 and an intercept near 0 in the Deming regression line indicate that the methods are interchangeable, with confidence bands around the line used to assess prediction reliability and the significance of any bias. Post-2020 developments have integrated Deming regression into quality control for AI-assisted measurements through advanced frameworks, such as a 2024 two-stage approach that first estimates the error variances and then fits the model, enhancing accuracy in clinical risk associations such as stroke and bleeding predictions in atrial fibrillation patients.

Limitations and Software Implementations

Deming regression is sensitive to misspecification of the error variance ratio λ: an incorrect value can introduce substantial bias in the slope estimate, potentially reaching up to two-thirds of the bias observed in ordinary least squares regression under similar conditions. This sensitivity arises because λ directly determines the weighting of errors in both variables, and errors in its estimation—often derived from analytical variances—propagate to the fitted line, particularly when the variances are unequal or poorly estimated. Additionally, the method assumes a linear relationship between the variables and independent measurement errors that are normally distributed with zero mean; violations, such as nonlinearity or the presence of outliers, can distort results, especially in small datasets where outliers exert disproportionate influence. Standard Deming regression provides closed-form estimators for the slope and intercept, though weighted variants require iterative re-weighting procedures that can increase computational demands for very large datasets. To address these limitations, practitioners often employ bootstrap resampling to construct confidence intervals when normality assumptions fail, providing robust uncertainty estimates without relying on asymptotic approximations. Simulations are also recommended for robustness checks, allowing evaluation of parameter stability under various error structures or λ values through methods that mimic real-world data scenarios. Several software tools implement Deming regression, supporting both the basic form and extensions such as York regression for correlated errors. In R, the 'deming' package on CRAN provides functions for standard Deming fits, including weighted variants and confidence intervals, with updates as recent as 2023. The 'IsoplotR' package extends this to York regression via an iterative least-squares algorithm that handles correlated errors, though users should monitor convergence warnings, as high error correlations can lead to non-convergence or unstable solutions requiring adjusted initial values or custom scripts. In SAS, Deming regression can be performed using macros built on PROC IML or other custom implementations available for version 9.4 and later, enabling scatter plots and confidence bands for method comparison. In Python, the scipy.odr module provides orthogonal distance regression, which approximates a Deming fit when the per-variable error weights are set according to λ, though a general Deming implementation requires additional scripting. Commercial options include NCSS software, which features a dedicated Deming regression procedure with assumption diagnostics and outlier detection, and MATLAB File Exchange contributions offer linear Deming fits. Recent open-source developments include the 2024 MCS software for method comparison studies using Deming regression, broadening its application in biomedical research.
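As a sketch of the bootstrap approach described above, the following Python fragment resamples (x, y) pairs and returns a percentile interval for the slope; the helper function and all names are illustrative rather than from any published package:

```python
import numpy as np

def deming_slope(x, y, lam=1.0):
    # Closed-form Deming slope (lam = sigma_x^2 / sigma_y^2).
    xc, yc = x - x.mean(), y - y.mean()
    sxx, syy, sxy = np.sum(xc**2), np.sum(yc**2), np.sum(xc * yc)
    disc = np.sqrt((lam * syy - sxx) ** 2 + 4.0 * lam * sxy**2)
    return (lam * syy - sxx + disc) / (2.0 * lam * sxy)

def bootstrap_slope_ci(x, y, lam=1.0, n_boot=2000, level=0.95, seed=1):
    """Percentile bootstrap CI for the Deming slope by resampling pairs."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample (x_i, y_i) pairs with replacement
        slopes[b] = deming_slope(x[idx], y[idx], lam)
    lo, hi = np.quantile(slopes, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

Resampling whole pairs preserves the joint error structure of each observation, which is why it is preferred over residual resampling in the errors-in-variables setting.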
