
White test

The White test, formally known as White's test for heteroskedasticity, is a Lagrange multiplier-based statistical procedure designed to detect heteroskedasticity—non-constant variance—in the error terms of a regression model. Proposed by Halbert White in 1980 as part of a broader framework for handling model misspecification, the test assesses whether the conditional variance of the residuals depends on the explanatory variables without assuming a specific form of heteroskedasticity, making it a general diagnostic tool in econometrics. To implement the test, one first estimates the primary model using ordinary least squares (OLS) to obtain the residuals \hat{e}_i. These squared residuals \hat{e}_i^2 are then regressed in an auxiliary model on the original explanatory variables, their squares, and all pairwise cross-products, forming a set of regressors that capture potential nonlinear dependencies. The test statistic is computed as n R^2, where n is the sample size and R^2 is the coefficient of determination from the auxiliary regression; under the null hypothesis of homoskedasticity (constant error variance independent of the regressors), this statistic converges asymptotically to a chi-squared distribution with degrees of freedom equal to the number of explanatory variables (excluding the intercept) in the auxiliary model, \frac{p(p+3)}{2} for p original explanatory variables. The test's robustness stems from its lack of reliance on normality assumptions for the errors and its ability to identify general forms of heteroskedasticity, though it can consume substantial degrees of freedom in models with many regressors and may also flag other forms of misspecification, such as omitted variables. The test is widely used in economics, finance, and the social sciences; a rejection of the null hypothesis prompts remedies such as heteroskedasticity-consistent standard errors (also introduced by White) or weighted least squares to restore valid inference.
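
For concreteness, with p = 2 original explanatory variables the auxiliary regression contains the following non-intercept terms, matching the \frac{p(p+3)}{2} count:
latex
% Worked degrees-of-freedom count for p = 2 original regressors
\underbrace{X_1,\ X_2}_{p \text{ levels}},\quad
\underbrace{X_1^2,\ X_2^2}_{p \text{ squares}},\quad
\underbrace{X_1 X_2}_{p(p-1)/2 \text{ cross-products}}
\;\Longrightarrow\;
\mathrm{df} = 2 + 2 + 1 = 5 = \left.\frac{p(p+3)}{2}\right|_{p=2}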

Introduction

Definition

The White test is a statistical diagnostic procedure employed in econometrics to assess the presence of heteroskedasticity in the residuals of an ordinary least squares (OLS) regression model. It functions as a Lagrange multiplier (LM) test, evaluating the null hypothesis of homoskedasticity—under which the variance of the error terms remains constant across all observations—against the alternative hypothesis of general heteroskedasticity, where this variance varies systematically. Heteroskedasticity represents a violation of the standard OLS assumption of homoscedastic errors, potentially leading to biased standard errors and unreliable hypothesis tests if undetected. Developed for application in models involving multiple explanatory variables, the White test examines whether the error variance depends on the regressors themselves, making it suitable for diagnosing issues in analyses common in empirical economic research. Its formulation involves an auxiliary regression of the squared OLS residuals on transformations of the original regressors, yielding a test statistic asymptotically distributed as chi-squared under the null hypothesis. A primary strength of the White test is its flexibility: it does not presuppose a particular functional form for the heteroskedasticity, instead allowing it to arise from the levels, squares, and cross-products of the regressors. This generality enables detection of a broad class of heteroskedastic patterns without requiring prior specification of the variance structure. The test is named after econometrician Halbert White, who introduced it in his seminal 1980 paper.

Historical Context

The White test for heteroskedasticity was introduced by economist Halbert White in his seminal 1980 paper, "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity," published in Econometrica. In this work, White proposed both a covariance matrix estimator robust to heteroskedasticity and a corresponding Lagrange multiplier-style test to detect its presence directly, addressing a critical need in regression analysis where standard errors could be unreliable under non-constant variance. The test emerged amid the broader econometric developments of the 1970s and 1980s, a period marked by increasing emphasis on robust methods and diagnostic tools for linear models. During this era, researchers grappled with the limitations of classical assumptions like homoskedasticity, prompting innovations in standard-error corrections and specification testing to enhance the reliability of empirical findings in cross-sectional and time-series data. White's contribution built on prior awareness of heteroskedasticity issues, reflected in earlier diagnostics such as the Goldfeld-Quandt and Breusch-Pagan tests, but offered a general, distribution-free approach that does not require a specific form for the variance process. Upon publication, the White test rapidly gained prominence as a standard diagnostic in applied econometrics due to its simplicity, asymptotic validity under mild conditions, and effectiveness against general forms of heteroskedasticity. It marked the onset of a new paradigm in robust inference, influencing subsequent advancements in econometric software and practice. As of November 2025, the original paper had amassed 36,783 citations, underscoring its enduring influence. Over time, the test has been incorporated into comprehensive specification testing frameworks, such as those combining multiple diagnostics for model adequacy, while White himself did not publish significant revisions or extensions of the original formulation.

Theoretical Foundation

Homoskedasticity Assumption

The ordinary least squares (OLS) regression model posits a linear relationship between a dependent variable Y and a set of regressors X, expressed as Y = X\beta + \varepsilon, where \beta represents the unknown parameters and \varepsilon denotes the error term capturing unobserved factors. A fundamental assumption of this model is homoskedasticity, which requires that the variance of the error terms remains constant across all values of the regressors. This condition ensures that the spread of residuals does not systematically vary with the level of the independent variables, promoting reliable model diagnostics. Mathematically, homoskedasticity is formalized as \operatorname{Var}(\varepsilon_i \mid X) = \sigma^2 for all observations i, where \sigma^2 is a positive constant denoting the common variance. In the context of OLS, adherence to the homoskedasticity assumption, alongside the other conditions of the Gauss-Markov theorem, guarantees that the OLS estimators are unbiased and possess minimum variance among all linear unbiased estimators, thereby qualifying as the best linear unbiased estimators (BLUE). This property underpins the efficiency of OLS and validates the standard errors used in t-tests, F-tests, and confidence intervals for inference.
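
The distinction is easy to visualize by simulation. The following is a minimal sketch (assuming NumPy is available; all variable names are illustrative) that generates errors satisfying and violating \operatorname{Var}(\varepsilon_i \mid X) = \sigma^2 and compares their spread across the range of the regressor:
python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(1, 10, size=n)

# Homoskedastic errors: Var(eps | x) = 2^2 regardless of x
eps_homo = rng.normal(scale=2.0, size=n)

# Heteroskedastic errors: sd(eps | x) = 0.5 * x grows with x
eps_hetero = rng.normal(scale=0.5 * x)

# The homoskedastic spread is similar in both halves of the x range;
# the heteroskedastic spread is visibly larger for large x.
for name, eps in [("homoskedastic", eps_homo), ("heteroskedastic", eps_hetero)]:
    low = eps[x < 5.5].std()
    high = eps[x >= 5.5].std()
    print(f"{name:>16}: sd(low x) = {low:.2f}, sd(high x) = {high:.2f}")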

Implications of Heteroskedasticity

Heteroskedasticity in ordinary least squares (OLS) regression leads to biased estimates of the standard errors of the coefficient estimators, rendering conventional t-statistics, p-values, and confidence intervals unreliable for hypothesis testing and inference. This can result in over- or under-rejection of the null hypothesis, depending on the form of heteroskedasticity, thereby invalidating significance assessments. For instance, if the error variance increases with the level of an explanatory variable, the standard errors may be underestimated, leading to inflated t-statistics and a higher likelihood of falsely rejecting true null hypotheses. Although the OLS point estimates remain unbiased and consistent under heteroskedasticity, they are no longer the best linear unbiased estimators (BLUE), as the Gauss-Markov theorem's efficiency property requires homoskedasticity for minimum variance among linear unbiased estimators. Instead, the estimators exhibit higher variance than potentially available alternatives, such as generalized least squares, reducing the precision of predictions and parameter estimates. This inefficiency means that while the expected value of the estimators is correct, their sampling variability is not optimally minimized, compromising the reliability of regression-based forecasts. Heteroskedasticity commonly arises in cross-sectional data, such as regressions of household food expenditure on income, where the variance of expenditures tends to increase at higher income levels due to greater variability in spending patterns among wealthier households. In financial models, it is prevalent in analyses of asset returns, where error variances fluctuate with market conditions, such as during periods of high volatility. These patterns violate the constant variance assumption, leading to distorted inference in empirical studies. The presence of heteroskedasticity has broader implications for econometric practice, as it can mislead conclusions in models estimating relationships like the impact of education on wages or fiscal multipliers on GDP growth, potentially resulting in flawed policy recommendations based on erroneous inference. In wage models, for example, failing to account for varying variances across groups may overestimate the significance of determinants like experience or schooling. Similarly, in GDP regressions, heteroskedasticity induced by economic cycles can distort assessments of policy effects, affecting decisions on interventions such as stimulus measures.
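
A short simulation illustrates the standard-error distortion described above. This is a sketch assuming the statsmodels library, whose HC0 covariance option implements White's heteroskedasticity-consistent estimator; the data-generating process is illustrative, and with the error standard deviation proportional to x the classical standard error of the slope typically understates the robust one:
python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(1, 10, size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=x)  # error sd proportional to x
X = sm.add_constant(x)

classical = sm.OLS(y, X).fit()              # assumes homoskedasticity
robust = sm.OLS(y, X).fit(cov_type="HC0")   # White's HC0 covariance

print(f"classical SE of slope:  {classical.bse[1]:.4f}")
print(f"robust HC0 SE of slope: {robust.bse[1]:.4f}")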

Test Procedure

Auxiliary Regression Setup

The White test for heteroskedasticity begins with estimating the original ordinary least squares (OLS) regression model. Consider the model Y_i = \beta_0 + \sum_{j=1}^k \beta_j X_{ij} + \epsilon_i, where i = 1, \dots, n indexes observations, k is the number of regressors excluding the constant, and \epsilon_i are the error terms assumed to satisfy E(\epsilon_i \mid \mathbf{X}_i) = 0 and \text{Var}(\epsilon_i \mid \mathbf{X}_i) = \sigma^2 under homoskedasticity. The OLS estimator \hat{\beta} is obtained by minimizing the sum of squared residuals, yielding fitted values \hat{Y}_i = \hat{\beta}_0 + \sum_{j=1}^k \hat{\beta}_j X_{ij} and residuals \hat{\epsilon}_i = Y_i - \hat{Y}_i. The second step involves constructing an auxiliary regression using the squared OLS residuals as the dependent variable to detect potential dependence of the error variance on the regressors. The squared residuals \hat{\epsilon}_i^2 are regressed on the original regressors X_{ij}, their squares X_{ij}^2, and all pairwise cross-products X_{ij} X_{ip} for j < p. This setup allows the test to capture nonlinear forms of heteroskedasticity without prespecifying its functional form. The full auxiliary model is specified as \hat{\epsilon}_i^2 = \gamma_0 + \sum_{m=1}^k \delta_m X_{im} + \sum_{m=1}^k \gamma_m X_{im}^2 + \sum_{j=1}^{k-1} \sum_{p=j+1}^k \gamma_{jp} X_{ij} X_{ip} + u_i, where u_i is the error term in the auxiliary regression, and all slope coefficients \delta_m, \gamma_m, and \gamma_{jp} are expected to be zero under the null hypothesis of homoskedasticity. The inclusion of quadratic and interaction terms in the auxiliary regression ensures robustness to various heteroskedasticity patterns, as the error variance may depend on the curvatures or interactions of the explanatory variables. The number of non-constant regressors in this auxiliary regression is k(k+3)/2 (k levels, k squares, and k(k-1)/2 cross-products), where k is the number of original regressors excluding the constant; this quantity determines the degrees of freedom for the subsequent test statistic under the null. Listing each cross-product only once (since X_{ij} X_{ip} = X_{ip} X_{ij}) avoids perfect multicollinearity while maintaining completeness.
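
The construction of the auxiliary regressors can be expressed compactly in code. The following is a minimal sketch (assuming NumPy; white_aux_design is a hypothetical helper, not a library function) that builds the intercept, levels, squares, and cross-products and verifies the k(k+3)/2 count:
python
import numpy as np

def white_aux_design(X):
    """Build White-test auxiliary regressors from an n-by-k array X of
    non-constant regressors: intercept, levels, squares, cross-products."""
    n, k = X.shape
    cols = [np.ones(n)]                              # intercept
    cols += [X[:, j] for j in range(k)]              # levels
    cols += [X[:, j] ** 2 for j in range(k)]         # squares
    for j in range(k):
        for p in range(j + 1, k):
            cols.append(X[:, j] * X[:, p])           # cross-products, j < p
    Z = np.column_stack(cols)
    # k levels + k squares + k(k-1)/2 cross-products = k(k+3)/2 slopes
    assert Z.shape[1] - 1 == k * (k + 3) // 2
    return Z

# Example: k = 3 regressors give 3 + 3 + 3 = 9 = 3(3+3)/2 slope terms
Z = white_aux_design(np.random.default_rng(0).normal(size=(50, 3)))
print(Z.shape)  # (50, 10): intercept plus 9 auxiliary regressors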

Computation of Test Statistic

The Lagrange multiplier (LM) test statistic for the White test is computed as LM = n R^2, where n is the sample size and R^2 is the coefficient of determination from the auxiliary regression of the squared ordinary least squares residuals on a constant and the relevant first- and second-order terms of the original regressors. This form arises from regressing the squared residuals \hat{e}_i^2 against an intercept and the levels, squares, and cross-products of the explanatory variables, yielding R^2 as the measure of explained variation in residual variance. Under the null hypothesis of homoskedasticity, the LM statistic follows an asymptotic chi-squared distribution with degrees of freedom equal to the number of non-constant regressors in the auxiliary regression, which is k(k+3)/2 for a model with k original regressors (accounting for the level, quadratic, and interaction terms). An equivalent matrix form of the statistic is LM = n D_n(\hat{\beta}_n, \hat{\sigma}_n)' B_n^{-1} D_n(\hat{\beta}_n, \hat{\sigma}_n), where D_n captures deviations of squared residuals scaled by second-moment regressor products, and B_n is a consistent estimator of the relevant covariance matrix; however, the n R^2 computation is preferred for its simplicity and direct applicability in practice. In implementation, only the R^2 from the auxiliary regression is required to obtain the LM statistic, without needing to estimate or interpret the coefficients of the auxiliary model beyond assessing the overall fit. Rejection of the null occurs when the LM statistic exceeds the critical value from the chi-squared distribution, indicating evidence of heteroskedasticity.
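
The full computation can be sketched end to end. The following assumes NumPy, SciPy, and statsmodels, with a simulated data set whose error variance depends on the first regressor (all names illustrative):
python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = 1 + X @ np.array([0.5, -0.3]) + rng.normal(scale=1 + X[:, 0] ** 2)

# Step 1: residuals from the original OLS fit
resid = sm.OLS(y, sm.add_constant(X)).fit().resid

# Step 2: auxiliary regression of squared residuals on levels,
# squares, and the single cross-product (k = 2)
Z = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
aux = sm.OLS(resid ** 2, Z).fit()

# Step 3: LM = n * R^2, compared to chi-squared with k(k+3)/2 = 5 df
lm = n * aux.rsquared
df = Z.shape[1] - 1
p_value = stats.chi2.sf(lm, df)
print(f"LM = {lm:.2f}, df = {df}, p-value = {p_value:.4g}")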

Interpretation and Decision

Hypotheses

The White test for heteroskedasticity in linear regression models is framed as a test of two competing hypotheses regarding the variance of the error terms. The null hypothesis H_0 posits homoskedasticity, stating that the conditional variance of the errors is constant across all observations, i.e., \operatorname{Var}(\varepsilon_i \mid X_i) = \sigma^2 for all i, where \varepsilon_i are the error terms, X_i are the regressors, and \sigma^2 is a fixed positive constant. This assumption implies that the errors are homoskedastic and independent of the explanatory variables, ensuring that the usual ordinary least squares (OLS) covariance matrix estimator is consistent. The alternative hypothesis H_1 asserts the presence of heteroskedasticity, where \operatorname{Var}(\varepsilon_i \mid X_i) \neq \sigma^2 and may vary with the regressors X_i in an unspecified manner. Under H_1, the error variance is not constant, potentially leading to inefficient standard errors and invalid inference in OLS estimation. As a Lagrange multiplier (LM) test, the White test evaluates the joint significance of the coefficients on the non-constant terms in an auxiliary regression of squared OLS residuals on powers and cross-products of the regressors, specifically testing whether these coefficients \gamma_j = 0 for all j corresponding to non-intercept terms. Rejection of H_0 occurs if the test statistic, based on this joint test, indicates statistical significance. A key feature of the White test's hypotheses is their generality: the alternative does not impose a specific parametric form on the heteroskedasticity, such as quadratic dependence on a single variable, distinguishing it from tests like the Breusch-Pagan test that assume a particular structure. This nonparametric approach to the alternative makes the test robust to various patterns of variance heterogeneity.
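
Stated in terms of the auxiliary-regression coefficients, with m denoting the number of non-intercept auxiliary terms, the hypotheses take the compact form:
latex
H_0:\ \gamma_1 = \gamma_2 = \cdots = \gamma_m = 0
\qquad \text{versus} \qquad
H_1:\ \gamma_j \neq 0 \ \text{for at least one } j \in \{1, \dots, m\}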

Critical Values and P-Values

The decision rule for interpreting the White test statistic, denoted LM and testing the null hypothesis of homoskedasticity against the alternative of heteroskedasticity, is to reject the null if LM exceeds the critical value \chi^2_{\alpha, df} from the chi-squared distribution, where \alpha is the chosen significance level (e.g., 0.05) and df equals the number of regressors in the auxiliary regression excluding the intercept. This critical value is obtained from standard chi-squared tables or statistical software based on the degrees of freedom determined by the model's regressors. Alternatively, the p-value approach computes the probability that a chi-squared random variable with df degrees of freedom is at least as large as the observed LM under the null hypothesis; the null is rejected if this p-value is less than \alpha. For example, at \alpha = 0.05 and df = 2, the critical value is approximately 5.99; an LM statistic of 7.2 would yield a p-value of about 0.027, below 0.05, leading to rejection of homoskedasticity. The chi-squared distribution provides an asymptotic approximation for the LM statistic, with validity improving as the sample size n grows, assuming the errors are independent and the regressors are exogenous. In small samples, where the approximation may be less reliable, bootstrapping procedures can offer improved finite-sample performance for p-value estimation, though such methods lie beyond the standard asymptotic framework.
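
These computations are routine in software; a minimal sketch using SciPy's chi-squared distribution reproduces the example figures above:
python
from scipy import stats

alpha, df = 0.05, 2
critical = stats.chi2.ppf(1 - alpha, df)   # about 5.991
lm = 7.2
p_value = stats.chi2.sf(lm, df)            # about 0.027

print(f"critical value at alpha={alpha}, df={df}: {critical:.3f}")
print(f"p-value for LM = {lm}: {p_value:.4f}")
print("reject H0" if lm > critical else "fail to reject H0")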

Limitations and Comparisons

Key Limitations

One key limitation of the White test is its tendency to over-reject the null hypothesis of homoskedasticity in the presence of model specification errors, such as omitted variables or incorrect functional forms, rather than detecting heteroskedasticity alone. A significant test statistic may thus reflect general model misspecification, leading to false positives for heteroskedasticity that require additional diagnostics to disentangle the true cause. The test's asymptotic chi-squared distribution under the null hypothesis renders it unreliable in small samples, typically those with fewer than 100 observations, where its power to detect heteroskedasticity is low and rejection rates can be distorted. Simulations indicate poor performance relative to other tests, particularly for certain patterns like time-varying heteroskedasticity, underscoring the need for caution or alternative approaches in limited datasets. Computationally, the auxiliary regression includes up to \frac{k(k+1)}{2} terms comprising squares and cross-products of the original k regressors, which can rapidly consume degrees of freedom and become unwieldy with more than a few predictors. This proliferation of terms often induces severe multicollinearity, especially when original regressors are highly correlated, potentially resulting in near-singular matrices and unstable estimates. Finally, even when the test rejects homoskedasticity, it provides no insight into the specific form or structure of the heteroskedasticity, necessitating further exploratory analyses to inform corrective measures like feasible generalized least squares.
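
The degrees-of-freedom burden noted above grows quadratically with the number of regressors, as a quick computation shows (a sketch; the counts follow directly from the formulas in the test-procedure section):
python
# Second-order terms (squares + cross-products) grow as k(k+1)/2;
# adding the k levels gives k(k+3)/2 non-constant auxiliary regressors.
for k in (2, 5, 10, 20):
    second_order = k * (k + 1) // 2
    total = second_order + k
    print(f"k = {k:2d}: {second_order:3d} second-order terms, "
          f"{total:3d} auxiliary slopes")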

Comparison to Other Tests

The White test offers a more general approach to detecting heteroskedasticity compared to the Breusch-Pagan test, as it does not impose a specific functional form on the error variance, such as the multiplicative heteroskedasticity assumed by the latter, and instead incorporates squared terms and cross-products of the explanatory variables in its auxiliary regression to capture nonlinear relationships. In contrast, the Breusch-Pagan test is simpler and computationally less intensive, relying on regressing the squared residuals solely on the original explanatory variables to test for linear dependencies in the variance, making it suitable when heteroskedasticity is expected to vary linearly with the regressors. This generality of the White test allows it to detect a broader range of heteroskedasticity patterns, though it requires more degrees of freedom and can be sensitive to model misspecification. Unlike the Goldfeld-Quandt test, which is parametric and focuses on monotonic heteroskedasticity by splitting the sample into two subgroups ordered by a suspected explanatory variable and comparing residual variances via an F-statistic, the White test is nonparametric in its specification of the variance function and does not require data partitioning or identification of a particular ordering variable, providing greater flexibility for arbitrary heteroskedasticity forms. The Goldfeld-Quandt test excels in scenarios where heteroskedasticity increases systematically with a single regressor, such as in cross-sectional data with income levels, but its power diminishes if the splitting variable is misspecified or if the heteroskedasticity is non-monotonic. The White test is typically preferred in econometric applications involving unknown or complex heteroskedasticity patterns, where its comprehensive auxiliary regression enhances detection without prior assumptions on variance structure, although it may exhibit lower power relative to more targeted tests like Breusch-Pagan or Goldfeld-Quandt when a specific form is anticipated. Post-1980 simulation studies have demonstrated that the White test achieves higher rejection rates under complex heteroskedasticity, such as quadratic or interactive forms, compared to tests assuming simpler structures, underscoring its utility in general diagnostic settings.
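
The contrast between the two auxiliary regressions is visible when both tests are run on the same fitted model. The following is a sketch assuming statsmodels, whose het_white and het_breuschpagan functions implement the two tests (the simulated data are illustrative):
python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(7)
n = 300
X = sm.add_constant(rng.normal(size=(n, 2)))
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(scale=np.abs(X[:, 1]) + 0.5)

res = sm.OLS(y, X).fit()

# White: auxiliary regression on levels, squares, and cross-products
lm_w, p_w, _, _ = het_white(res.resid, res.model.exog)
# Breusch-Pagan: auxiliary regression on the levels only
lm_bp, p_bp, _, _ = het_breuschpagan(res.resid, res.model.exog)

print(f"White:         LM = {lm_w:.2f}, p = {p_w:.4f}")
print(f"Breusch-Pagan: LM = {lm_bp:.2f}, p = {p_bp:.4f}")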

Software Implementations

In R

The White test for heteroskedasticity can be performed in R using the lmtest package, which implements the test through the bptest function with a variance formula specifying the original regressors, their squares, and pairwise interactions to capture the general form of heteroskedasticity. To use this functionality, first install the package via install.packages("lmtest") and load it with library(lmtest). The package handles the auxiliary regression and computation of the test statistic internally, requiring only a fitted linear model object from lm() and the appropriate variance formula. For example, consider a linear regression model model <- lm(y ~ x1 + x2, data = df). The White test is then executed as follows:
r
bptest(model, ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2, data = df)
This code regresses the squared residuals on the specified terms and derives the Lagrange multiplier statistic based on nR², where n is the sample size and R² is from the auxiliary regression. The output reports the LM statistic (equivalent to nR²), degrees of freedom (equal to the number of terms in the variance formula excluding the intercept), and the corresponding p-value under the chi-squared distribution. For instance, an output might show LM = 12.34, df = 5, p-value = 0.03, indicating rejection of the null hypothesis of homoskedasticity at the 5% level if p < 0.05.

In Python

The White test for heteroscedasticity is implemented in Python through the statsmodels library, a widely used package for econometric and statistical modeling that provides robust diagnostic tools. The core function, het_white from the statsmodels.stats.diagnostic module, performs White's Lagrange multiplier test by fitting an auxiliary regression of squared residuals against the original exogenous variables, their squares, and pairwise cross-products. This implementation assumes the presence of a constant term in the exogenous variables and is designed to work seamlessly with results from ordinary least squares (OLS) regressions fitted via statsmodels.api. Installation of statsmodels is straightforward using the Python package installer pip, executed via the command pip install statsmodels in a terminal or command prompt; this requires Python 3.8 or later and dependencies such as NumPy and SciPy, which are automatically resolved. Once installed, the function integrates with OLS model outputs, where residuals are obtained from model.resid and exogenous variables from model.model.exog, enabling a direct application to fitted regression objects. A typical usage involves importing the necessary modules, fitting an OLS model, and passing the residuals and exogenous variables to het_white. The following code block illustrates this process with synthetic data exhibiting heteroscedasticity for demonstration:
python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

# Generate example data
np.random.seed(0)
n = 100
X = np.random.normal(size=(n, 2))
X = sm.add_constant(X)  # Add intercept
y = 2 * X[:, 1] + np.random.normal(size=n) * (X[:, 1]**2 + 1)  # Induced heteroscedasticity

# Fit OLS model
model = sm.OLS(y, X).fit()
resid = model.resid
exog = model.model.exog

# Compute White test
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, exog)

print(f"LM Statistic: {lm_stat:.4f}")
print(f"LM P-value: {lm_pvalue:.4f}")
print(f"F Statistic: {f_stat:.4f}")
print(f"F P-value: {f_pvalue:.4f}")
Running this example prints the LM statistic with its chi-squared p-value and the companion F statistic with its p-value; because the simulated errors are strongly heteroscedastic, both p-values can be expected to fall below 0.05, confirming rejection of homoscedasticity at the 5% level (exact figures depend on the statsmodels version). The het_white function returns a tuple containing the LM test statistic (asymptotically chi-squared under the null of homoscedasticity), its p-value based on the chi-squared distribution, an alternative F-statistic, and the corresponding F p-value; degrees of freedom for the chi-squared test equal the number of non-constant terms in the auxiliary regression, k(k+3)/2 where k is the number of non-constant regressors (here k = 2, giving 5 degrees of freedom). In application, a p-value below 0.05 from either test indicates rejection of homoscedasticity, prompting further model adjustments.

In Stata and Other Tools

In Stata, the White test for heteroskedasticity is implemented as a postestimation command following a linear regression model estimated with the regress procedure. The syntax involves running estat imtest, white after the regression, which automatically performs the auxiliary regression of squared residuals on the original regressors, their squares, and cross-products, yielding the test statistic as nR^2 (where n is the sample size and R^2 is from the auxiliary model), distributed as chi-squared under the null hypothesis of homoskedasticity. The output reports the chi-squared statistic, degrees of freedom, and associated p-value. For instance, after regress y x1 x2, executing estat imtest, white provides these results directly, with a p-value below 0.05 signaling rejection of homoskedasticity.

In SAS, White's test is supported within PROC REG by including the SPEC option in the MODEL statement, which computes the test for heteroskedasticity and nonlinearity based on the auxiliary regression framework and outputs the chi-squared statistic and p-value for inference. Alternatively, users can output residuals from PROC REG using the OUTPUT statement and then apply PROC GLM to regress the squared residuals on the explanatory variables, their squares, and cross-products to derive the nR^2 statistic manually.

EViews implements the White test via the menu sequence View > Residual Diagnostics > Heteroskedasticity Tests, selecting the White option after estimating a model. This generates the test statistic and its associated p-value, with a command-line equivalent available through the hettest procedure with the white specification. Such tools streamline heteroskedasticity detection in proprietary econometric software environments.
