Error correction model
An error correction model (ECM) is a statistical model in econometrics that describes the short-run dynamics of cointegrated, non-stationary time series variables while incorporating their long-run equilibrium relationship, allowing deviations from equilibrium to be corrected over time.[1] Developed to address the limitations of differencing non-stationary series, which can obscure long-run relationships, the ECM represents such series in a form where changes in one variable depend on lagged changes and an "error correction" term that measures the deviation from long-run equilibrium.[1] The foundational work on ECMs stems from the concept of cointegration, introduced by Robert F. Engle and Clive W. J. Granger in 1987, who showed that if two or more integrated time series of order one (I(1)) are cointegrated—meaning a linear combination of them is stationary (I(0))—they can be modeled using an ECM to capture both short-term adjustments and long-term co-movements.[1]

In its basic bivariate form, the model is specified as \Delta y_t = \alpha + \beta \Delta x_t + \gamma (y_{t-1} - \delta x_{t-1}) + \epsilon_t, where \Delta denotes first differences, \gamma (typically negative) is the speed of adjustment coefficient indicating how quickly the system returns to equilibrium, and y_{t-1} - \delta x_{t-1} is the lagged equilibrium error.[2] For multivariate cases, the vector error correction model (VECM) extends this framework to n variables with r cointegrating relations, expressed as \Delta \mathbf{y}_t = \boldsymbol{\Pi} \mathbf{y}_{t-1} + \sum_{i=1}^{p-1} \boldsymbol{\Gamma}_i \Delta \mathbf{y}_{t-i} + \boldsymbol{\mu} + \boldsymbol{\epsilon}_t, where \boldsymbol{\Pi} = \boldsymbol{\alpha} \boldsymbol{\beta}' decomposes into adjustment and cointegrating vectors.[3]

ECMs are estimated through procedures like the two-step Engle-Granger method—first regressing levels to obtain residuals for cointegration testing, then estimating the ECM—or maximum likelihood approaches such as Johansen's for VECMs, which determine the cointegration rank.[1] These models are widely applied in economics to analyze relationships like consumption and income, exchange rates and interest rates, or output and employment, providing insights into both transient shocks and stable equilibria without spurious regression issues.[2]
Fundamentals
Stationarity and Unit Roots
In time series analysis, a stochastic process is said to be weakly stationary, or second-order stationary, if its mean is constant over time, its variance is finite and constant, and the autocovariance between any two observations depends solely on the time lag between them.[4] Strict stationarity, a stronger condition, requires that the joint probability distribution of any collection of observations is invariant to shifts in time.[5] Weak stationarity is often sufficient for practical modeling, as it ensures stable statistical properties essential for reliable inference, whereas strict stationarity implies weak stationarity when moments exist but is harder to verify empirically.[6] Stationary processes exhibit these constant properties, allowing for predictable behavior and valid application of standard statistical methods like autoregressive modeling.

In contrast, non-stationary processes, such as those integrated of order one, denoted I(1), lack these time-invariant properties and require first differencing to achieve stationarity.[7] I(1) processes are characterized by a unit root, where shocks have permanent effects, leading to trends that wander without bound.[8] The unit root concept arises in an autoregressive process of order one, AR(1), defined as y_t = \rho y_{t-1} + \epsilon_t, where \epsilon_t is white noise; if \rho = 1, the process becomes a random walk, which is non-stationary with variance growing linearly over time.[8] Regressing two independent I(1) series can produce spurious regressions, where high R^2 values and significant t-statistics suggest a false relationship due to shared non-stationarity rather than true dependence.[9]

To detect unit roots, the Dickey-Fuller test examines the null hypothesis of a unit root by estimating the AR(1) model and computing a test statistic on the coefficient of the lagged level, which follows a non-standard distribution under the null.[10] The augmented Dickey-Fuller (ADF) test extends this by accounting for higher-order autoregression in the errors, using the regression equation \Delta y_t = \alpha + \beta y_{t-1} + \sum_{i=1}^{p} \gamma_i \Delta y_{t-i} + \epsilon_t, where the null hypothesis is \beta = 0 (unit root), and p is chosen to ensure white noise residuals.[11] The Phillips-Perron test modifies the Dickey-Fuller statistic non-parametrically to adjust for serial correlation and heteroskedasticity in the errors, making it robust without explicitly modeling lags.[12]

Ignoring non-stationarity in analysis leads to invalid statistical inference, as standard t-tests and F-tests have non-standard distributions, often overstating significance.[8] In forecasting, the error variance of a unit root process grows linearly with the horizon, so prediction intervals widen without bound and long-term forecasts become unreliable.[13]
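The following minimal sketch (not from the article) illustrates the ADF test on simulated data, using numpy and Python's statsmodels; the seed, sample size, and AR coefficient are arbitrary illustrative choices. The random walk should fail to reject the unit root null in levels but reject it after first differencing, while the stationary AR(1) should reject in levels.

```python
# Minimal sketch (assumed setup): ADF test on a simulated random walk and a
# stationary AR(1); seed, sample size, and coefficients are arbitrary.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 500
eps = rng.standard_normal(T)

random_walk = np.cumsum(eps)          # y_t = y_{t-1} + eps_t (rho = 1, I(1))
ar1 = np.zeros(T)                     # y_t = 0.5 y_{t-1} + eps_t (stationary)
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

for name, series in [("random walk, levels", random_walk),
                     ("random walk, first differences", np.diff(random_walk)),
                     ("stationary AR(1)", ar1)]:
    stat, pvalue, *_ = adfuller(series, regression="c", autolag="AIC")
    print(f"{name:32s} ADF = {stat:6.2f}  p = {pvalue:.3f}")
```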
Cointegration
Cointegration refers to a statistical relationship among two or more non-stationary time series variables that are integrated of the same order, typically I(1), such that a linear combination of these series is stationary, or I(0). This property implies the existence of a stable long-run equilibrium relationship despite the individual series exhibiting trends or random walks. Formally, if y_t and x_t are both I(1) processes, they are cointegrated if there exists a vector \beta such that u_t = y_t - \beta x_t is I(0), capturing deviations from the long-run equilibrium.

The concept of order of integration is central to cointegration: a series is denoted I(d) if it requires differencing d times to achieve stationarity, and series cointegrated of order (d, b) are individually I(d) while some linear combination of them is I(d - b), with b > 0 measuring the reduction in the order of integration; the number of linearly independent cointegrating relations is a separate quantity known as the cointegration rank. In the common case of I(1) series, b = 1 yields an I(0) combination. This framework allows multivariate systems to share one or more common stochastic trends while maintaining equilibrium ties.

To detect cointegration, residual-based tests examine the stationarity of residuals from an ordinary least squares regression of the levels of the series, such as y_t = \beta x_t + u_t, where the cointegrating vector \beta is estimated first; if u_t is stationary, cointegration is supported. These tests address the limitations of univariate stationarity checks by focusing on multivariate dependencies.

The Engle-Granger representation theorem establishes that if series are cointegrated, they admit an error correction representation, where short-run dynamics adjust toward the long-run equilibrium defined by the cointegrating relation. This theorem links cointegration directly to error correction mechanisms, enabling models that capture both equilibrium deviations and corrective forces. Cointegration circumvents the pitfalls of spurious regressions, where non-stationary series might appear correlated due to shared trends rather than genuine relations, as highlighted in early critiques of time series analysis. By identifying true long-run equilibria, it facilitates modeling of disequilibrium corrections, essential for forecasting and policy analysis in economics.[9]
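As a hedged illustration of residual-based testing (not part of the original text), the sketch below builds two I(1) series around a shared stochastic trend and applies the augmented Engle-Granger test as implemented in Python's statsmodels; the shared-trend construction, coefficients, and seed are assumptions chosen only for demonstration.

```python
# Minimal sketch (assumed setup): residual-based cointegration testing with
# the augmented Engle-Granger test; coefficients and seed are illustrative.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
T = 500
trend = np.cumsum(rng.standard_normal(T))         # common I(1) stochastic trend

x = trend + rng.standard_normal(T)                # I(1)
y = 2.0 * trend + rng.standard_normal(T)          # I(1), cointegrated with x
z = np.cumsum(rng.standard_normal(T))             # independent I(1) series

t_stat, pvalue, _ = coint(y, x)                   # expect rejection (cointegrated)
print(f"y vs x: t = {t_stat:.2f}, p = {pvalue:.3f}")

t_stat, pvalue, _ = coint(y, z)                   # expect non-rejection
print(f"y vs z: t = {t_stat:.2f}, p = {pvalue:.3f}")
```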
Model Formulation
Single-Equation Error Correction Model
The single-equation error correction model (ECM) provides a framework for modeling the dynamics of two integrated time series, y_t and x_t, that are cointegrated, meaning they share a stable long-run relationship despite being non-stationary individually. This model captures how deviations from the long-run equilibrium influence short-run changes in y_t, allowing for the joint analysis of transient fluctuations and persistent adjustments. Introduced in the context of econometric modeling of aggregate relationships like consumption and income, the single-equation ECM extends earlier partial adjustment ideas to explicitly incorporate cointegration.[1]

The general form of the single-equation ECM for a bivariate system is given by \Delta y_t = \alpha + \sum_{i=1}^{p-1} \phi_i \Delta y_{t-i} + \sum_{j=0}^{q-1} \gamma_j \Delta x_{t-j} + \lambda (y_{t-1} - \beta x_{t-1}) + \varepsilon_t, where \Delta denotes the first difference, \alpha is a constant term, the sums of lagged differences (with coefficients \phi_i and \gamma_j) represent short-run dynamics, \beta is the long-run cointegrating coefficient, and \lambda is the speed of adjustment parameter. The term (y_{t-1} - \beta x_{t-1}) serves as the error correction term, quantifying the disequilibrium from the long-run relation y_t = \beta x_t + u_t where u_t is stationary. In this setup, short-run changes in y_t and x_t drive immediate responses, while the lagged error correction term enforces mean reversion to equilibrium; for model stability, \lambda < 0, ensuring that positive disequilibria prompt corrective reductions in \Delta y_t.[1]

This form arises from reparameterizing an unrestricted vector autoregression (VAR) in levels to highlight the cointegration structure, as per the Granger Representation Theorem. For a simple VAR(1) case, the differenced form of the first equation is \Delta y_t = \mu_1 + (a_{11} - 1) y_{t-1} + a_{12} x_{t-1} + \varepsilon_{1t}. Under cointegration, the long-run matrix has rank 1, \Pi = \alpha \beta' with \beta' = [1, -\beta], so the equation becomes \Delta y_t = \mu_1 + \lambda (y_{t-1} - \beta x_{t-1}) + \varepsilon_{1t}, where \lambda = \alpha_1 < 0. This derivation extends to higher-order VAR(p) models by including additional lagged differences, preserving the error correction mechanism.[1]

The model relies on key assumptions: the existence of exactly one cointegrating relation between y_t and x_t, weak exogeneity of x_t (implying it is not influenced by errors in the y_t equation), and serially uncorrelated errors \varepsilon_t to ensure valid inference. These conditions support the model's ability to avoid spurious regressions by differencing non-stationary series while retaining long-run information through the levels-based error term. A primary property is the super-consistency of the long-run estimator \hat{\beta}, which converges to the true value at rate T rather than the standard \sqrt{T}, enhancing precision in finite samples; additionally, the ECM framework combines I(1) levels and I(0) differences in a way that leaves all regressors stationary, yielding consistent short-run parameter estimates.[1]
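To make the role of the lagged levels concrete, the following sketch (not part of the original exposition) simulates a cointegrated pair with an assumed long-run coefficient \beta = 0.8 and adjustment speed \lambda = -0.25 and then estimates the ECM in its unrestricted one-step form by ordinary least squares, recovering the implied long-run coefficient from the level terms; the use of Python's statsmodels and all parameter values are illustrative assumptions.

```python
# Minimal sketch (assumed DGP: beta = 0.8, lambda = -0.25, arbitrary noise).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 500
x = np.cumsum(rng.standard_normal(T))        # I(1) forcing variable
y = np.empty(T)
y[0] = 0.8 * x[0]
for t in range(1, T):
    # DGP: Delta y_t = 0.3 Delta x_t - 0.25 (y_{t-1} - 0.8 x_{t-1}) + e_t
    y[t] = (y[t - 1] + 0.3 * (x[t] - x[t - 1])
            - 0.25 * (y[t - 1] - 0.8 * x[t - 1])
            + 0.5 * rng.standard_normal())

dy, dx = np.diff(y), np.diff(x)
# Unrestricted one-step ECM: regress Delta y_t on a constant, Delta x_t,
# y_{t-1}, and x_{t-1}; the long-run beta is -coef(x_{t-1}) / coef(y_{t-1}).
X = sm.add_constant(np.column_stack([dx, y[:-1], x[:-1]]))
res = sm.OLS(dy, X).fit()
lam = res.params[2]
print("adjustment speed lambda ~", round(lam, 3))
print("implied long-run beta   ~", round(-res.params[3] / lam, 3))
```

In applied work the two-step Engle-Granger procedure described below, or the system-based Johansen estimator, is more commonly reported; the one-step form is shown here only because it makes the error correction mechanism explicit in a single regression.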
Vector Error Correction Model
The Vector Error Correction Model (VECM) extends the single-equation error correction model to multivariate time series systems, allowing for the analysis of multiple integrated variables that may share long-run equilibrium relationships. In a VECM framework, the model captures both short-run dynamics and long-run cointegrating relations among a vector of I(1) variables, enabling the study of feedback effects across all variables in the system.[3]

The standard VECM formulation for an n-dimensional vector Y_t is given by \Delta Y_t = \mu + \sum_{i=1}^{k-1} \Phi_i \Delta Y_{t-i} + \Pi Y_{t-1} + \varepsilon_t, where \mu is a constant term, \Phi_i are matrices of short-run coefficients, \Pi is the long-run coefficient matrix, and \varepsilon_t is a vector of white noise errors. The matrix \Pi is decomposed as \Pi = \alpha \beta', where \alpha (n × r) contains the adjustment speeds toward equilibrium and \beta (n × r) is the cointegration matrix defining the long-run relations, with r denoting the cointegration rank.[3] The cointegration rank r determines the number of linearly independent long-run equilibrium relations in the system, where 0 < r < n implies r cointegrating vectors and (n - r) common stochastic trends. Identification of \beta typically involves normalization, such as setting one element to unity in each cointegrating vector to resolve indeterminacy.[3]

In interpretation, the term \Pi Y_{t-1} = \alpha \beta' Y_{t-1} measures deviations from long-run equilibrium, with \alpha indicating how each variable adjusts to these disequilibria, while the \Phi_i \Delta Y_{t-i} terms model short-run dynamics and allow for feedback among all variables. Key assumptions underlying the VECM include that the variables in Y_t are I(1) with no I(2) components, that the errors \varepsilon_t are serially uncorrelated (and typically assumed Gaussian for likelihood-based inference), and that weak exogeneity restrictions may be imposed on certain variables in conditional modeling. Relative to the single-equation error correction model, which can be viewed as a conditional special case with r = 1 and a single dependent variable of focus, the VECM offers advantages in capturing system-wide dynamics and multiple equilibria, making it particularly suitable for policy analysis in interdependent economic systems.[3]
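A brief sketch of fitting such a system in practice is given below; the simulated three-variable design with one common trend, the lag order, the chosen rank, and the use of Python's statsmodels are illustrative assumptions rather than part of the text above.

```python
# Sketch (assumed setup): fit a VECM and inspect the estimated alpha and beta.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
T = 400
trend = np.cumsum(rng.standard_normal(T))          # one common stochastic trend
Y = np.column_stack([
    trend + rng.standard_normal(T),
    0.5 * trend + rng.standard_normal(T),
    -trend + rng.standard_normal(T),
])                                                 # n = 3 variables, r = 2 expected

res = VECM(Y, k_ar_diff=1, coint_rank=2, deterministic="co").fit()
print("alpha (3 x 2 adjustment matrix):\n", res.alpha)
print("beta  (3 x 2 cointegration matrix):\n", res.beta)
```

Because the three series here are driven by a single common stochastic trend, the expected cointegration rank is r = 2, and res.alpha and res.beta correspond to the \alpha and \beta matrices in the decomposition \Pi = \alpha \beta'.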
Estimation Methods
Engle-Granger Two-Step Procedure
The Engle-Granger two-step procedure provides a straightforward method for estimating a single-equation error correction model (ECM) in the presence of cointegration between non-stationary time series variables. Developed as part of the foundational work on cointegration, it relies on ordinary least squares (OLS) regressions to first identify the long-run equilibrium and then model short-run dynamics with error correction.

In the first step, the long-run cointegrating relationship is estimated via OLS on the static regression y_t = \beta x_t + u_t, where y_t and x_t are I(1) variables, \beta is the cointegrating parameter, and u_t represents deviations from equilibrium. The residuals \hat{u}_t from this regression are then tested for stationarity using an Augmented Dickey-Fuller (ADF) test (or similar unit root test) on the equation \Delta \hat{u}_t = \rho \hat{u}_{t-1} + \sum_{i=1}^{k} \gamma_i \Delta \hat{u}_{t-i} + \epsilon_t, with the null hypothesis \rho = 0 indicating no cointegration; rejection confirms the existence of a cointegrating relation. Because \hat{u}_t is constructed from an estimated cointegrating vector, standard Dickey-Fuller critical values do not apply, and specialized Engle-Granger (MacKinnon) critical values must be used.[14]

The second step involves estimating the ECM using OLS on the differenced form \Delta y_t = \alpha + \sum_{i=1}^{p} \beta_i \Delta y_{t-i} + \sum_{j=0}^{q} \gamma_j \Delta x_{t-j} + \lambda \hat{u}_{t-1} + \varepsilon_t, where \hat{u}_{t-1} are the lagged residuals from step 1 serving as the error correction term, \lambda (expected to be negative and significant) measures the speed of adjustment to equilibrium, and lags p and q are selected based on information criteria to ensure the residuals are white noise. This step captures both short-run changes in the variables and the long-run equilibrium correction.[14]

This procedure offers key advantages in its simplicity and accessibility, requiring only standard OLS software without the need for full-system estimation or advanced optimization techniques, making it suitable for applied analysis of bivariate relations.[15] However, it has notable limitations, including bias in the single-equation framework that can lead to inefficient estimates, particularly in small samples; invalid standard errors for the long-run \beta due to the two-stage nature and endogeneity issues; and restriction to a single cointegrating relation, overlooking potential multivariate dynamics. These issues can result in poor performance for inference on the cointegrating vector and adjustment speeds.[16][15][17]

Following estimation, the significance of \lambda is assessed via a t-test; a negative estimate with a t-statistic below roughly -2 at conventional levels supports the error correction mechanism. As an alternative that relaxes the pre-testing requirements, the autoregressive distributed lag (ARDL) bounds testing approach can be considered for robustness, though it extends beyond the residual-based focus here.[18]
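The two steps can be reproduced on simulated data roughly as follows (a sketch, not the article's own computation; the true value \beta = 0.8, the AR(1) equilibrium error implying \lambda \approx -0.3, and the use of Python's statsmodels are assumptions of the simulation).

```python
# Sketch of the two-step procedure on simulated cointegrated series.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
T = 300
x = np.cumsum(rng.standard_normal(T))              # I(1) regressor
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.7 * u[t - 1] + rng.standard_normal()  # stationary equilibrium error
y = 0.8 * x + u                                    # cointegrated with x

# Step 1: static regression of y on x, then a unit root test on the residuals.
step1 = sm.OLS(y, sm.add_constant(x)).fit()
u_hat = step1.resid
adf_stat = adfuller(u_hat)[0]      # Engle-Granger critical values should be used here
print("beta_hat:", step1.params[1], " ADF on residuals:", adf_stat)

# Step 2: ECM in first differences with the lagged step-1 residual.
dy, dx = np.diff(y), np.diff(x)
X = sm.add_constant(np.column_stack([dx, u_hat[:-1]]))   # [const, Delta x_t, u_hat_{t-1}]
step2 = sm.OLS(dy, X).fit()
print("speed of adjustment lambda:", step2.params[2])
```

The coint function in the same library implements the step-1 test with the appropriate Engle-Granger (MacKinnon) critical values, which is preferable to applying the ordinary ADF distribution to the residuals as done in this simplified sketch.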
Johansen Maximum Likelihood Estimation
The Johansen maximum likelihood estimation procedure provides a full-information approach to estimating vector error correction models (VECMs) for multivariate cointegrated systems. It begins by reparameterizing a vector autoregressive (VAR) model in levels, typically of order k, as a VECM: \Delta Y_t = \Pi Y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta Y_{t-i} + \mu + \epsilon_t, where Y_t is an n \times 1 vector of I(1) variables, \Pi = \alpha \beta' with \alpha and \beta being n \times r matrices of adjustment speeds and cointegrating vectors (rank r < n), \Pi = A_1 + \cdots + A_k - I_n and \Gamma_i = -(A_{i+1} + \cdots + A_k) in terms of the coefficient matrices A_1, \dots, A_k of the levels VAR, \mu is a constant term, and \epsilon_t \sim N(0, \Omega). This transformation imposes the cointegration restriction on the long-run matrix \Pi, assuming Gaussian errors for likelihood maximization.[19]

The estimation maximizes the Gaussian likelihood function for the VECM, which can be concentrated by partialling out the short-run dynamics and applying reduced-rank regression to \Pi. Specifically, the procedure regresses \Delta Y_t on the lagged differences \Delta Y_{t-i} to obtain residuals R_{0t}, and regresses Y_{t-1} on the same lagged differences to obtain residuals R_{1t}. With the product moment matrices S_{ij} = T^{-1} \sum_t R_{it} R_{jt}' (for i, j = 0, 1), the cointegrating vectors \hat{\beta} are the eigenvectors associated with the r largest eigenvalues of the generalized eigenvalue problem |\lambda S_{11} - S_{10} S_{00}^{-1} S_{01}| = 0, and \hat{\alpha} is obtained from a subsequent regression of R_{0t} on \hat{\beta}' R_{1t}. This reduced-rank approach ensures efficient estimation under the cointegration hypothesis.[20][19]

To determine the cointegration rank r, the procedure employs sequential likelihood ratio tests: the trace test and the maximum eigenvalue test. The trace test statistic for the null hypothesis of at most r cointegrating relations is \lambda_{\text{trace}}(r) = -T \sum_{i=r+1}^{n} \log(1 - \hat{\lambda}_i), where T is the sample size and \hat{\lambda}_i are the ordered eigenvalues from the reduced-rank regression (\hat{\lambda}_1 \geq \cdots \geq \hat{\lambda}_n). The maximum eigenvalue test for the null of rank r against the alternative of rank r+1 is \lambda_{\max}(r, r+1) = -T \log(1 - \hat{\lambda}_{r+1}). Testing proceeds sequentially from r=0 upward until the null is not rejected, using critical values derived from the asymptotic distribution under the null, which depends on the deterministic terms included (such as intercepts or trends). These tests provide a framework for inference on r.[19]

Identification of the cointegrating vectors requires normalization and restrictions on \beta, as the decomposition \Pi = \alpha \beta' is unique only up to a nonsingular rotation of its r columns. Typically, each column of \beta is normalized by setting one coefficient to 1 (e.g., on the endogenous variable of interest), which suffices for uniqueness when r = 1. For r > 1, just-identification requires r restrictions on each cointegrating vector (r^2 in total, counting the normalizations), leaving r(n - r) free parameters in \beta, while over-identifying restrictions allow economic hypotheses on \beta or \alpha to be tested via likelihood ratio statistics.[20]

The Johansen procedure offers several advantages, including asymptotic efficiency from full maximum likelihood estimation, which provides consistent standard errors, confidence intervals for cointegrating parameters, and tests for linear restrictions on \alpha and \beta. Unlike two-step methods, it directly handles multivariate systems with multiple cointegrating relations and avoids bias from preliminary estimations.
Extensions accommodate I(2) processes by modeling polynomial cointegration, testing for the I(2) rank via additional reduced-rank conditions on the second differences.[19] Implementations of the Johansen procedure are available in statistical software, such as the urca and tsDyn packages in R for reduced-rank estimation and rank testing, and the vec and vecrank commands in Stata, which support various deterministic term specifications.
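Python's statsmodels library also provides the Johansen reduced-rank routine through its coint_johansen function; the following minimal sketch applies it to simulated data, where the three-variable design, lag order, and deterministic specification are illustrative assumptions rather than part of the text above.

```python
# Minimal sketch (assumed setup): Johansen trace and max-eigenvalue statistics.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(5)
T = 400
trend = np.cumsum(rng.standard_normal(T))                  # single common trend
Y = np.column_stack([trend + rng.standard_normal(T),
                     2.0 * trend + rng.standard_normal(T),
                     np.cumsum(rng.standard_normal(T))])   # third series has its own trend

result = coint_johansen(Y, det_order=0, k_ar_diff=1)       # det_order=0: constant term
print("eigenvalues:          ", result.eig)
print("trace statistics:     ", result.lr1)    # H0: rank <= r, for r = 0, 1, 2
print("trace 5% crit. values:", result.cvt[:, 1])
print("max-eigen statistics: ", result.lr2)    # H0: rank = r vs. rank = r + 1
print("max-eigen 5% crit.:   ", result.cvm[:, 1])
```

With one common trend shared by the first two series and an independent random walk as the third variable, the sequential tests would typically point to a cointegration rank of one.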
Historical Development
Early Foundations
The foundations of the error correction model (ECM) trace back to early efforts in time series analysis to address non-stationarity and dynamic relationships in economic data. In the late 1960s, Clive Granger's work on causality emphasized the importance of filtering non-stationary components in econometric models to avoid misleading inferences about causal directions between time series.[21] Concurrently, the Box-Jenkins methodology introduced autoregressive integrated moving average (ARIMA) models, which advocated differencing non-stationary series to achieve stationarity, laying groundwork for handling integrated processes in economic forecasting.[22] These approaches highlighted the pitfalls of ignoring unit roots but did not yet integrate long-run equilibrium concepts.

Earlier precursors to the ECM appeared in macroeconomic modeling during the 1960s. John Denis Sargan's 1964 analysis of wages and prices in the UK employed a partial adjustment mechanism, where deviations from long-run equilibrium influenced short-run dynamics, effectively introducing an error correction term to capture adjustment speeds in distributed lag models.[23] This idea resurfaced in the 1970s amid economic turbulence from the energy crises, which exposed limitations in static models and spurred demand for dynamic frameworks to model stagflation and supply shocks in macroeconometrics.[24] James Davidson, David Hendry, and colleagues extended this in 1978 through a buffer stock model of UK consumer expenditure, incorporating partial adjustment to income changes as a mechanism for equilibrating deviations from target levels.[25]

The 1980s marked the emergence of cointegration theory, directly informing the ECM. Granger's 1981 paper explicitly proposed error correction representations for economic time series, arguing that stationary linear combinations of non-stationary variables could represent long-run equilibria, with short-run adjustments correcting disequilibria.[26] James Stock's asymptotic analysis further underscored the issue of spurious regressions in non-stationary data, showing that least squares estimators of cointegrating parameters converge super-consistently under cointegration, thus necessitating error correction formulations to mitigate biases.[27] Culminating these developments, Robert Engle and Granger's seminal 1987 paper formalized cointegration as the existence of stationary error correction terms in vector autoregressions, providing the representation theorem linking ECMs to integrated processes and enabling rigorous testing of long-run relationships.[1]
Key Advancements
Building on the Engle-Granger two-step procedure for single-equation models, key advancements in error correction modeling emerged in the late 1980s and early 1990s with the development of multivariate frameworks.[1] Engle and Granger's work on cointegration was later recognized with the 2003 Nobel Memorial Prize in Economic Sciences.[28]

Søren Johansen developed the maximum likelihood treatment of the vector error correction model (VECM) in 1988, providing a system-based approach to cointegration analysis within vector autoregressive processes.[19] This formulation allowed for the estimation of multiple cointegrating relationships and the testing of linear hypotheses on cointegration vectors using maximum likelihood methods. In his 1991 work, Johansen extended this by deriving likelihood ratio tests for determining the cointegration rank and establishing the asymptotic distributions of these tests under non-standard conditions.[29] These contributions shifted the focus from univariate to multivariate dynamics, enabling more comprehensive modeling of long-run equilibria and short-run adjustments in economic systems.[29]

Peter C. B. Phillips advanced the theoretical foundations in 1991 by analyzing non-standard asymptotic distributions for cointegration estimators in multivariate settings.[30] His work highlighted the biases in conventional least squares estimators and proposed fully modified ordinary least squares (FM-OLS) as a bias-corrected alternative, originally detailed in collaboration with Bruce E. Hansen in 1990, to achieve super-consistent and asymptotically efficient inference.[31] These innovations improved the reliability of long-run parameter estimates in the presence of endogenous regressors and unit roots. In 1995, Hiro Y. Toda and Taku Yamamoto proposed a method for testing Granger causality directly in levels using augmented vector autoregressions, bypassing the need for pre-testing for unit roots or cointegration.[32] This approach ensures valid inference even under integration or cointegration, making it robust for applied econometric analysis without assuming knowledge of the integration order.

The 2000s saw extensions to nonlinear and panel data contexts. Bruce E. Hansen and Byeongseon Seo developed the threshold vector error correction model in 2002, allowing the adjustment speed toward equilibrium to vary based on the deviation's magnitude, capturing asymmetric dynamics in economic relationships.[33] For panel settings, Peter Pedroni provided critical values and test statistics for cointegration in heterogeneous panels with multiple regressors in 1999, facilitating analysis across cross-sectional units with differing long-run relationships.[34]

Post-2010 developments incorporated Bayesian methods and machine learning techniques.
Bayesian approaches to VECMs, such as those allowing time-varying cointegration ranks, enable probabilistic inference on model uncertainty and parameter evolution, as exemplified in Koop, Leon-Gonzalez, and Strachan's 2011 framework.[35] Machine learning integrations for variable selection in VECMs have enhanced model specification in high-dimensional data, with applications extending to climate econometrics by 2023, where cointegration analysis combined with neural networks uncovers nonlinear impacts of climate variables on outcomes like crop yields.[36] Further advancements by 2024 include hybrid ECM-LSTM models for predicting financial time series, improving short-term forecasting accuracy.[37]

These advancements marked a transition from ad-hoc, single-equation methods to rigorous, system-wide modeling, fostering widespread adoption in macroeconomics for analyzing equilibrium relationships in nonstationary data.[30]
Applications and Examples
Economic Interpretations
In error correction models (ECMs), the cointegration vector captures long-run equilibrium relationships among economic variables, representing stable economic relations that hold despite short-term fluctuations. For instance, in the context of purchasing power parity (PPP), the cointegration between exchange rates and relative price levels indicates a long-run tendency for currencies to adjust such that goods cost the same across countries, after accounting for transportation and other costs.[38] Similarly, Okun's law posits a cointegrating relation between output and unemployment, where deviations from this equilibrium reflect cyclical gaps, with the cointegrating coefficient estimating the long-run trade-off between growth and employment.[39] These vectors provide economic insight into fundamental balances, such as supply-demand equilibria, by delineating the steady-state proportions that economic agents aim to maintain over time.[40]

The error correction term in ECMs quantifies the short-run adjustment speed toward long-run equilibrium, illustrating how economies respond to disequilibria. A negative and significant coefficient on this term signifies that deviations from equilibrium are partially corrected in subsequent periods, with the magnitude indicating the pace of convergence; for example, a coefficient of -0.3 implies that about 30% of a deviation is corrected per period.[41] In vector ECMs (VECMs), impulse response functions derived from the model trace the dynamic effects of shocks, such as how a supply disruption propagates through prices and quantities before equilibrium is restored. This framework reveals the partial adjustment mechanisms in economic systems, where short-run dynamics incorporate both transitory shocks and the pull back to long-run relations.

ECMs offer key policy implications by enabling analysis of disequilibria, particularly in monetary policy contexts. For example, in money demand models, the error correction term highlights deviations from long-run money-output balances, guiding central banks on interest rate adjustments to restore equilibrium without overreacting to short-run noise.[42] Moreover, in cointegrated systems, ECMs and VECMs can outperform unrestricted vector autoregression (VAR) models for forecasting because they incorporate the equilibrium constraints.[43] This aids policymakers in simulating scenarios, such as the impact of fiscal shocks on inflation, while maintaining consistency with economic theory.

Common applications of ECMs span diverse economic domains. In demand-supply models, they capture how excess demand for money influences reserves accumulation in developing economies, modeling both the long-run balance and short-run reserve adjustments.[44] For the term structure of interest rates, VECMs analyze cointegration among yields of different maturities, revealing how short-rate expectations drive long-rate movements and inform bond pricing under the expectations hypothesis.[45] In commodity markets, ECMs link prices to macroeconomic factors like interest rates, estimating adjustment speeds to global demand shocks and aiding in price stabilization policies.[46]

Despite their strengths, ECMs assume linear adjustment processes, which may not capture nonlinear responses observed in real economies, such as threshold effects in trade or finance. Additionally, results are sensitive to model specification, including lag length and cointegration rank choices, potentially leading to biased estimates if misspecified.[47]
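As a worked illustration of the adjustment-speed interpretation (using the hypothetical value \lambda = -0.3 cited above rather than an estimate from any particular study), and ignoring short-run terms, a deviation u_t from equilibrium decays geometrically as u_{t+h} \approx (1 + \lambda)^h u_t = 0.7^h u_t. Roughly half of any disequilibrium therefore remains after h = \ln(0.5)/\ln(0.7) \approx 1.9 periods, and less than 5% remains after \ln(0.05)/\ln(0.7) \approx 8.4 periods.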
Numerical Example
To illustrate the practical implementation of the Engle-Granger two-step procedure in a single-equation error correction model (ECM), consider a bivariate example using simulated quarterly time series data for log real GDP (y_t) and log real consumption (x_t), each with 100 observations. Both series are integrated of order one (I(1)), as confirmed by Augmented Dickey-Fuller (ADF) tests failing to reject the unit root null at the 5% level in levels but rejecting it in first differences. However, they are cointegrated, reflecting a long-run equilibrium relationship typical in macroeconomic applications.

The first step estimates the cointegrating regression x_t = \alpha + \beta y_t + u_t. The ordinary least squares (OLS) estimate yields \hat{\beta} \approx 0.8 (t-statistic = 12.5, p < 0.01), implying a long-run elasticity of consumption with respect to GDP of about 0.8. The residuals \hat{u}_t = x_t - \hat{\alpha} - 0.8 y_t are tested for stationarity using an ADF test, which rejects the unit root null (test statistic = -4.2, p < 0.05 using Engle-Granger critical values), confirming cointegration.

In the second step, the ECM is estimated via OLS on first-differenced data including the lagged residual: \Delta x_t = \gamma + \lambda \hat{u}_{t-1} + \sum \phi_i \Delta x_{t-i} + \sum \delta_j \Delta y_{t-j} + \varepsilon_t, with lags selected by information criteria (e.g., one lag each). The key estimate is \hat{\lambda} \approx -0.3 (t-statistic = -3.2, p < 0.01), which is negative and significant as required for stability. Short-run coefficients include \hat{\phi}_1 \approx 0.2 (t = 1.8, p = 0.07) for lagged consumption growth and \hat{\delta}_0 \approx 0.4 (t = 4.1, p < 0.01) and \hat{\delta}_1 \approx 0.1 (t = 1.2, p = 0.23) for contemporaneous and lagged GDP growth. This implies that 30% of any disequilibrium from the long-run relationship is corrected each quarter, with consumption growth positively influenced by current GDP growth in the short run while showing mild persistence from its own lags.

The following table summarizes the ECM coefficients and t-statistics; a code sketch of the same procedure appears after the table.

| Coefficient | Estimate | t-statistic | p-value |
|---|---|---|---|
| Intercept (\gamma) | 0.01 | 0.5 | 0.62 |
| Error correction (\lambda) | -0.30 | -3.2 | <0.01 |
| \Delta x_{t-1} (\phi_1) | 0.20 | 1.8 | 0.07 |
| \Delta y_t (\delta_0) | 0.40 | 4.1 | <0.01 |
| \Delta y_{t-1} (\delta_1) | 0.10 | 1.2 | 0.23 |
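A rough code sketch of this exercise is given below, using Python's statsmodels on freshly simulated data; the data-generating choices (a long-run coefficient of 0.8 and an AR(1) equilibrium error implying \lambda \approx -0.3) mimic the example, but the resulting estimates and t-statistics will not reproduce the illustrative figures in the table exactly.

```python
# Sketch only: simulated log GDP (y) and log consumption (x); the seed, drift,
# and noise scales are arbitrary assumptions.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(6)
T = 100
y = np.cumsum(0.01 + 0.02 * rng.standard_normal(T))        # log real GDP, I(1) with drift
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.7 * u[t - 1] + 0.01 * rng.standard_normal()   # stationary equilibrium error
x = 0.05 + 0.8 * y + u                                     # log consumption, cointegrated with y

# Step 1: cointegrating regression x_t = alpha + beta * y_t + u_t, then ADF on residuals.
step1 = sm.OLS(x, sm.add_constant(y)).fit()
u_hat = step1.resid
print("beta_hat:", step1.params[1], " ADF stat on residuals:", adfuller(u_hat)[0])

# Step 2: ECM for Delta x_t with one lag of each difference and the lagged residual.
dx, dy = np.diff(x), np.diff(y)
X = sm.add_constant(np.column_stack([u_hat[1:-1],   # u_hat_{t-1}
                                     dx[:-1],       # Delta x_{t-1}
                                     dy[1:],        # Delta y_t
                                     dy[:-1]]))     # Delta y_{t-1}
ecm = sm.OLS(dx[1:], X).fit()
print(ecm.params)   # ordered as [gamma, lambda, phi_1, delta_0, delta_1]
```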