
Error correction model

An error correction model (ECM) is a time series model used in econometrics that describes the short-run dynamics of cointegrated, non-stationary variables while incorporating their long-run relationship, allowing deviations from equilibrium to be corrected over time. Developed to address the limitations of differencing non-stationary series, which can obscure long-run relationships, the ECM represents such series in a form where changes in one variable depend on lagged changes and an "error correction" term that measures the deviation from long-run equilibrium. The foundational work on ECMs stems from the concept of cointegration, introduced by Robert F. Engle and Clive W. J. Granger in 1987, who showed that if two or more integrated time series of order one (I(1)) are cointegrated—meaning a linear combination of them is stationary (I(0))—they can be modeled using an ECM to capture both short-term adjustments and long-term co-movements. In its basic bivariate form, the model is specified as \Delta y_t = \alpha + \beta \Delta x_t + \gamma (y_{t-1} - \delta x_{t-1}) + \epsilon_t, where \Delta denotes first differences, \gamma (typically negative) is the speed of adjustment coefficient indicating how quickly the system returns to equilibrium, and y_{t-1} - \delta x_{t-1} is the lagged equilibrium error. For multivariate cases, the vector error correction model (VECM) extends this framework to n variables with r cointegrating relations, expressed as \Delta \mathbf{y}_t = \boldsymbol{\Pi} \mathbf{y}_{t-1} + \sum_{i=1}^{p-1} \boldsymbol{\Gamma}_i \Delta \mathbf{y}_{t-i} + \boldsymbol{\mu} + \boldsymbol{\epsilon}_t, where \boldsymbol{\Pi} = \boldsymbol{\alpha} \boldsymbol{\beta}' decomposes into adjustment and cointegrating vectors. ECMs are estimated through procedures like the two-step Engle-Granger method—first regressing levels to obtain residuals for cointegration testing, then estimating the ECM—or maximum likelihood approaches such as Johansen's for VECMs, which determine the cointegration rank. These models are widely applied in macroeconomics and finance to analyze relationships such as consumption and income, exchange rates and interest rates, or output and unemployment, providing insights into both transient shocks and stable equilibria without spurious regression problems.

Fundamentals

Stationarity and Unit Roots

In time series analysis, a stochastic process is said to be weakly stationary, or second-order stationary, if its mean is constant over time, its variance is finite and constant, and the autocovariance between any two observations depends solely on the time lag between them. Strict stationarity, a stronger condition, requires that the joint probability distribution of any collection of observations is invariant to shifts in time. Weak stationarity is often sufficient for practical modeling, as it ensures stable statistical properties essential for reliable inference, whereas strict stationarity implies weak stationarity when second moments exist but is harder to verify empirically. Stationary processes exhibit these constant properties, allowing for predictable behavior and valid application of standard statistical methods like autoregressive modeling. In contrast, non-stationary processes, such as those integrated of order one, denoted I(1), do not have constant means or variances and require first differencing to achieve stationarity. I(1) processes are characterized by a unit root, where shocks have permanent effects, leading to trends that wander without bound. The unit root concept arises in an autoregressive process of order one, AR(1), defined as y_t = \rho y_{t-1} + \epsilon_t, where \epsilon_t is white noise; if \rho = 1, the process becomes a random walk, which is non-stationary with variance growing linearly over time. Regressing two independent I(1) series can produce spurious regressions, where high R^2 values and significant t-statistics suggest a false relationship due to shared non-stationarity rather than true dependence. To detect unit roots, the Dickey-Fuller test examines the null hypothesis of a unit root by estimating the AR(1) model and computing a t-statistic on the coefficient of the lagged level, which follows a non-standard distribution under the null. The augmented Dickey-Fuller (ADF) test extends this by accounting for higher-order autoregression in the errors, using the regression equation \Delta y_t = \alpha + \beta y_{t-1} + \sum_{i=1}^{p} \gamma_i \Delta y_{t-i} + \epsilon_t, where the null hypothesis is \beta = 0 (unit root), and p is chosen to ensure white noise residuals. The Phillips-Perron test modifies the Dickey-Fuller statistic non-parametrically to adjust for serial correlation and heteroskedasticity in the errors, making it robust without explicitly modeling lags. Ignoring non-stationarity in regression analysis leads to invalid statistical inference, as standard t-tests and F-tests have non-standard distributions, often overstating significance. In forecasting, unit root processes produce prediction intervals whose variance grows linearly with the horizon, so the intervals widen without bound and yield unreliable long-term predictions.
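The ADF regression above can be run directly with standard software. The following Python sketch, assuming the statsmodels library and a simulated random walk (the series and settings are invented for illustration), shows how a unit root test typically fails to reject the null in levels but rejects it in first differences.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulate a random walk y_t = y_{t-1} + e_t, which has a unit root (I(1)).
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))

# ADF test in levels: the null of a unit root should not be rejected.
stat_levels, pval_levels, *_ = adfuller(y, regression="c", autolag="AIC")

# ADF test on first differences: the null should be rejected (differenced series is I(0)).
stat_diff, pval_diff, *_ = adfuller(np.diff(y), regression="c", autolag="AIC")

print(f"levels:      ADF stat = {stat_levels:.2f}, p-value = {pval_levels:.2f}")
print(f"differences: ADF stat = {stat_diff:.2f}, p-value = {pval_diff:.3f}")
```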

Cointegration

Cointegration refers to a statistical relationship among two or more non-stationary variables that are integrated of the same order, typically I(1), such that a linear combination of these series is stationary, or I(0). This property implies the existence of a stable long-run relationship despite the individual series exhibiting trends or random walks. Formally, if y_t and x_t are both I(1) processes, they are cointegrated if there exists a coefficient \beta such that u_t = y_t - \beta x_t is I(0), capturing deviations from the long-run equilibrium. The concept of the order of integration is central to cointegration analysis: a series is denoted I(d) if it requires differencing d times to achieve stationarity, and series cointegrated of order (d, b) are each I(d) while a linear combination is I(d - b), with b > 0 measuring the reduction in the order of integration; the number of linearly independent such combinations is the cointegrating rank. In the common case of I(1) series, b = 1 yields an I(0) combination, signifying one long-run relation. This framework allows multivariate systems to share a common stochastic trend while maintaining equilibrium ties. To detect cointegration, residual-based tests examine the stationarity of residuals from an ordinary least squares regression of the levels of the series, such as y_t = \beta x_t + u_t, where the cointegrating vector \beta is estimated first; if u_t is stationary, cointegration is supported. These tests address the limitations of univariate stationarity checks by focusing on multivariate dependencies. The Engle-Granger representation theorem establishes that if series are cointegrated, they admit an error correction representation, where short-run dynamics adjust toward the long-run equilibrium defined by the cointegrating relation. This theorem links cointegration directly to error correction mechanisms, enabling models that capture both equilibrium deviations and corrective forces. Cointegration circumvents the pitfalls of spurious regressions, where non-stationary series might appear correlated due to shared trends rather than genuine relations, as highlighted in early critiques of regressions in levels. By identifying true long-run equilibria, it facilitates modeling of disequilibrium corrections, essential for forecasting and policy analysis in econometrics.
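As an illustration of a residual-based test, the sketch below simulates a cointegrated pair and applies the Engle-Granger-style test available as statsmodels' coint function; the series, coefficient value, and sample size are invented for the example.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Simulate two cointegrated I(1) series: x is a random walk and
# y = 0.8 * x + stationary noise, so u_t = y_t - 0.8 x_t is I(0).
rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=500))
y = 0.8 * x + rng.normal(scale=0.5, size=500)

# Engle-Granger residual-based cointegration test (null: no cointegration).
t_stat, p_value, crit_values = coint(y, x)
print(f"test statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
print("5% critical value:", round(crit_values[1], 2))
```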

Model Formulation

Single-Equation Error Correction Model

The single-equation error correction model (ECM) provides a framework for modeling the dynamics of two integrated time series, y_t and x_t, that are cointegrated, meaning they share a long-run relationship despite being non-stationary individually. This model captures how deviations from the long-run equilibrium influence short-run changes in y_t, allowing for the joint analysis of transient fluctuations and persistent adjustments. Introduced in the context of econometric modeling of aggregate relationships like consumption and income, the single-equation ECM extends earlier partial adjustment ideas to explicitly incorporate cointegration. The general form of the single-equation ECM for a bivariate system is given by \Delta y_t = \alpha + \sum_{i=1}^{p-1} \beta_i \Delta y_{t-i} + \sum_{j=0}^{q-1} \gamma_j \Delta x_{t-j} + \lambda (y_{t-1} - \beta x_{t-1}) + \varepsilon_t, where \Delta denotes the first difference, \alpha is a constant term, the sums of lagged differences represent short-run dynamics, \beta is the long-run cointegrating coefficient, and \lambda is the speed of adjustment parameter. The term (y_{t-1} - \beta x_{t-1}) serves as the error correction term, quantifying the disequilibrium from the long-run relation y_t = \beta x_t + u_t, where u_t is stationary. In this setup, short-run changes in y_t and x_t drive immediate responses, while the lagged error correction term enforces mean reversion to equilibrium; for model stability, \lambda < 0, ensuring that positive disequilibria prompt corrective reductions in \Delta y_t. This form arises from reparameterizing an unrestricted vector autoregression (VAR) in levels to highlight the cointegration structure, as per the Granger representation theorem. For a simple VAR(1) case, the differenced form is \Delta y_t = \mu_1 + (a_{11} - 1) y_{t-1} + a_{12} x_{t-1} + \varepsilon_{1t}. Under cointegration, the long-run matrix has rank 1, \Pi = \alpha \beta' with \beta' = [1, -\beta], so the equation becomes \Delta y_t = \mu_1 + \lambda (y_{t-1} - \beta x_{t-1}) + \varepsilon_{1t}, where \lambda = \alpha_1 < 0. This derivation extends to higher-order VAR(p) models by including additional lagged differences, preserving the error correction mechanism. The model relies on key assumptions: the existence of exactly one cointegrating relation between y_t and x_t, weak exogeneity of x_t (implying it is not influenced by errors in the y_t equation), and serially uncorrelated errors \varepsilon_t to ensure valid inference. These conditions support the model's ability to avoid spurious regressions by differencing non-stationary series while retaining long-run information through the levels-based error term. A primary property is the super-consistency of the long-run estimator \hat{\beta}, which converges to the true value at rate T rather than the standard \sqrt{T}, enhancing precision in finite samples; additionally, the ECM framework combines I(1) levels and I(0) differences in a way that yields stationary regressors overall, supporting consistent short-run parameter estimates.
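The reparameterization from a levels VAR(1) to the error correction form can be checked numerically. The following Python sketch uses an invented coefficient matrix whose long-run matrix has reduced rank, and recovers the adjustment speed \lambda and cointegrating coefficient \beta from the first equation.

```python
import numpy as np

# Hypothetical VAR(1) in levels: Y_t = A Y_{t-1} + e_t with Y_t = (y_t, x_t)'.
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])

# Long-run matrix of the differenced form: Delta Y_t = Pi Y_{t-1} + e_t, with Pi = A - I.
Pi = A - np.eye(2)
print("rank of Pi:", np.linalg.matrix_rank(Pi))  # 1 => one cointegrating relation

# First equation: Delta y_t = Pi[0,0] * y_{t-1} + Pi[0,1] * x_{t-1}
#                           = lambda * (y_{t-1} - beta * x_{t-1})
lam = Pi[0, 0]            # speed of adjustment, expected to be negative
beta = -Pi[0, 1] / lam    # long-run cointegrating coefficient
print(f"lambda = {lam:.2f}, beta = {beta:.2f}")   # lambda = -0.30, beta = 1.00
```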

Vector Error Correction Model

The Vector Error Correction Model (VECM) extends the single-equation error correction model to multivariate time series systems, allowing for the analysis of multiple integrated variables that may share long-run equilibrium relationships. In a VECM framework, the model captures both short-run dynamics and long-run cointegrating relations among a vector of I(1) variables, enabling the study of feedback effects across all variables in the system. The standard VECM formulation for an n-dimensional vector Y_t is given by: \Delta Y_t = \mu + \sum_{i=1}^{k-1} \Phi_i \Delta Y_{t-i} + \Pi Y_{t-1} + \varepsilon_t, where \mu is a constant term, \Phi_i are matrices of short-run coefficients, \Pi is the long-run coefficient matrix, and \varepsilon_t is a vector of white noise errors. The matrix \Pi is decomposed as \Pi = \alpha \beta', where \alpha (n × r) represents the adjustment speeds toward equilibrium, and \beta (n × r) is the cointegration matrix defining the long-run relations, with r denoting the cointegration rank. The cointegration rank r determines the number of linearly independent long-run equilibrium relations in the system, where 0 < r < n implies r cointegrating vectors and (n - r) common stochastic trends. Identification of \beta typically involves normalization, such as setting one element to unity in each cointegrating vector to resolve indeterminacy. In interpretation, the term \Pi Y_{t-1} = \alpha \beta' Y_{t-1} measures deviations from long-run equilibrium, with \alpha indicating how each variable adjusts to these disequilibria, while the \Phi_i \Delta Y_{t-i} terms model short-run dynamics and allow for contemporaneous feedback among all variables. Key assumptions underlying the VECM include that the variables in Y_t are I(1) with no I(2) components, the errors \varepsilon_t are serially uncorrelated white noise (commonly assumed Gaussian for maximum likelihood estimation), and the system may incorporate weak exogeneity for certain variables in conditional modeling. Relative to the single-equation error correction model, which serves as a special case for r=1 with focus on one dependent variable, the VECM offers advantages in capturing system-wide dynamics and multiple equilibria, making it particularly suitable for policy analysis in interdependent economic systems.
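A VECM of this form can be estimated with standard packages. The sketch below, assuming the statsmodels library and simulated data (the series and settings are invented for illustration), fits a two-variable system with one cointegrating relation and reads off the adjustment matrix \alpha and the cointegrating matrix \beta.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

# Simulate a bivariate cointegrated system: x is a random walk,
# y tracks 0.8 * x with stationary deviations.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=400))
y = 0.8 * x + rng.normal(scale=0.5, size=400)
data = np.column_stack([y, x])

# Fit a VECM with one lagged difference and cointegration rank r = 1.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()

print("alpha (adjustment speeds):\n", res.alpha)
print("beta  (cointegrating vector):\n", res.beta)
```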

Estimation Methods

Engle-Granger Two-Step Procedure

The Engle-Granger two-step procedure provides a straightforward method for estimating a single-equation error correction model (ECM) in the presence of cointegration between non-stationary time series variables. Developed as part of the foundational work on cointegration, it relies on ordinary least squares (OLS) regressions to first identify the long-run equilibrium and then model short-run dynamics with error correction. In the first step, the long-run cointegrating relationship is estimated via OLS on the static regression: y_t = \beta x_t + u_t, where y_t and x_t are I(1) variables, \beta is the cointegrating parameter, and u_t represents deviations from equilibrium. The residuals \hat{u}_t from this regression are then tested for stationarity using an Augmented Dickey-Fuller (ADF) test (or similar unit root test) on the equation: \Delta \hat{u}_t = \rho \hat{u}_{t-1} + \sum_{i=1}^{k} \gamma_i \Delta \hat{u}_{t-i} + \epsilon_t, with the null hypothesis \rho = 0 indicating no cointegration; rejection confirms the existence of a cointegrating relation. Critical values for this test must account for the generated regressor nature of \hat{u}_t, often using specialized distributions. The second step involves estimating the ECM using OLS on the differenced form: \Delta y_t = \alpha + \sum_{i=1}^{p} \beta_i \Delta y_{t-i} + \sum_{j=0}^{q} \gamma_j \Delta x_{t-j} + \lambda \hat{u}_{t-1} + \varepsilon_t, where \hat{u}_{t-1} are the lagged residuals from step 1 serving as the error correction term, \lambda (expected to be negative and significant) measures the speed of adjustment to equilibrium, and lags p and q are selected based on information criteria to ensure residuals are white noise. This step captures both short-run changes in the variables and the long-run equilibrium correction. This procedure offers key advantages in its simplicity and accessibility, requiring only standard OLS software without the need for full-system estimation or advanced optimization techniques, making it suitable for applied analysis of bivariate relations. However, it has notable limitations, including finite-sample bias in the first-step estimate of \beta, which can lead to inefficient estimates, particularly in small samples; invalid standard errors for the long-run \beta due to the two-stage nature and endogeneity issues; and restriction to a single cointegrating relation, overlooking potential multivariate dynamics. These issues can result in poor performance for inference on the cointegrating vector and adjustment speeds. Following estimation, the significance of \lambda is assessed via a t-test, where a t-value below approximately -2 supports the error correction mechanism at conventional significance levels. As an alternative addressing pre-testing requirements, the autoregressive distributed lag (ARDL) bounds testing approach can be considered for robustness, though it extends beyond the residual-based focus here.
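A minimal sketch of the two-step procedure, assuming Python with statsmodels and pandas and simulated data, is shown below. The ADF p-value reported in step 1 is only indicative, since tests on estimated residuals strictly require Engle-Granger (MacKinnon) critical values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Simulated cointegrated pair: x is a random walk, y = 0.8 * x + stationary error.
rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(size=300))
y = 0.8 * x + rng.normal(scale=0.5, size=300)

# Step 1: static long-run regression in levels, then unit root test on the residuals.
step1 = sm.OLS(y, sm.add_constant(x)).fit()
u_hat = step1.resid
adf_stat, adf_pval, *_ = adfuller(u_hat)  # approximate; see note above

# Step 2: ECM in first differences with the lagged step-1 residual as the correction term.
df = pd.DataFrame({
    "dy": np.diff(y),
    "dx": np.diff(x),
    "u_lag": u_hat[:-1],
})
step2 = sm.OLS(df["dy"], sm.add_constant(df[["dx", "u_lag"]])).fit()

print(f"long-run beta = {step1.params[1]:.3f}, residual ADF stat = {adf_stat:.2f}")
print(f"adjustment speed lambda = {step2.params['u_lag']:.3f}")
```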

Johansen Maximum Likelihood Estimation

The Johansen maximum likelihood estimation procedure provides a full-information approach to estimating vector error correction models (VECMs) for multivariate cointegrated systems. It begins by reparameterizing a vector autoregressive (VAR) model in levels, typically of order k, as a VECM: \Delta Y_t = \Pi Y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta Y_{t-i} + \mu + \epsilon_t, where Y_t is an n \times 1 vector of I(1) variables, \Pi = \alpha \beta' with \alpha and \beta being n \times r matrices of adjustment speeds and cointegrating vectors (rank r < n), \Gamma_i = -(A_{i+1} + \cdots + A_k) are short-run matrices formed from the levels VAR coefficients A_j, \mu is a constant term, and \epsilon_t \sim N(0, \Omega). This transformation imposes the cointegration restriction on the long-run matrix \Pi, assuming Gaussian errors for likelihood maximization. The estimation maximizes the Gaussian likelihood function for the VECM, which can be concentrated by substituting out the short-run dynamics and applying reduced-rank regression to \Pi. Specifically, the procedure involves regressing \Delta Y_t on lagged differences \Delta Y_{t-i} to obtain residuals R_{0t}, and regressing Y_{t-1} on the same lagged differences to obtain residuals R_{1t}. The cointegrating relations are then estimated by solving a generalized eigenvalue problem based on the product-moment matrices S_{ij} = T^{-1} \sum_t R_{it} R_{jt}', where the eigenvectors corresponding to the r largest eigenvalues yield \hat{\beta}, and \hat{\alpha} is obtained from a subsequent regression of R_{0t} on \hat{\beta}' R_{1t}. This reduced-rank approach ensures efficient estimation under the cointegration hypothesis. To determine the cointegration rank r, the procedure employs sequential likelihood ratio tests: the trace test and the maximum eigenvalue test. The trace test statistic for the null hypothesis of at most r cointegrating relations is \lambda_{\text{trace}}(r) = -T \sum_{i=r+1}^{n} \log(1 - \hat{\lambda}_i), where T is the sample size and \hat{\lambda}_i are the ordered eigenvalues from the reduced-rank regression (\hat{\lambda}_1 \geq \cdots \geq \hat{\lambda}_n). The maximum eigenvalue test for the null of rank r against the alternative of rank r+1 is \lambda_{\max}(r, r+1) = -T \log(1 - \hat{\lambda}_{r+1}). Testing proceeds sequentially from r=0 upward until the null is not rejected, using critical values derived from the asymptotic distribution under the null (accounting for deterministic terms like intercepts). These tests provide a framework for inference on r. Identification of the cointegrating vectors requires normalization and restrictions on \beta, as the decomposition \Pi = \alpha \beta' is unique only up to a nonsingular rotation. Typically, each column of \beta is normalized by setting one coefficient to 1 (e.g., on the endogenous variable of interest), ensuring uniqueness for r=1. For r > 1, just-identification requires r restrictions on each cointegrating vector (r^2 in total, including the normalizations), such as zero restrictions on certain elements of \beta or \alpha, while over-identifying restrictions allow testing economic hypotheses via likelihood ratio statistics. These restrictions maintain the just-identified structure for efficient estimation. The Johansen procedure offers several advantages, including asymptotic efficiency from full-information maximum likelihood estimation, which provides consistent standard errors, confidence intervals for cointegrating parameters, and tests for linear restrictions on \alpha and \beta. Unlike residual-based single-equation methods, it directly handles multivariate systems with multiple cointegrating relations and avoids the generated-regressor problem arising from preliminary estimations.
Extensions accommodate I(2) processes by modeling polynomial cointegration, testing for I(2) components via additional reduced-rank conditions on the second differences. Implementations of the Johansen procedure are available in statistical software, such as the urca and tsDyn packages in R for reduced-rank estimation and rank testing, and the vec and vecrank commands in Stata, which support various deterministic term specifications.
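For illustration, the sketch below, assuming the statsmodels implementation (coint_johansen) and simulated trivariate data, computes the trace statistics and compares them with their 5% critical values to choose the cointegration rank; the data-generating process and settings are invented for the example.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Simulate three I(1) series driven by two common stochastic trends,
# which implies a single cointegrating relation (rank r = 1).
rng = np.random.default_rng(3)
w1 = np.cumsum(rng.normal(size=400))
w2 = np.cumsum(rng.normal(size=400))
y1 = w1 + rng.normal(scale=0.3, size=400)
y2 = w2 + rng.normal(scale=0.3, size=400)
y3 = 0.5 * w1 + 0.5 * w2 + rng.normal(scale=0.3, size=400)
data = np.column_stack([y1, y2, y3])

# Johansen procedure with a constant term (det_order=0) and one lagged difference.
result = coint_johansen(data, det_order=0, k_ar_diff=1)

# Trace statistics for H0: rank <= r, compared with the 90%/95%/99% critical values.
for r, (stat, cvals) in enumerate(zip(result.lr1, result.cvt)):
    print(f"H0: r <= {r}: trace = {stat:.2f}, 5% critical value = {cvals[1]:.2f}")
```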

Historical Development

Early Foundations

The foundations of the error correction model (ECM) trace back to early efforts in time series analysis to address non-stationarity and dynamic relationships in economic data. In the late 1960s, Clive Granger's work on causality and cross-spectral methods emphasized the importance of filtering non-stationary components in econometric models to avoid misleading inferences about causal directions between economic time series. Concurrently, the Box-Jenkins methodology introduced autoregressive integrated moving average (ARIMA) models, which advocated differencing non-stationary series to achieve stationarity, laying groundwork for handling integrated processes in forecasting. These approaches highlighted the pitfalls of ignoring unit roots but did not yet integrate long-run equilibrium concepts. Earlier precursors to the ECM appeared in macroeconomic modeling during the 1960s. John Denis Sargan's 1964 analysis of wages and prices in the United Kingdom employed a partial adjustment mechanism, where deviations from long-run equilibrium influenced short-run wage changes, effectively introducing an error correction term to capture adjustment speeds in wage-price models. This idea resurfaced in the 1970s amid economic turbulence from the energy crises, which exposed limitations in static models and spurred demand for dynamic frameworks to model inflation and supply shocks in macroeconometrics. James Davidson, David Hendry, and colleagues extended this in 1978 through a buffer stock model of consumer expenditure, incorporating partial adjustment to income changes as a mechanism for equilibrating deviations from target levels. The 1980s marked the emergence of cointegration theory, directly informing the ECM. Granger's 1981 paper explicitly proposed error correction representations for economic time series, arguing that linear combinations of non-stationary variables could represent long-run equilibria, with short-run adjustments correcting disequilibria. James Stock's analysis of the asymptotic properties of least squares estimators of cointegrating vectors further underscored the issue of spurious regressions in non-stationary data, showing that estimators of long-run parameters converge super-consistently under cointegration, thus motivating error correction formulations to mitigate biases. Culminating these developments, Engle and Granger's seminal 1987 paper formalized cointegration and its equivalence to the existence of error correction terms in vector autoregressions, providing the representation theorem linking ECMs to integrated processes and enabling rigorous testing of long-run relationships.

Key Advancements

Building on the Engle-Granger two-step procedure for single-equation models, key advancements in error correction modeling emerged in the late 1980s and early 1990s with the development of multivariate frameworks; Engle and Granger's work on cointegration was later recognized with the 2003 Nobel Prize in Economic Sciences. Søren Johansen introduced the vector error correction model (VECM) in 1988, providing a system-based approach to cointegration analysis within vector autoregressive processes. This formulation allowed for the estimation of multiple cointegrating relationships and the testing of linear hypotheses on cointegration vectors using maximum likelihood methods. In his 1991 work, Johansen extended this by deriving likelihood ratio tests for determining the cointegration rank and establishing the asymptotic distributions of these tests under non-standard conditions. These contributions shifted the focus from univariate to multivariate dynamics, enabling more comprehensive modeling of long-run equilibria and short-run adjustments in economic systems. Peter C. B. Phillips advanced the theoretical foundations in 1991 by analyzing non-standard asymptotic distributions for estimators in multivariate cointegrated settings. His work highlighted the biases in conventional estimators and proposed fully modified ordinary least squares (FM-OLS), originally detailed in collaboration with Bruce E. Hansen in 1990, as a bias-corrected estimator that achieves super-consistent and asymptotically efficient inference. These innovations improved the reliability of long-run parameter estimates in the presence of endogenous regressors and unit roots. In 1995, Hiro Y. Toda and Taku Yamamoto proposed a method for testing Granger causality directly in levels using augmented vector autoregressions, bypassing the need for pre-testing for unit roots or cointegration. This approach ensures valid inference even when the series are integrated or cointegrated, making it robust for applied econometric analysis without assuming knowledge of the integration order. The 2000s saw extensions to nonlinear and panel contexts. Bruce E. Hansen and Byeongseon Seo developed the threshold vector error correction model in 2002, allowing the adjustment speed toward equilibrium to vary based on the deviation's magnitude, capturing asymmetric dynamics in economic relationships. For panel settings, Peter Pedroni provided critical values and test statistics for cointegration in heterogeneous panels with multiple regressors in 1999, facilitating analysis across cross-sectional units with differing long-run relationships. Post-2010 developments incorporated Bayesian methods and machine learning techniques. Bayesian approaches to VECMs, such as those allowing time-varying cointegration relationships, enable probabilistic inference on model specification and cointegration rank, as exemplified in Koop, Leon-Gonzalez, and Strachan's framework. Machine learning integrations for variable selection in VECMs have enhanced model specification in high-dimensional data, with applications extending to climate econometrics by 2023, where cointegration analysis combined with neural networks uncovers nonlinear impacts of climate variables on outcomes such as agricultural yields. Further advancements by 2024 include hybrid ECM-LSTM models for predicting financial time series, improving short-term forecasting accuracy. These advancements marked a transition from ad-hoc, single-equation methods to rigorous, system-wide modeling, fostering widespread adoption in econometrics for analyzing equilibrium relationships in nonstationary data.

Applications and Examples

Economic Interpretations

In error correction models (ECMs), the cointegrating vector captures long-run equilibrium relationships among economic variables, representing stable economic relations that hold despite short-term fluctuations. For instance, in the context of purchasing power parity (PPP), the cointegrating relation between exchange rates and relative price levels indicates a long-run tendency for currencies to adjust such that goods cost the same across countries, after accounting for transportation and other costs. Similarly, Okun's law posits a relation between output and unemployment, where deviations from this equilibrium reflect cyclical gaps, with the cointegrating coefficient estimating the long-run trade-off between growth and employment. These vectors provide economic insight into fundamental balances, such as supply-demand equilibria, by delineating the steady-state proportions that economic agents aim to maintain over time. The error correction term in ECMs quantifies the short-run adjustment speed toward long-run equilibrium, illustrating how economies respond to disequilibria. A negative and significant coefficient on this term signifies that deviations from equilibrium are partially corrected in subsequent periods, with the magnitude indicating the pace of adjustment—for example, a coefficient of -0.3 implies about 30% correction per period. In vector ECMs (VECMs), impulse response functions derived from the model trace the dynamic effects of shocks, such as how a supply disruption propagates through prices and quantities before equilibrium is restored. This framework reveals the partial adjustment mechanisms in economic systems, where short-run dynamics incorporate both transitory shocks and the pull back to long-run relations. ECMs offer key policy implications by enabling analysis of disequilibria, particularly in monetary policy contexts. For example, in money demand models, the error correction term highlights deviations from long-run money-output balances, guiding central banks on adjustments to restore equilibrium without overreacting to short-run noise. Moreover, in cointegrated systems, ECMs and VECMs outperform unrestricted vector autoregression (VAR) models for forecasting due to their incorporation of equilibrium constraints. This superiority aids policymakers in simulating scenarios, such as the impact of fiscal shocks on macroeconomic aggregates, while maintaining consistency with economic theory. Common applications of ECMs span diverse economic domains. In demand-supply models, they model how excess demand for money influences reserve accumulation in developing economies, capturing both long-run balance and short-run reserve adjustments. For the term structure of interest rates, VECMs analyze cointegration among yields of different maturities, revealing how short-rate expectations drive long-rate movements and inform pricing under the expectations hypothesis. In commodity markets, ECMs link prices to macroeconomic factors like interest rates, estimating adjustment speeds to global demand shocks and aiding in price stabilization policies. Despite their strengths, ECMs assume linear adjustment processes, which may not capture nonlinear responses observed in real economies, such as threshold effects in price or policy adjustment. Additionally, results are sensitive to model specification, including lag length and cointegration rank choices, potentially leading to biased estimates if misspecified.

Numerical Example

To illustrate the practical implementation of the Engle-Granger two-step procedure in a single-equation error correction model (ECM), consider a bivariate example using simulated quarterly data for log real GDP (y_t) and log real consumption (x_t), each with 100 observations. Both series are integrated of order one (I(1)), as confirmed by Augmented Dickey-Fuller (ADF) tests failing to reject the unit root null at the 5% level in levels but rejecting in first differences. However, they are cointegrated, reflecting a long-run relationship typical in macroeconomic applications. The first step estimates the cointegrating regression: x_t = \alpha + \beta y_t + u_t. The ordinary least squares (OLS) estimate yields \hat{\beta} \approx 0.8 (t-statistic = 12.5, p < 0.01), implying consumption is approximately 80% of GDP in the long run. The residuals \hat{u}_t = x_t - \hat{\alpha} - 0.8 y_t are tested for stationarity using an ADF test, which rejects the unit root null (test statistic = -4.2, p < 0.05 using Engle-Granger critical values), confirming cointegration. In the second step, the ECM is estimated via OLS on first-differenced data including the lagged residual: \Delta x_t = \gamma + \lambda \hat{u}_{t-1} + \sum \phi_i \Delta x_{t-i} + \sum \delta_j \Delta y_{t-j} + \varepsilon_t, with lags selected by information criteria (e.g., one lag each). The key estimate is \hat{\lambda} \approx -0.3 (t-statistic = -3.2, p < 0.01), which is negative and significant as required for stability. Short-run coefficients include \hat{\phi}_1 \approx 0.2 (t = 1.8, p = 0.07) for lagged consumption growth and \hat{\delta}_0 \approx 0.4 (t = 4.1, p < 0.01) and \hat{\delta}_1 \approx 0.1 (t = 1.2, p = 0.23) for contemporaneous and lagged GDP growth. This implies that 30% of any disequilibrium from the long-run relationship is corrected each quarter, with consumption growth positively influenced by current GDP growth in the short run while showing mild persistence from its own lags. The following table summarizes the ECM coefficients and t-statistics:
Coefficient                    Estimate    t-statistic    p-value
Intercept (\gamma)             0.01        0.5            0.62
Error correction (\lambda)     -0.30       -3.2           <0.01
\Delta x_{t-1} (\phi_1)        0.20        1.8            0.07
\Delta y_t (\delta_0)          0.40        4.1            <0.01
\Delta y_{t-1} (\delta_1)      0.10        1.2            0.23
Diagnostics include a Lagrange multiplier (LM) test for residual autocorrelation (F-statistic = 1.1, p = 0.30 at lag 1, no evidence of serial correlation) and Jarque-Bera test for normality (statistic = 1.5, p = 0.47, residuals approximately normal). The ECM adjusted R-squared is 0.45, higher than the 0.32 for an unrestricted vector autoregression (VAR) in first differences, indicating improved fit from incorporating the long-run relation. For extension to a vector error correction model (VECM), consider a trivariate case adding log investment as a third I(1) series, forming a system with potential multiple cointegrating relations. The Johansen maximum likelihood procedure determines the cointegration rank r=1, as the trace test rejects the null of r=0 (trace statistic = 32.5 > 95% critical value of 29.68) but fails to reject r \leq 1 (trace = 12.3 < 15.41). This yields a VECM with one error correction term driving adjustments across all three variables, capturing richer multivariate dynamics while maintaining the bivariate cointegration as a special case.
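Diagnostics of this kind can be computed directly from a fitted second-step regression. The sketch below, assuming simulated regressors standing in for the second Engle-Granger step (all names and values are invented for illustration), shows the corresponding Breusch-Godfrey LM and Jarque-Bera checks in statsmodels.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey
from statsmodels.stats.stattools import jarque_bera

# Hypothetical fitted ECM regression (stand-in for the second Engle-Granger step).
rng = np.random.default_rng(11)
X = sm.add_constant(rng.normal(size=(99, 2)))          # e.g. [const, dx, u_lag]
dy = X @ np.array([0.01, 0.4, -0.3]) + rng.normal(scale=0.1, size=99)
ecm_fit = sm.OLS(dy, X).fit()

# Breusch-Godfrey LM test for residual autocorrelation at lag 1
# (null: no serial correlation in the ECM residuals).
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(ecm_fit, nlags=1)

# Jarque-Bera test for residual normality.
jb_stat, jb_pval, skew, kurt = jarque_bera(ecm_fit.resid)

print(f"Breusch-Godfrey: F = {f_stat:.2f}, p = {f_pval:.2f}")
print(f"Jarque-Bera:     stat = {jb_stat:.2f}, p = {jb_pval:.2f}")
```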
