
Order of integration

In time series analysis, the order of integration of a process is the minimum number of times the series must be differenced to become stationary. A process that requires d differences to achieve stationarity is said to be integrated of order d, denoted I(d). For example, a stationary series such as white noise is I(0), while a random walk with a unit root is I(1). This concept is fundamental in econometrics for modeling non-stationary data, such as economic and financial time series, where understanding the integration order helps avoid spurious regressions and informs techniques like cointegration analysis.

Basic Concepts

Definition of Order of Integration

In time series analysis, a stochastic process \{X_t\} is defined as integrated of order d, denoted I(d), if the d-th difference of the series is covariance stationary while lower-order differences are not, with d being a non-negative integer. This measure quantifies the degree of non-stationarity in the series, indicating the minimum number of differencing operations required to transform it into a stationary process with constant mean, finite variance, and autocovariances that depend only on the time lag. Processes classified as I(0) are already covariance stationary, requiring no differencing, which distinguishes them from higher-order integrated series that exhibit persistent stochastic trends or random-walk behavior.

The formal mathematical representation employs the lag operator L, defined such that L X_t = X_{t-1}, allowing higher lags via powers like L^k X_t = X_{t-k}. The first difference is \Delta X_t = X_t - X_{t-1} = (1 - L) X_t, and the d-th order difference generalizes to \Delta^d X_t = (1 - L)^d X_t. For a series X_t \sim I(d), the expression (1 - L)^d X_t yields a stationary process, removing the cumulative effect of shocks that propagate indefinitely in lower-differenced forms.

The concept of order of integration emerged in econometrics during the 1970s and 1980s, extending the autoregressive integrated moving average (ARIMA) framework introduced by Box and Jenkins in their seminal 1970 work on time series forecasting. Granger formalized the "degree of integration" in 1981, providing a precise summary statistic for non-stationarity that facilitated advances in modeling economic variables often characterized by unit roots and persistent dynamics. This development built directly on ARIMA's differencing parameter d, shifting focus from ad hoc transformations to a structured classification of integration orders.
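The relationship between cumulative summation and differencing can be checked numerically. The following minimal Python sketch (assuming NumPy; the series names such as x2 are illustrative) builds an I(2) series by summing white noise twice and verifies that the second difference (1 - L)^2 X_t recovers the original innovations.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal(size=500)   # white noise: I(0)
x1 = np.cumsum(eps)          # one integration: I(1) random walk
x2 = np.cumsum(x1)           # two integrations: I(2)

# The d-th difference (1 - L)^d X_t undoes d cumulative summations.
d2 = np.diff(x2, n=2)        # second difference of the I(2) series
print(np.allclose(d2, eps[2:]))   # True: the stationary innovations are recovered
```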

Stationarity and Differencing

A process is defined as weakly stationary, also known as covariance stationary or second-order stationary, if its mean (expected value) is constant over time, its variance is finite and constant, and the autocovariance between any two points depends solely on the time lag between them rather than their absolute positions in time. This definition ensures that the statistical properties of the series do not systematically change, allowing for reliable modeling of dependencies.

Non-stationarity in time series manifests in distinct ways, primarily as trend-stationarity or difference-stationarity. In a trend-stationary process, deviations from a deterministic trend (such as a linear or polynomial function of time) are stationary, meaning the series can be rendered stationary by subtracting the trend. In contrast, a difference-stationary process, often termed an integrated process, exhibits persistent shocks that do not revert to a fixed mean and requires differencing to achieve stationarity; this form is prevalent in macroeconomic data, where unit roots induce random wanderings.

Differencing plays a crucial role in transforming non-stationary series into stationary ones by eliminating trends or stochastic drifts. The first difference, defined as \Delta X_t = X_t - X_{t-1}, removes a linear trend or a unit root, stabilizing the mean of an integrated process. For series with higher-order trends, such as quadratic trends, second- or higher-order differencing (\Delta^2 X_t = \Delta (\Delta X_t)) is applied iteratively until stationarity is attained.

The unit root concept describes a non-stationary autoregressive process of order 1 (AR(1)) in which the autoregressive coefficient equals 1, as in X_t = \mu + X_{t-1} + \epsilon_t, causing innovations to have permanent effects and the variance to grow over time. This structure, tested via procedures like the Dickey-Fuller test, equates to integration of order 1 and distinguishes difference-stationary behavior from trend-stationarity. A prototypical example of a unit root process is the simple random walk, given by X_t = X_{t-1} + \epsilon_t, where \{\epsilon_t\} is white noise with zero mean and constant variance; this process is non-stationary but becomes stationary after first differencing, exemplifying an I(1) series.
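As a rough numerical illustration of the two forms of non-stationarity, the sketch below (Python with NumPy; the drift value 0.05 and the series names are arbitrary choices) simulates a trend-stationary series and a random walk with drift, then stabilizes the first by detrending and the second by first differencing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
t = np.arange(n)
eps = rng.normal(size=n)

trend_stat = 0.05 * t + eps          # trend-stationary: noise around a deterministic trend
walk_drift = np.cumsum(0.05 + eps)   # difference-stationary: random walk with drift 0.05

# Detrending stabilizes the trend-stationary series...
detrended = trend_stat - np.polyval(np.polyfit(t, trend_stat, 1), t)
# ...while first differencing stabilizes the unit-root series.
diffed = np.diff(walk_drift)

print(round(detrended.var(), 2), round(diffed.var(), 2))   # both near the innovation variance (1.0)
```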

Integer-Order Processes

First-Order Integration I(1)

A first-order integrated process, denoted I(1), is a non-stationary process that requires differencing once to become stationary, or I(0). Key properties include a non-constant mean, which may incorporate a drift term leading to a stochastic trend; unbounded variance that increases over time; and autocorrelations that decay very slowly due to persistent dependence between observations. These characteristics imply that I(1) processes exhibit random wandering behavior without reverting to a fixed level, making them common in economic data such as GDP or stock prices, where shocks have permanent effects.

The canonical example of an I(1) process is the random walk, which can include a drift component to model systematic trends. Without drift, it follows X_t = X_{t-1} + \epsilon_t, where \epsilon_t is white noise with mean zero and variance \sigma^2. With drift, the model becomes X_t = \mu + X_{t-1} + \epsilon_t, where \mu is the constant drift parameter. In this case, the expected value is E[X_t] = \mu t + E[X_0], producing a linear trend in expectation while retaining the stochastic fluctuations of the random walk. The variance of the partial sums for an I(1) process grows linearly with time, specifically \text{Var}(X_t) \approx t \sigma^2 for the driftless case, highlighting the accumulating uncertainty over longer horizons.

A critical implication of I(1) processes is the risk of spurious regression, where regressing two independent I(1) series on each other produces misleadingly high coefficients of determination (R^2) and statistically significant t-statistics purely by chance, undermining standard inference. This occurs because the non-stationarity amplifies apparent correlations that do not reflect causal relationships, necessitating pre-testing for unit roots before applying standard regression methods.

In the ARIMA framework, an I(1) process is modeled as ARIMA(p,1,q), where the integration order d=1 indicates that first differencing transforms the series into a stationary ARMA(p,q) process suitable for further autoregressive and moving average modeling. This differencing step addresses the unit root, allowing forecasts of the original series to be obtained by integrating the stationary differences.
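The spurious regression problem is easy to reproduce by simulation. The sketch below (Python with NumPy; the seed and sample size are arbitrary) regresses one simulated random walk on another, independent one, first in levels and then in first differences; the levels fit is often large purely by chance, while the differenced fit is near zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = np.cumsum(rng.normal(size=n))   # an I(1) random walk
y = np.cumsum(rng.normal(size=n))   # an independent I(1) random walk

# OLS of y on x in levels: apparent fit despite no true relationship.
beta, alpha = np.polyfit(x, y, 1)
r2_levels = 1 - np.var(y - (alpha + beta * x)) / np.var(y)

# The same regression on first differences removes the spurious association.
dx, dy = np.diff(x), np.diff(y)
b1, b0 = np.polyfit(dx, dy, 1)
r2_diffs = 1 - np.var(dy - (b0 + b1 * dx)) / np.var(dy)

print(round(r2_levels, 3), round(r2_diffs, 3))   # levels R^2 often sizable, differences R^2 near zero
```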

Higher-Order Integration I(d) for d > 1

Higher-order integrated processes of order d > 1, denoted I(d), extend the concept of integration beyond the first-order case by requiring multiple differencings to achieve stationarity. Specifically, a process X_t is I(d) if the (d-1)-th difference (1 - L)^{d-1} X_t is I(1), and the full d-th difference (1 - L)^d X_t is stationary. This structure builds on I(1) processes as foundational building blocks, where successive integrations accumulate non-stationarity. For example, an I(2) process can be written as X_t = X_{t-1} + Y_t, where Y_t is an I(1) process, implying that X_t integrates the non-stationary increments supplied by Y_t.

Properties of I(d) for d > 1 include even slower mean reversion compared to I(1), as shocks propagate with greater persistence across multiple lags. The variance of the process grows as t^{2d-1}, leading to explosive uncertainty over time; for instance, in an I(2) process, this growth is cubic (O(t^3)). Conversely, over-differencing in the attempt to achieve stationarity can introduce non-invertible moving average components, complicating model identification and estimation.

An illustrative example is the double-integrated I(2) process, which arises as the cumulative sum of a random walk and is encountered in physical or economic contexts, for example a price level that behaves as I(2) when the inflation rate is I(1). In empirical work, I(d) processes with d > 1 are rare, as data seldom support orders beyond 2, and higher differencing risks over-differencing that obscures long-run relationships and discards information. Such processes may also be misinterpreted as evidence of structural breaks due to their pronounced trend accelerations.
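A quick Monte Carlo check of the variance growth claim (a sketch assuming NumPy; the number of paths and the horizons are arbitrary) simulates many I(2) paths as double cumulative sums of white noise and compares the cross-sectional variance at several horizons with the leading term t^3/3 implied by exact calculation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n = 2000, 400
eps = rng.normal(size=(n_paths, n))
x2 = np.cumsum(np.cumsum(eps, axis=1), axis=1)   # I(2): double cumulative sum along time

# Under I(2), Var(X_t) = t(t+1)(2t+1)/6, whose leading term is t^3 / 3.
for t in (50, 100, 200, 400):
    print(t, int(x2[:, t - 1].var()), int(t ** 3 / 3))
```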

Construction Methods

Building from Stationary Series

One common method to construct an integrated time series of order d, denoted I(d), begins with a stationary I(0) series and applies successive differencing in reverse through cumulative summation. Specifically, to obtain an I(d) series from an I(d-1) series X_t, define the integrated series as Z_t = Z_{t-1} + X_t with Z_0 = 0, iterating this d times starting from the stationary base. This approach ensures the resulting series requires d differences to return to stationarity.

A practical example illustrates this iterative construction. Start with \epsilon_t \sim N(0, \sigma^2), which is I(0). Integrating once yields a random walk Z_t = Z_{t-1} + \epsilon_t, an I(1) process characterized by a stochastic trend. Integrating again produces an I(2) series W_t = W_{t-1} + Z_t, whose variance grows cubically over time (proportional to t^3). Such simulations are essential for understanding the behavior of higher-order integrated processes in econometric modeling.

Deterministic trends can be incorporated during construction to reflect real-world non-stationarities such as steady growth in economic series. For an I(1) series with drift, modify the recursion to Z_t = Z_{t-1} + \mu + \epsilon_t, where \mu is the drift parameter; the expected value then follows a linear trend, E[Z_t] = \mu t. Higher-order trends, such as quadratic trends, arise from integrating a linear trend in the differences, which is useful for illustrating trend-stationary versus difference-stationary distinctions in simulated series.

Software tools streamline these constructions for empirical analysis. In R, the cumsum() function applied to a vector of stationary innovations, such as cumsum(rnorm(n)), directly generates an I(1) random walk. Python's NumPy library offers analogous functionality via numpy.cumsum(), enabling quick simulation of integrated series from ARMA or white noise inputs. This building approach traces back to foundational work in time series, including early simulations in the Box-Jenkins methodology, where integrated ARIMA processes were generated to aid model identification through autocorrelation function patterns.
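A minimal Python version of this construction (mirroring the R cumsum(rnorm(n)) idiom mentioned above; the drift mu = 0.2 and the series names are illustrative) builds I(1) series with and without drift and an I(2) series from the same innovations.

```python
import numpy as np

rng = np.random.default_rng(4)
n, mu, sigma = 1000, 0.2, 1.0
eps = rng.normal(scale=sigma, size=n)   # I(0) innovations

i1 = np.cumsum(eps)                     # I(1): driftless random walk
i1_drift = np.cumsum(mu + eps)          # I(1) with drift: E[Z_t] = mu * t
i2 = np.cumsum(i1)                      # I(2): integrate the I(1) series again

print(round(np.diff(i1_drift).mean(), 3))   # close to mu, the mean of the differenced series
```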

Mathematical Formulations

The mathematical foundation of integrated processes relies on the lag operator L, defined such that L X_t = X_{t-1}, allowing compact representation of temporal dependencies in time series. A time series \{X_t\} is integrated of order d, denoted I(d), if the d-th difference \Delta^d X_t = (1 - L)^d X_t is a stationary process u_t, typically following an ARMA model. This formulation, (1 - L)^d X_t = u_t, encapsulates the differencing required to achieve stationarity, where d is a non-negative integer for integer-order processes.

Inverting this relation yields the integrated form X_t = (1 - L)^{-d} u_t, representing the d-th cumulative sum of the underlying stationary innovations u_t. For d = 1, the binomial theorem provides the expansion (1 - L)^{-1} = \sum_{k=0}^{\infty} L^k, implying an infinite moving average (MA(\infty)) structure X_t = \sum_{k=0}^{\infty} u_{t-k}, or, with a finite starting value, X_t = X_0 + \sum_{j=1}^{t} u_j, in which shocks persist indefinitely without decay. For higher orders d > 1, the generalized binomial expansion extends this: for instance, (1 - L)^{-2} = \sum_{k=0}^{\infty} (k+1) L^k, with coefficients increasing linearly in k, amplifying the cumulative effect of past shocks and leading to polynomial growth in variance. In general, the coefficients in (1 - L)^{-d} = \sum_{k=0}^{\infty} \binom{k + d - 1}{d - 1} L^k grow as k^{d-1}, underscoring the explosive persistence in higher-order integrations.

This operator framework connects directly to stochastic difference equations, such as the AR(1) model with a unit root, X_t = X_{t-1} + u_t, or equivalently (1 - L) X_t = u_t. The general solution is the particular solution plus the homogeneous solution, yielding X_t = X_0 + \sum_{j=1}^t u_j, a random walk that exemplifies I(1) behavior through non-decaying accumulation of shocks. For higher d, analogous solutions involve repeated summations, reinforcing the theoretical role of these lag polynomials in modeling non-stationarity.
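The MA(\infty) interpretation of (1 - L)^{-d} can be verified directly for d = 2: the weights \binom{k+1}{1} = k + 1 convolved with the innovations reproduce the double cumulative sum. The following sketch (Python with NumPy; a truncated, finite-sample check rather than a proof) illustrates this.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
u = rng.normal(size=n)

# MA(infinity) weights of (1 - L)^{-d} are binom(k + d - 1, d - 1); for d = 2 they equal k + 1.
w2 = np.arange(n) + 1.0

# Truncated convolution X_t = sum_{k=0}^{t} w_k u_{t-k} matches the double cumulative sum.
x_conv = np.array([np.dot(w2[:t + 1], u[t::-1]) for t in range(n)])
x_cumsum = np.cumsum(np.cumsum(u))

print(np.allclose(x_conv, x_cumsum))   # True
```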

Fractional Integration

Definition and Fractional Differencing

Fractional integration extends the concept of integer-order integration to non-integer orders d, denoted I(d), allowing processes to exhibit intermediate levels of persistence between short-memory processes (d = 0) and non-stationary unit root processes (d = 1). For 0 < d < 0.5, the process is stationary with long-memory behavior, characterized by slowly decaying autocorrelations and mean reversion. In contrast, when d \geq 0.5, the process is non-stationary, though it remains mean-reverting if d < 1.

The fractional differencing operator, which transforms an I(d) process into a stationary I(0) process, is defined as (1 - L)^d, where L is the backward-shift operator such that L X_t = X_{t-1}. This operator expands to an infinite series: (1 - L)^d = \sum_{k=0}^{\infty} \binom{d}{k} (-1)^k L^k, with generalized binomial coefficients given by \binom{d}{k} = \frac{d (d-1) \cdots (d-k+1)}{k!} for k \geq 1 and \binom{d}{0} = 1. This formulation, introduced independently by Granger and Joyeux (1980) and Hosking (1981) to model long-memory processes, ensures the differenced series captures the hyperbolic decay in autocorrelations typical of such dynamics.

The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the ARIMA framework by incorporating fractional differencing, specified as ARFIMA(p, d, q), where p and q denote the orders of the autoregressive and moving average components, respectively. The model equation is (1 - L)^d \Phi(L) (X_t - \mu) = \Theta(L) \varepsilon_t, with \Phi(L) = 1 - \sum_{i=1}^p \phi_i L^i the AR polynomial, \Theta(L) = 1 + \sum_{j=1}^q \theta_j L^j the MA polynomial, \mu the mean, and \varepsilon_t white noise innovations. Hosking (1981) formalized this structure to accommodate long memory while maintaining the invertibility and stationarity properties of the ARMA components for -0.5 < d < 0.5.

A key example of fractional integration arises in processes related to the Hurst parameter H, a measure of long-term memory in self-similar time series. For fractional Gaussian noise, the integration order satisfies d = H - 0.5 when 0.5 < H < 1, linking higher H values (indicating persistence) to stronger long memory in the I(d) process. This relation underscores the model's utility in analyzing phenomena like river flows or financial returns exhibiting Hurst-like behavior.
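The binomial coefficients of the fractional differencing operator can be generated with a simple recursion, w_k = w_{k-1} (k - 1 - d) / k, which follows from the expansion above. The sketch below (Python with NumPy; the helper names frac_diff_weights and frac_diff are hypothetical, not a standard library API) computes the weights and applies a truncated fractional difference, checking that d = 1 reduces to the ordinary first difference.

```python
import numpy as np

def frac_diff_weights(d, n_weights):
    """Coefficients of (1 - L)^d = sum_k (-1)^k C(d, k) L^k, via w_k = w_{k-1} (k - 1 - d) / k."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the truncated fractional difference (1 - L)^d to a 1-D series x."""
    w = frac_diff_weights(d, len(x))
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

print(frac_diff_weights(0.4, 5))   # approximately [1.0, -0.4, -0.12, -0.064, -0.0416]

x = np.cumsum(np.random.default_rng(10).normal(size=50))
print(np.allclose(frac_diff(x, 1)[1:], np.diff(x)))   # True: d = 1 is the ordinary first difference
```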

Properties and Long-Memory Processes

Fractionally integrated processes exhibit long memory when the integration order d satisfies 0 < d < 0.5, characterized by autocorrelations that decay hyperbolically rather than exponentially, as in short-memory processes. Specifically, the autocorrelation function \rho_k behaves asymptotically as \rho_k \sim k^{2d-1} for large lags k, leading to persistent dependence over long horizons. This slow decay implies that past shocks have a prolonged influence, distinguishing these processes from traditional ARMA models, where correlations diminish rapidly.

In the frequency domain, the spectral density f(\omega) of a fractionally integrated process diverges at low frequencies, approximating f(\omega) \sim |2 \sin(\omega/2)|^{-2d} as \omega \to 0, which underscores the concentration of variance in low-frequency components and reinforces the persistence observed in time-domain autocorrelations. This property highlights the long-memory character of the process, as most of its energy is captured by smooth, low-frequency movements rather than high-frequency fluctuations.

A distinctive feature of fractional integration is its aggregation property: the sum of independent I(d) processes remains I(d), preserving the long-memory parameter under linear combination, in contrast to short-memory processes, whose aggregates typically retain short memory. This invariance ensures that the hyperbolic decay and spectral behavior survive summation, facilitating modeling of aggregated economic or financial data.

For d < 0 (specifically -0.5 < d < 0), fractionally integrated processes display anti-persistence, with autocorrelations that are negative and decay hyperbolically, resulting in mean reversion faster than in comparable short-memory models; this behavior is exploited in volatility modeling, such as in FIGARCH processes, to capture volatility clustering without imposing infinite persistence. These properties have been applied to model natural and economic phenomena exhibiting slow correlation decay, such as river flow persistence first noted in hydrological studies of reservoir storage, or financial series such as asset returns, where long-memory effects manifest in volatility patterns.
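The hyperbolic decay rate can be checked against the exact autocorrelations of an ARFIMA(0, d, 0) process, which satisfy the recursion \rho_k = \rho_{k-1} (k - 1 + d) / (k - d). A small sketch (Python with NumPy; d = 0.3 and the lag choices are arbitrary, and arfima_acf is an illustrative helper) compares the ratio \rho_{200} / \rho_{100} with the asymptotic prediction 2^{2d-1}.

```python
import numpy as np

def arfima_acf(d, max_lag):
    """Exact autocorrelations of ARFIMA(0, d, 0): rho_k = rho_{k-1} * (k - 1 + d) / (k - d)."""
    rho = np.ones(max_lag + 1)
    for k in range(1, max_lag + 1):
        rho[k] = rho[k - 1] * (k - 1 + d) / (k - d)
    return rho

d = 0.3
rho = arfima_acf(d, 200)
# Hyperbolic decay rho_k ~ c * k^(2d - 1) implies rho_200 / rho_100 ~ 2^(2d - 1).
print(round(rho[200] / rho[100], 3), round(2 ** (2 * d - 1), 3))   # both roughly 0.76
```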

Testing and Estimation

Unit Root and Stationarity Tests

Unit root and stationarity tests are essential statistical procedures for determining whether a time series is integrated, particularly for detecting the presence of a unit root, which implies non-stationarity and integration of order one or higher. These tests help confirm the order of integration by evaluating the null hypothesis of a unit root against alternatives of stationarity, guiding appropriate differencing or modeling choices in time series analysis.

The Augmented Dickey-Fuller (ADF) test is a widely used method to test for a unit root in a time series. It extends the original Dickey-Fuller test by including lagged differences to account for higher-order autoregressive dynamics in the error term. The test involves estimating the regression equation

\Delta X_t = \alpha + \beta X_{t-1} + \gamma t + \sum_{i=1}^{p} \delta_i \Delta X_{t-i} + \varepsilon_t

where \Delta X_t = X_t - X_{t-1}, \alpha is the intercept (drift), \gamma t is a linear trend term (if included), and \varepsilon_t is a white noise error. The null hypothesis is H_0: \beta = 0 (unit root, non-stationary), tested using the t-statistic on \beta against the alternative H_1: \beta < 0 (stationary). The original Dickey-Fuller test was developed by Dickey and Fuller, who derived the non-standard distribution of the test statistic under the unit root null. The augmentation with p lagged differences was introduced by Said and Dickey to ensure valid inference even when the true ARMA order of the errors is unknown.

The Phillips-Perron (PP) test modifies the basic Dickey-Fuller regression by applying non-parametric corrections to the test statistic to handle serial correlation and heteroskedasticity in the errors without explicitly estimating additional parameters. It uses the same regression form as the Dickey-Fuller test but adjusts the test statistic and the long-run variance estimate via Newey-West-type heteroskedasticity and autocorrelation consistent (HAC) estimators. Under the null H_0: \beta = 0, the corrected statistic has the same asymptotic distribution as the Dickey-Fuller statistic, but the corrections improve robustness in finite samples. This approach was proposed by Phillips and Perron as a semiparametric alternative to parametric lag augmentation.

In contrast, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test complements tests like the ADF and PP by reversing the hypotheses: it tests the null H_0 that the series is stationary (level or trend stationary) against the alternative H_1 of a unit root (non-stationarity). The test statistic is based on a variance ratio comparing the sample variance of partial sums of residuals to a long-run variance estimate under the stationarity null. If the KPSS statistic exceeds the critical value, it rejects stationarity, providing evidence for a unit root. This test was developed by Kwiatkowski, Phillips, Schmidt, and Shin to address the low power of traditional unit root tests against alternatives near the unit root boundary.

Critical values for these tests follow non-standard asymptotic distributions under the unit root null, distinct from standard normal or t-distributions, requiring specialized tables. For the ADF and PP tests, Dickey and Fuller provided critical values for cases with no constant, constant only (drift), and constant plus trend, derived from simulations of the limiting Dickey-Fuller distribution. For example, in the no-constant case the 5% critical value is approximately -1.95 for large samples, while with a trend it is around -3.45. The KPSS test uses critical values from the distribution of its statistic under stationarity, such as 0.463 at the 5% level for level stationarity. These values ensure proper size control, as the tests' distributions depend on the deterministic components, such as drifts or trends, included in the regression.

Unit root tests like the ADF and PP suffer from low power, particularly against alternatives in which the largest autoregressive root is close to, but less than, one (near-unit roots), often failing to reject the null even when the series is stationary. This issue arises because the test statistics' finite-sample distributions converge slowly to their asymptotic limits, and finite-sample biases exacerbate the problem. To mitigate it, the lag length p in the ADF regression must be selected carefully; information criteria such as the Akaike Information Criterion (AIC) or Schwarz Information Criterion (SIC) are commonly used, balancing model fit and parsimony, though the SIC tends to select shorter lags, which can yield better size properties. Ng and Perron recommended modified information criteria for improved size and power in near-unit-root cases.
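In practice these tests are available in standard libraries. The sketch below (Python, assuming statsmodels is installed; the simulated random walk and seed are arbitrary) applies the ADF and KPSS tests to an I(1) series and to its first difference, illustrating how the two tests' opposite null hypotheses are read together.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(size=500))   # simulated I(1) random walk

adf_p = adfuller(x, regression="c", autolag="AIC")[1]
kpss_p = kpss(x, regression="c", nlags="auto")[1]
# For an I(1) series: ADF fails to reject its unit-root null (large p),
# while KPSS rejects its stationarity null (small p).
print(f"levels:      ADF p = {adf_p:.3f}, KPSS p = {kpss_p:.3f}")

dx = np.diff(x)
# For the first difference, the conclusions reverse, pointing to d = 1.
print(f"differences: ADF p = {adfuller(dx)[1]:.3f}, KPSS p = {kpss(dx, nlags='auto')[1]:.3f}")
```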

Methods to Determine Integration Order

Determining the order of integration d for integer values typically involves sequential application of unit root tests such as the ADF, starting from the original series and differencing until the null hypothesis of a unit root is rejected in favor of stationarity. The process begins by testing the original series for a unit root using the ADF regression; if the test rejects the null, then d = 0; otherwise, the first difference is tested, incrementing d until stationarity is confirmed, often up to a maximum order such as d = 2 to avoid over-differencing. While straightforward, this method relies on the power of the underlying test and can be sensitive to lag selection and sample size.

For fractional orders with 0 < d < 1, the Geweke-Porter-Hudak (GPH) estimator provides a semiparametric approach by regressing the log-periodogram at low frequencies on a transformation of those frequencies. Specifically, it estimates d from the regression \log f(\omega_j) = c - 2d \log \left| 2 \sin \left( \frac{\omega_j}{2} \right) \right| + \epsilon_j, where f(\omega_j) is the periodogram evaluated at the Fourier frequencies \omega_j = 2\pi j / T for j = 1, \dots, m, with m chosen as a power of the sample size T (e.g., m \approx T^{0.5}); the slope coefficient on the log term yields -2\hat{d}. This method exploits the spectral density's behavior near zero frequency, assumes no dominant short-memory ARMA components at the frequencies used, and is computationally simple, but it can suffer from bias in small samples or when d is near the boundaries.

Parametric estimation of fractional d in ARFIMA models often uses maximum likelihood, which jointly optimizes all parameters, including d, via the exact likelihood function or approximations for efficiency. The exact maximum likelihood approach, derived for stationary univariate ARFIMA processes, involves evaluating the likelihood using state-space representations or hypergeometric functions to handle the infinite AR/MA expansions. For larger samples or computational tractability, the Whittle approximation to the likelihood minimizes the discrepancy between the observed periodogram and the theoretical spectral density, providing consistent estimates of d under Gaussian assumptions. Numerical optimization techniques, such as BFGS, are typically employed to maximize the log-likelihood.

In small samples, bootstrap methods enhance inference on d by resampling residuals or blocks from fitted models to approximate the sampling distributions of estimators or test statistics. For integer orders, bootstrap-sequential procedures generate empirical critical values to mitigate size distortions in panel or multivariate settings. For fractional d, wild bootstraps and related resampling schemes applied to GPH-type tests improve coverage and power under heteroskedasticity or non-normality, often reducing bias in the estimated long-memory parameter.

A key challenge in these methods is pre-testing bias, in which sequential unit root tests distort subsequent parameter estimates by altering the effective sample size or introducing dependence on arbitrary significance levels. This bias can lead to over-rejection of stationarity or inefficient ARIMA specifications; joint estimation of d within full ARFIMA frameworks is therefore recommended to avoid stepwise pitfalls and ensure consistent modeling.
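A bare-bones version of the GPH log-periodogram regression can be written in a few lines. The sketch below (Python with NumPy; the function name gph_estimate and the bandwidth choice m = T^{0.5} are illustrative, and no short-memory correction is applied) regresses the log periodogram on -2 \log|2 \sin(\omega_j / 2)| so that the slope estimates d.

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke-Porter-Hudak log-periodogram regression estimate of the fractional order d."""
    n = len(x)
    m = int(n ** power)                              # number of low Fourier frequencies used
    j = np.arange(1, m + 1)
    freqs = 2 * np.pi * j / n
    fft = np.fft.fft(x - np.mean(x))
    periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)
    # log I(w_j) = c + d * [-2 log|2 sin(w_j / 2)|] + error, so the slope estimates d.
    regressor = -2 * np.log(np.abs(2 * np.sin(freqs / 2)))
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return slope

rng = np.random.default_rng(7)
print(round(gph_estimate(rng.normal(size=2000)), 2))              # white noise: estimate near 0
print(round(gph_estimate(np.cumsum(rng.normal(size=2000))), 2))   # random walk: roughly 1, with sampling noise
```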

Implications and Applications

In Econometric Modeling

In econometric modeling, the order of integration is fundamental to model specification, as it dictates the transformations needed to ensure stationarity and avoid invalid inferences. For univariate time series, autoregressive integrated moving average (ARIMA) models incorporate the integration order d as the differencing parameter, where the series is differenced d times (typically d = 1 for I(1) processes) using the operator \Delta^d y_t = (1 - L)^d y_t, with L the lag operator, to render the data stationary before fitting autoregressive (AR) and moving average (MA) components. This approach, central to the Box-Jenkins methodology, ensures that the model's residuals are white noise and that forecasts are reliable by aligning the model with the underlying stochastic structure of the data.

Extending to multivariate settings, the order of integration informs the choice between vector autoregression (VAR) and vector error correction models (VECM) when series share common integration orders. If multiple I(1) series possess stationary linear combinations, known as cointegrating vectors, they are cointegrated, implying long-run equilibrium relationships; in such cases a VECM is specified rather than a VAR in levels or differences, as it explicitly models short-run deviations from the long-run equilibrium via error correction terms while preserving the informational content of the levels. This framework, building on cointegration theory, prevents the loss of long-run dynamics that occurs in differenced VAR models and enhances the model's ability to capture adjustment toward equilibrium.

The implications for forecasting are pronounced for integrated processes: predictions from I(1) models, such as random walks, have forecast-error variances that grow linearly with the horizon, so prediction intervals widen steadily and long-term uncertainty is greater than for stationary series. Differencing to achieve stationarity improves precision for short-term forecasts by focusing on changes, but it yields interval estimates for differences rather than levels, necessitating reintegration for level forecasts and potentially amplifying errors over longer horizons.

In macroeconomic modeling, ignoring integration orders risks spurious correlations, as regressions among non-stationary series like GDP (typically I(1)) produce misleading t-statistics and R-squared values; differenced specifications are thus standard to yield valid results, exemplified in Phillips curve models where unemployment rates, often found to be I(1), are differenced to relate inflation dynamics properly to labor market fluctuations. Empirically, the Nelson-Plosser (1982) dataset of U.S. macroeconomic series from 1909–1978 illustrates these principles, revealing that most variables, including real and nominal GNP, industrial production, and other major series, are I(1) rather than stationary around a deterministic trend, underscoring the prevalence of unit roots in macro data and the necessity of integration-aware modeling to interpret economic shocks accurately.
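The differencing parameter enters directly in software implementations of ARIMA. The sketch below (Python, assuming statsmodels; the simulated data, the AR coefficient 0.5, and the order (1, 1, 0) are illustrative) fits a model with d = 1 so that the package differences the series internally and re-integrates when forecasting levels.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
# Simulate an I(1) series whose first differences follow an AR(1) with coefficient 0.5.
d_y = np.zeros(400)
for t in range(1, 400):
    d_y[t] = 0.5 * d_y[t - 1] + rng.normal()
y = np.cumsum(d_y)

# ARIMA(1, 1, 0): the middle index d = 1 requests one round of internal differencing.
res = ARIMA(y, order=(1, 1, 0)).fit()
print(res.params)              # AR(1) coefficient should be near 0.5
print(res.forecast(steps=5))   # level forecasts, obtained by re-integrating the differenced model
```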

In Financial and Economic Analysis

In financial analysis, the efficient market hypothesis (EMH) posits that stock prices follow integrated processes of order one (I(1)), modeled as random walks, which implies that price changes, or returns, are stationary (I(0)) and unpredictable from past information. This framework supports the weak form of the EMH, under which future price movements cannot be forecast using historical price data alone, as any predictable patterns would be arbitraged away. Empirical tests, such as unit root analyses of historical stock indices like the Dow Jones Industrial Average, consistently find evidence of I(1) behavior in prices, reinforcing the unpredictability central to market efficiency.

Fractional integration has been applied to model long-memory effects in financial volatility, particularly through the FIGARCH (fractionally integrated generalized autoregressive conditional heteroskedasticity) framework, where the fractional differencing parameter d typically ranges from approximately 0.2 to 0.5, capturing persistent clustering in squared returns without implying non-stationarity. This long-memory structure, with 0 < d < 0.5, allows shocks to volatility to decay hyperbolically rather than exponentially, better fitting empirical features of asset return volatility across asset markets. For instance, estimates for volatility series indicate d in the range of 0.22 to 0.30, highlighting the model's utility in risk management and option pricing by accounting for prolonged volatility persistence.

In pairs trading strategies, cointegration among I(1) assets enables the identification of mean-reverting spreads, where long-term equilibrium relationships allow traders to exploit temporary deviations for profit. The Engle-Granger two-step method tests for cointegration by regressing one asset's price on another and checking the stationarity of the residuals; if the assets are cointegrated, the spread reverts to its mean, supporting market-neutral positions that go long the underperformer and short the outperformer. Empirical studies on equity pairs demonstrate profitable mean-reversion trades when cointegration holds (a simulated sketch appears at the end of this section).

The order of integration also informs the analysis of economic cycles by classifying key indicators such as gross national product (GNP) and inflation as I(1) or fractionally integrated with d < 1, which quantifies the persistence of business fluctuations and policy impacts. Seminal evidence from U.S. data (1909–1978) shows real GNP exhibiting I(1) behavior, implying that shocks have permanent effects and contribute to stochastic trends in output rather than deterministic cycles. For inflation, measured by consumer price indices, fractional integration with d \approx 0.7–0.9 (less than 1) suggests long but ultimately mean-reverting memory, aiding central banks in modeling inflation persistence and designing monetary policies to dampen cycles.

Post-2000 developments have extended order-of-integration analysis to high-frequency financial data, revealing I(1) behavior with jumps in assets such as Bitcoin, enhancing market microstructure modeling and trading at intraday scales. Unit root tests on daily Bitcoin prices from 2010–2020 confirm I(1) non-stationarity in levels but stationarity in first differences, often interrupted by jumps during volatility spikes, as seen in the 2017–2018 bull runs. In high-frequency contexts, such as 1-minute equity tick data post-2008, fractional integration parameters near 0.3–0.4 capture long-memory persistence in order flow, informing high-speed algorithmic strategies and liquidity provision. These applications highlight evolving econometric tools for volatile, real-time markets. Recent studies as of 2025 have incorporated fractional integration to analyze post-COVID-19 inflation dynamics, estimating higher persistence (d \approx 0.8–1.0) in U.S. CPI during 2021–2023 amid supply disruptions, informing monetary tightening policies.
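A simulated version of the cointegration-based pairs logic discussed above can be sketched as follows (Python, assuming statsmodels; the shared-trend construction, the noise scales, and the loading 0.8 are arbitrary): two artificial I(1) prices driven by a common stochastic trend are tested with the Engle-Granger procedure, and the estimated spread is the series a pairs trade would monitor.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(9)
n = 1000
common = np.cumsum(rng.normal(size=n))        # shared I(1) stochastic trend

# Two simulated "prices" driven by the same trend plus stationary noise: cointegrated by construction.
p1 = common + rng.normal(scale=0.5, size=n)
p2 = 0.8 * common + rng.normal(scale=0.5, size=n)

# Engle-Granger test: null of no cointegration; a small p-value supports a mean-reverting spread.
t_stat, p_value, _ = coint(p1, p2)
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.4f}")

# The estimated spread p1 - beta * p2 is the mean-reverting series monitored in pairs trading.
beta = np.polyfit(p2, p1, 1)[0]
spread = p1 - beta * p2
print(round(spread.std(), 2))
```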

References

1. Distribution of the Estimators for Autoregressive Time Series With ... (Dickey and Fuller)
2. Integration (PDF)
3. Trends and random walks in macroeconomic time series
4. Notes on the random walk model (Duke University)
5. Spurious regressions in econometrics (ScienceDirect)
6. Box-Jenkins modelling (Rob J Hyndman)
7. Variable Trends in Economic Time Series (Princeton University)
8. Econ 512: Financial Econometrics, Time Series Concepts
9. Over-Differencing and Forecasting with Non-Stationary Time Series ...
10. Structural Breaks in Time Series (Boston University)
11. Random walk models, SOGA-R (Freie Universität Berlin)
12. 4.6 Simulation, Technical Analysis with R (Bookdown)
13. A Simple Estimator of Cointegrating Vectors in Higher Order ...
14. Fractional differencing (J. R. M. Hosking, Biometrika, Volume 68, Issue 1, April 1981, Pages 165–176)
15. Long memory processes and fractional integration in econometrics
16. Testing for a unit root in time series regression
17. Testing for a unit root in time series regression (Oxford Academic)
18. How sure are we that economic time series have a unit root?
19. Lag Length Selection and the Construction of Unit Root Tests with ...
20. Identifying the Order of Integration of Time-series Data Using ...
21. Bootstrap sequential tests to determine the order of integration of ...
22. The Estimation and Application of Long Memory Time ...
23. Maximum likelihood estimation of stationary univariate fractionally ...
24. Robust Testing for Fractional Integration using the Bootstrap
25. Testing for a Unit Root in Time Series with Pretest Data-Based ... (JSTOR)
26. Co-integration and Error Correction ...
27. Forecasting ARMA Models
28. An empirical analysis of the Phillips Curve (DiVA portal)
29. nelson-plosser-1982 (Nelson and Plosser, 1982)
30. The Efficient Market Hypothesis and its Critics (Princeton University)
31. Random Walks in Stock-Market Prices (Chicago Booth)
32. Trends and Random Walks in Macroeconomic Time Series (JSTOR)
33. Long memory and FIGARCH models for daily and high frequency ...
34. Applying multivariate-fractionally integrated volatility analysis on ...
35. Cointegration-based pairs trading: identifying and exploiting similar ...
36. Further Long Memory Properties of Inflationary Shocks (JSTOR)
37. Long memory and fractional integration in high frequency data on ...
38. The Bitcoin price and Bitcoin price uncertainty: Evidence of Bitcoin ...
39. Stylized Facts of High-Frequency Bitcoin Time Series (arXiv)