
Autoregressive model

An autoregressive (AR) model is a statistical representation of a random process in which each observation in a time series is expressed as a linear function of one or more previous observations from the same series, plus a stochastic error term, making it a fundamental tool for modeling dependencies in sequential data. The general form of an AR model of order p, denoted AR(p), is given by the equation
y_t = c + \sum_{i=1}^p \phi_i y_{t-i} + \epsilon_t,
where y_t is the value at time t, c is a constant, \phi_i are the model parameters (autoregressive coefficients), and \epsilon_t is a white noise error term with mean zero and constant variance. For the simplest case, an AR(1) model, this reduces to y_t = c + \phi_1 y_{t-1} + \epsilon_t, which is stationary when |\phi_1| < 1.
The concept originated in the early 20th century, with George Udny Yule introducing the first AR(2) model in 1927 to investigate periodicities in sunspot data, addressing limitations of purely deterministic cycle models by incorporating random disturbances. This work was extended by Gilbert Thomas Walker in 1931, who generalized the approach to higher-order autoregressions and derived methods for parameter estimation, laying the groundwork for the Yule-Walker equations that solve for the coefficients using autocorrelations. Autoregressive models are central to time series analysis, particularly in econometrics and finance, where they forecast variables like stock prices or economic indicators by capturing serial correlation; for instance, AR models have been applied to predict Google stock returns based on lagged values. In signal processing, they model stationary processes for tasks like speech analysis or noise reduction, assuming the data-generating mechanism follows a linear recursive structure. In contemporary machine learning, autoregressive principles underpin generative models for sequences, such as those in natural language processing (e.g., predicting the next word conditioned on prior context) and computer vision (e.g., generating images pixel by pixel), enabling scalable density estimation through the chain rule of probability. These extensions, often implemented with neural networks like recurrent or transformer architectures, have revolutionized applications in large language models while inheriting the core idea of sequential dependency modeling.

Fundamentals

Definition

An autoregressive (AR) model is a stochastic process in which each observation is expressed as a linear combination of previous observations of the same process plus a random error term. These models are fundamental in time series analysis for capturing temporal dependencies, where the value at time t relies on prior values rather than assuming observations are independent. In contrast to models treating data points as unrelated, AR models leverage the inherent autocorrelation in sequential data, such as economic indicators or natural phenomena, to represent persistence or momentum. The general form of an AR model of order p, denoted AR(p), is given by X_t = c + \sum_{i=1}^p \phi_i X_{t-i} + \varepsilon_t, where c is a constant, \phi_i are the model parameters (autoregressive coefficients), and \varepsilon_t is white noise—a sequence of independent and identically distributed random variables with mean zero and constant variance. The order p indicates the number of lagged terms included, allowing the model to account for dependencies extending back p periods. AR models differ from moving average (MA) models, which express the current value as a linear combination of past forecast errors rather than past values of the series itself. The term "autoregressive" derives from the idea of performing a regression of the variable against its own lagged values, emphasizing self-dependence within the time series. This framework assumes stationarity for reliable inference, though extensions like ARIMA incorporate differencing for non-stationary data.

Historical Development

The origins of autoregressive models trace back to the work of British statistician George Udny Yule, who in 1927 introduced autoregressive schemes to analyze periodicities in disturbed time series, particularly applying them to Wolfer's sunspot numbers to model cycles in astronomical data. Yule's approach represented a departure from traditional periodogram methods, emphasizing stochastic processes where current values depend on past observations to capture quasi-periodic behaviors in time series. In 1931, Gilbert Thomas Walker extended Yule's framework by generalizing it to higher-order autoregressive models, allowing for more flexible representations of complex dependencies in related time series. In the 1930s and 1940s, Herman Wold further advanced the theory by developing the Wold decomposition, showing that stationary processes can be represented as infinite-order AR or MA models, paving the way for ARMA frameworks. A major milestone came in 1970 with George E. P. Box and Gwilym M. Jenkins, who incorporated autoregressive models into the ARMA framework in their seminal book, providing a systematic methodology for identification, estimation, and forecasting that popularized AR models across forecasting applications in statistics and beyond. Since the 1980s, autoregressive models have seen modern extensions in signal processing for spectral estimation and analysis of stationary signals, as well as in machine learning through autoregressive neural networks that leverage past outputs for sequence generation tasks.

Model Formulation

General AR(p) Equation

The autoregressive model of order p, commonly denoted as AR(p), specifies that the value of a time series at time t, X_t, depends linearly on its previous p values plus a constant term and a stochastic error. The general form of the model is given by X_t - \sum_{i=1}^p \phi_i X_{t-i} = c + \varepsilon_t, where \phi_1, \dots, \phi_p are the autoregressive parameters, c is a constant representing the deterministic component (often related to the mean of the process), and \varepsilon_t is a white noise error term. This equation can be rearranged as X_t = c + \sum_{i=1}^p \phi_i X_{t-i} + \varepsilon_t. The error term \varepsilon_t is assumed to have mean zero, constant variance \sigma^2 > 0, and to be uncorrelated across time, i.e., \mathbb{E}[\varepsilon_t] = 0, \mathbb{E}[\varepsilon_t \varepsilon_s] = 0 for t \neq s, and \mathrm{Var}(\varepsilon_t) = \sigma^2. For statistical inference, such as maximum likelihood estimation, the errors are often further assumed to be independent and identically distributed as Gaussian, \varepsilon_t \sim \mathcal{N}(0, \sigma^2). When c = 0, the model is homogeneous, implying a zero-mean process, which is suitable for centered data. In the inhomogeneous case with c \neq 0, the constant accounts for a non-zero mean, and under weak stationarity, the unconditional mean of the process is \mu = c / (1 - \sum_{i=1}^p \phi_i). The model assumes weak stationarity, meaning the mean, variance, and autocovariances are time-invariant, which requires the roots of the characteristic polynomial to lie outside the unit circle (as detailed in the stationarity conditions section). For inference involving normality assumptions, Gaussian errors facilitate exact likelihood computations. A compact notation for the AR(p) model employs the backshift operator B, defined such that B X_t = X_{t-1} and B^k X_t = X_{t-k} for k \geq 1. The autoregressive polynomial is \phi(B) = 1 - \sum_{i=1}^p \phi_i B^i, leading to the compact form \phi(B) X_t = c + \varepsilon_t. This notation simplifies manipulations, such as differencing or combining with moving average components in broader ARMA models. For stationary AR(p) processes, the model admits an infinite moving average (MA(\infty)) representation, expressing X_t as an infinite sum of current and past errors plus the mean: X_t = \mu + \sum_{j=0}^\infty \psi_j \varepsilon_{t-j}, where the coefficients \psi_j are determined by the autoregressive parameters and satisfy \psi_0 = 1 with \sum_{j=0}^\infty |\psi_j| < \infty to ensure absolute summability. This representation underscores the process's dependence on the entire error history, providing a foundation for forecasting and spectral analysis.
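As a concrete illustration of the MA(\infty) representation, the following minimal Python sketch (numpy only; the AR(2) coefficient values are arbitrary choices for illustration, not taken from any source cited here) computes the first few \psi_j weights recursively from the autoregressive parameters.
python
import numpy as np

# Hypothetical AR(2) coefficients chosen for illustration (stationary case)
phi = np.array([0.5, -0.3])
n_weights = 10

# psi_0 = 1; psi_j = sum_{i=1}^{min(j,p)} phi_i * psi_{j-i} for j >= 1
psi = np.zeros(n_weights)
psi[0] = 1.0
for j in range(1, n_weights):
    for i in range(1, min(j, len(phi)) + 1):
        psi[j] += phi[i - 1] * psi[j - i]

print(psi)  # weights decay toward zero for a stationary AR(2)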

Stationarity Conditions

In time series analysis, weak stationarity, also known as covariance stationarity, requires that a process has a constant mean, constant variance, and autocovariances that depend solely on the time lag rather than the specific time points. For an autoregressive process of order p, denoted AR(p), this property ensures that the statistical characteristics remain invariant over time, facilitating reliable modeling and forecasting. The necessary and sufficient condition for an AR(p) process to be weakly stationary (with a causal representation) is that all roots of the characteristic equation \phi(z) = 1 - \sum_{i=1}^p \phi_i z^i = 0 lie outside the unit circle in the complex plane, meaning their moduli satisfy |z| > 1. This condition guarantees the existence of a unique stationary solution. For the AR(1) process y_t = \phi y_{t-1} + \varepsilon_t, stationarity holds when |\phi| < 1. If the stationarity condition is violated, such as when one or more roots have modulus |z| \leq 1, the AR process becomes non-stationary, exhibiting behaviors like unit root processes (e.g., random walks with time-dependent variance) or explosive dynamics where variance grows without bound. In the case of a unit root (|z| = 1), as in an AR(1) with \phi = 1, the process integrates to form a non-stationary series with persistent shocks. To address non-stationarity in AR processes, differencing transforms the series into a stationary one by applying the operator \nabla y_t = y_t - y_{t-1}, which removes trends or unit roots; higher-order differencing (\nabla^d) may be needed for processes integrated of order d > 1. This approach underpins ARIMA models, where the differenced series follows a stationary ARMA model.
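The root condition can be checked numerically. The sketch below (numpy only; the coefficient values are illustrative assumptions) forms the characteristic polynomial 1 - \phi_1 z - \cdots - \phi_p z^p and verifies that all of its roots lie outside the unit circle.
python
import numpy as np

def is_stationary(phi):
    """Return True if all roots of 1 - phi_1 z - ... - phi_p z^p lie outside the unit circle."""
    # np.roots expects coefficients ordered from the highest degree to the constant term
    coeffs = np.r_[-np.array(phi)[::-1], 1.0]
    roots = np.roots(coeffs)
    return np.all(np.abs(roots) > 1.0)

print(is_stationary([0.5]))        # AR(1) with phi = 0.5 -> True
print(is_stationary([1.2]))        # explosive AR(1) -> False
print(is_stationary([0.4, -0.8]))  # AR(2) with complex roots -> True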

Properties and Analysis

Characteristic Polynomial

The characteristic polynomial of an autoregressive model of order p, denoted \phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p, arises from the AR(p) operator \Phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p, where B is the backshift operator such that B X_t = X_{t-1}. This polynomial encapsulates the linear dependence structure of the process defined by X_t = \sum_{j=1}^p \phi_j X_{t-j} + \epsilon_t, with \epsilon_t as white noise. To derive the characteristic polynomial, consider the AR(p) equation in operator form: \Phi(B) X_t = \epsilon_t. Substituting the lag operator B with a complex variable z yields the polynomial \phi(z), which can be viewed through the lens of the transfer function of the process. This approach transforms the difference equation into an algebraic one, where \phi(z) serves as the denominator of the transfer function 1/\phi(z), enabling the representation of the AR process as an infinite moving average via partial fraction expansion over the roots. Alternatively, generating functions can be used to express the moments of the process, with the characteristic polynomial appearing in the denominator of the generating function for the autocovariances. The roots of \phi(z) = 0 provide key insights into the dynamics of the AR process. If the roots are complex conjugates, they introduce oscillatory components in the time series behavior, with the argument of the roots determining the frequency of oscillation. The modulus of the roots governs persistence: roots with smaller modulus (closer to but outside the unit circle) imply slower decay of shocks and longer-lasting effects, while larger moduli (farther from the unit circle) indicate faster decay. For stationarity, all roots must lie outside the unit circle in the complex plane, a condition that ensures the infinite MA representation converges. Pure AR models are always invertible. Stationary AR models can be expressed as a convergent infinite moving average (MA(∞)) representation without additional constraints beyond the root locations. Graphically, the roots are plotted in the complex plane, where the unit circle serves as a boundary: roots on or inside the circle indicate non-stationarity, while roots outside it confirm stationarity, visually highlighting oscillatory patterns via the imaginary axis and persistence via radial distance.
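For instance, a short numpy sketch (with illustrative AR(2) coefficients, not drawn from the text's sources) can extract the modulus and argument of the characteristic roots, from which the damping rate and oscillation frequency can be read off.
python
import numpy as np

# Illustrative AR(2) coefficients with complex characteristic roots
phi1, phi2 = 0.4, -0.8

# Roots of phi(z) = 1 - phi1*z - phi2*z^2 (highest-degree coefficient first)
roots = np.roots([-phi2, -phi1, 1.0])

modulus = np.abs(roots)      # > 1 for stationarity; closer to 1 means more persistence
argument = np.angle(roots)   # nonzero imaginary part implies oscillatory dynamics

print(modulus)               # approx [1.118, 1.118]
print(np.abs(argument))      # oscillation frequency in radians per period, approx 1.34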

Intertemporal Effects of Shocks

In an autoregressive (AR) model, a shock is conceptualized as a one-time impulse \varepsilon_t to the error term, representing an unanticipated disturbance at time t. This disturbance influences the future values of the process X_{t+k} for k > 0 through the model's recursive structure. The marginal effect of such a shock is given by \frac{\partial X_{t+k}}{\partial \varepsilon_t} = \psi_k, where \psi_k denotes the k-th dynamic multiplier, obtained by recursively applying the AR coefficients (for an AR(p) model, \psi_0 = 1 and \psi_k = \sum_{i=1}^{\min(k,p)} \phi_i \psi_{k-i} for k > 0). The persistence of these intertemporal effects depends on the stationarity of the AR process. In a stationary AR model, where all roots of the characteristic polynomial lie outside the unit circle, the effects of a shock decay geometrically over time, ensuring that the influence diminishes as k increases (e.g., in an AR(1) process with coefficient \phi_1 = \phi, |\phi| < 1, the effect on X_{t+k} is \phi^k \varepsilon_t). Conversely, in non-stationary cases, such as when a unit root is present (e.g., \phi = 1 in AR(1)), the effects accumulate rather than decay, leading to permanent shifts in the level of the series. A key aspect of shock propagation is the variance decomposition, which quantifies how past shocks contribute to the current unconditional variance of the process. For a stationary AR model, the variance is \operatorname{Var}(X_t) = \sigma_\varepsilon^2 \sum_{k=0}^\infty \psi_k^2, where each term \psi_k^2 \sigma_\varepsilon^2 represents the contribution from a shock k periods in the past; this infinite sum converges due to geometric decay. In the AR(1) case, it simplifies to \operatorname{Var}(X_t) = \frac{\sigma_\varepsilon^2}{1 - \phi^2}, illustrating how earlier shocks have exponentially smaller contributions relative to recent ones. In econometric applications, particularly in macroeconomics, these shocks are often interpreted as exogenous events such as policy changes, supply disruptions, or demand fluctuations that propagate through economic variables modeled via AR processes. For instance, an unanticipated monetary policy tightening can be viewed as a negative shock whose intertemporal effects trace the subsequent adjustments in output or inflation, with persistence reflecting the economy's inertial response to such interventions.
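As a quick numerical check of the variance decomposition (a sketch assuming an AR(1) with an arbitrary coefficient), the truncated sum \sigma_\varepsilon^2 \sum_k \psi_k^2 can be compared against the closed form \sigma_\varepsilon^2 / (1 - \phi^2).
python
import numpy as np

phi = 0.9          # illustrative AR(1) coefficient
sigma2 = 1.0       # innovation variance
K = 200            # truncation horizon for the infinite sum

psi = phi ** np.arange(K)                  # dynamic multipliers psi_k = phi^k
contributions = sigma2 * psi ** 2          # contribution of the shock k periods back

print(contributions[:5])                   # recent shocks dominate
print(contributions.sum())                 # approx sigma2 / (1 - phi^2) = 5.263...
print(sigma2 / (1 - phi ** 2))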

Impulse Response Function

In autoregressive (AR) models, the impulse response function (IRF) quantifies the dynamic impact of a unit shock to the innovation term \varepsilon_t on the future values of the process X_{t+k}. It is formally defined as the sequence of coefficients \psi_k = \frac{\partial X_{t+k}}{\partial \varepsilon_t} for k = 0, 1, 2, \dots, with the initial condition \psi_0 = 1 reflecting the contemporaneous effect of the shock. These IRF coefficients arise from the moving average representation of the stationary AR process, X_t = \sum_{k=0}^\infty \psi_k \varepsilon_{t-k}, and satisfy a linear recurrence relation derived from the AR structure. For an AR(p) model, they are computed recursively as \psi_k = \sum_{i=1}^p \phi_i \psi_{k-i} for k > 0, with \psi_k = 0 for k < 0. This recursion allows efficient numerical calculation of the IRF sequence, starting from the known AR parameters \phi_1, \dots, \phi_p. For the simple AR(1) model, X_t = \phi X_{t-1} + \varepsilon_t, the IRF has the closed-form expression \psi_k = \phi^k for k \geq 0. Under the stationarity condition |\phi| < 1, this exhibits geometric decay, with the shock's influence diminishing exponentially over time. In practice, IRFs for AR(p) models with p > 1 are visualized through plots tracing \psi_k against k, revealing patterns such as monotonic decay, overshooting (where the response temporarily exceeds its long-run level), or oscillatory behavior induced by complex roots of the model's characteristic polynomial. For instance, roots near the unit circle can prolong the shock's persistence, while purely real roots yield smoother responses. To account for estimation uncertainty, confidence bands are constructed around estimated IRFs using methods like asymptotic normal approximations, which rely on the variance-covariance matrix of the estimated AR parameters, or the bootstrap, which resamples residuals to simulate the sampling distribution of the responses. These bands widen with the forecast horizon k and are essential for inference on shock persistence.
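A minimal sketch of a residual-bootstrap band for an estimated AR(1) IRF is given below (the coefficient, sample size, and number of replications are arbitrary choices; statsmodels' AutoReg is used only as a convenient fitting routine, not as the specific procedure described in the sources).
python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)

# Simulate an AR(1) with phi = 0.7 (illustrative)
n, phi = 300, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Point estimate of the IRF: psi_k = phi_hat^k
fit = AutoReg(y, lags=1).fit()
phi_hat = fit.params[1]
horizons = np.arange(11)
irf_hat = phi_hat ** horizons

# Residual bootstrap for pointwise 95% confidence bands
resid = fit.resid
boot_irfs = []
for _ in range(500):
    e = rng.choice(resid, size=n, replace=True)
    yb = np.zeros(n)
    for t in range(1, n):
        yb[t] = phi_hat * yb[t - 1] + e[t]
    phi_b = AutoReg(yb, lags=1).fit().params[1]
    boot_irfs.append(phi_b ** horizons)
lower, upper = np.percentile(boot_irfs, [2.5, 97.5], axis=0)

print(np.c_[horizons, lower, irf_hat, upper])  # horizon, lower band, point IRF, upper band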

Specific Examples

AR(1) Process

The AR(1) process is the first-order autoregressive model, capturing dependence of the current observation on only the immediate past value. It is expressed as X_t = c + \phi X_{t-1} + \varepsilon_t, where c denotes a constant term, \phi is the autoregressive coefficient satisfying |\phi| < 1 for stationarity, and \varepsilon_t is white noise with zero mean and finite variance \sigma^2 > 0. Under the stationarity condition |\phi| < 1, the unconditional mean of the process is \mu = \frac{c}{1 - \phi}. The unconditional variance is \gamma_0 = \frac{\sigma^2}{1 - \phi^2}. The autocorrelation function of the stationary AR(1) process exhibits exponential decay, given by \rho_k = \phi^{|k|} for lag k \geq 0. This geometric decline reflects the diminishing influence of past shocks over time, with the rate determined by |\phi|. An equivalent representation centers the process around its mean, yielding the mean-deviation form X_t - \mu = \phi (X_{t-1} - \mu) + \varepsilon_t. This formulation highlights the mean-reverting dynamics when |\phi| < 1, as deviations from \mu are scaled by \phi before adding new noise. Simulations of AR(1) sample paths reveal behavioral contrasts across \phi values. For \phi = 0.2, paths show rapid mean reversion, with quick damping of shocks and low persistence. At \phi = 0.9, paths display high persistence, wandering slowly before reverting, mimicking long-memory patterns. Negative \phi, such as \phi = -0.8, produces oscillatory paths alternating around the mean. In the unit root case where \phi = 1, the AR(1) process becomes a random walk (with drift if c \neq 0), X_t = c + X_{t-1} + \varepsilon_t, which lacks stationarity as its variance grows indefinitely with time.
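The behavioral contrasts described above can be reproduced with a few lines of numpy (a sketch; the seed and sample size are arbitrary choices).
python
import numpy as np

rng = np.random.default_rng(1)
n = 200

def simulate_ar1(phi, n, rng):
    """Simulate X_t = phi * X_{t-1} + eps_t with standard normal innovations."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

paths = {phi: simulate_ar1(phi, n, rng) for phi in (0.2, 0.9, -0.8)}

# Sample lag-1 autocorrelations track the true phi values
for phi, x in paths.items():
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(f"phi = {phi:+.1f}: sample lag-1 autocorrelation = {r1:.2f}")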

AR(2) Process

The AR(2) process extends the autoregressive framework to second-order dependence, defined by the equation X_t = c + \phi_1 X_{t-1} + \phi_2 X_{t-2} + \varepsilon_t, where c is a constant, \phi_1 and \phi_2 are the autoregressive parameters, and \varepsilon_t is white noise with mean zero and finite variance \sigma^2. This formulation allows the current value X_t to depend linearly on the two preceding observations, capturing more complex temporal dynamics than the first-order case. Stationarity of the AR(2) process requires that the roots of the characteristic equation 1 - \phi_1 z - \phi_2 z^2 = 0 lie outside the unit circle. Equivalently, this condition holds if the parameters satisfy |\phi_2| < 1, \phi_1 < 1 - \phi_2, and \phi_1 > \phi_2 - 1, defining a triangular region in the (\phi_1, \phi_2) parameter space. Under these constraints, the process has a time-invariant mean \mu = c / (1 - \phi_1 - \phi_2) and finite variance. The autocorrelation function (ACF) of a stationary AR(2) process decays gradually to zero, following the recursive relation \rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2} for k > 2, with initial values \rho_1 = \phi_1 / (1 - \phi_2) and \rho_2 = \phi_1 \rho_1 + \phi_2. If the characteristic roots are complex conjugates, which occurs when the discriminant \phi_1^2 + 4 \phi_2 < 0, the ACF exhibits damped sine wave oscillations, reflecting pseudo-periodic behavior. In contrast, real roots produce a monotonic exponential decay in the ACF. The partial autocorrelation function (PACF) for an AR(2) process truncates after lag 2, with \phi_{k,k} = 0 for all k > 2, providing a diagnostic signature for model identification. This sharp cutoff distinguishes AR(2) from higher-order processes, where the PACF would decay more slowly. The distinction between real and complex characteristic roots fundamentally shapes the process's dynamics: real roots yield smooth, non-oscillatory persistence, while complex roots introduce cyclic patterns with a pseudo-period of approximately 2\pi / \cos^{-1}(\phi_1 / (2 \sqrt{-\phi_2})). Simulated AR(2) series with complex roots, such as those satisfying the stationarity triangle with negative \phi_2, demonstrate this through visibly damped oscillatory trajectories, highlighting behaviors like stochastic cycles absent in lower-order models.
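A short numpy sketch (with illustrative parameters satisfying the stationarity triangle and \phi_2 < 0) evaluates the ACF recursion and the pseudo-period formula quoted above.
python
import numpy as np

phi1, phi2 = 1.0, -0.5         # illustrative; phi1^2 + 4*phi2 < 0, so the roots are complex

# ACF via the recursion rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}
K = 12
rho = np.zeros(K)
rho[0] = 1.0
rho[1] = phi1 / (1 - phi2)
for k in range(2, K):
    rho[k] = phi1 * rho[k - 1] + phi2 * rho[k - 2]
print(np.round(rho, 3))        # damped, sign-changing pattern

# Pseudo-period of the oscillation
period = 2 * np.pi / np.arccos(phi1 / (2 * np.sqrt(-phi2)))
print(period)                  # 8.0 observations per cycle for these values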

Estimation Methods

Choosing the Lag Order

Selecting the appropriate order p in an autoregressive AR(p) model is crucial for balancing model fit and parsimony, as an overly low p may underfit the data by omitting relevant dynamics, while a high p risks capturing noise rather than true structure. Methods for lag selection generally involve graphical tools, statistical criteria, testing procedures, and validation techniques, each providing complementary insights into the underlying serial dependence. One foundational approach relies on the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the series. The ACF measures the linear correlation between observations at different lags, often decaying gradually for AR processes, while the PACF isolates the direct correlation at lag k after removing the effects of earlier lags. For an AR(p) model, the theoretical PACF cuts off to zero after lag p, with significant sample PACF values (typically exceeding bounds of \pm 2/\sqrt{n}, where n is the sample size) at lags 1 through p and insignificance beyond. Practitioners plot the PACF and identify the lag where spikes become negligible, suggesting that order as a candidate p. Information criteria offer a quantitative means to penalize model complexity while rewarding goodness of fit, commonly applied after fitting candidate AR models via maximum likelihood or least squares. The Akaike information criterion (AIC) is defined as \text{AIC} = -2 \log L + 2k, where L is the maximized likelihood and k = p + 1 accounts for the intercept and p autoregressive coefficients; the order \hat{p} minimizing AIC is selected. Similarly, the Bayesian information criterion (BIC), which imposes a stronger penalty on complexity, is \text{BIC} = -2 \log L + k \log n, with n the sample size, favoring more parsimonious models and often yielding lower \hat{p} than AIC, especially in finite samples. Both criteria are computed sequentially for increasing p until a minimum is reached, though BIC's consistency property makes it preferable when recovering the true order is of interest. Hypothesis testing provides a formal sequential framework for lag inclusion, starting from a baseline model and adding lags until evidence of significance wanes. Sequential t-tests assess the individual significance of the highest lag coefficient in an AR(p) model against zero, using standard errors from OLS estimation; if it is insignificant (e.g., at the 5% level), reduce p by one and retest. Alternatively, F-tests evaluate the joint significance of all additional lags from AR(p-1) to AR(p), equivalent to testing restrictions on coefficients; rejection supports retaining the higher order. These "testing up" or "testing down" procedures guard against arbitrary choices but require assumptions like serially uncorrelated errors. Cross-validation evaluates candidate orders by their out-of-sample predictive performance, partitioning the series into training and holdout sets while preserving temporal order to avoid lookahead bias. For AR models, one computes the mean absolute prediction error or root mean squared error (RMSE) of forecasts on the holdout set using models fitted to the training data; the p minimizing this error is chosen. K-fold variants (e.g., 10-fold) are valid when residuals are uncorrelated, outperforming in-sample metrics, but fail when residuals are serially correlated, as in underfit models; residual diagnostics like the Ljung-Box test are therefore essential post-selection. A key concern with high lag orders is overfitting, where the model captures idiosyncratic noise in the sample, leading to inflated in-sample fit but poor generalization and forecast inaccuracy; criteria and testing mitigate this by penalizing excess parameters, as higher p increases estimation variance without a proportional reduction in bias.
As a practical starting point, especially for annual data with moderate sample sizes, one may initially consider lag orders up to 8-10 before applying formal selection, ensuring computational feasibility and alignment with typical series lengths.
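In practice, the information-criterion comparison can be automated. The sketch below (using statsmodels' AutoReg on simulated data; the true order 2 and the coefficient values are assumptions of the example) fits candidate orders and reports the AIC and BIC minimizers.
python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)

# Simulate an AR(2) so the "true" order is known
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

# Fit candidate orders and collect information criteria
candidates = range(1, 11)
aic = {p: AutoReg(y, lags=p).fit().aic for p in candidates}
bic = {p: AutoReg(y, lags=p).fit().bic for p in candidates}

print("AIC choice:", min(aic, key=aic.get))  # typically 2
print("BIC choice:", min(bic, key=bic.get))  # typically 2; BIC penalizes extra lags more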

Yule-Walker Equations

The Yule-Walker equations offer a moment-based approach to estimate the coefficients of a stationary autoregressive process of order p, denoted AR(p), by relating the model's parameters to its autocovariance function. Introduced in the context of analyzing periodicities in time series, these equations stem from the foundational work on autoregressive representations. Consider the AR(p) model X_t - \sum_{i=1}^p \phi_i X_{t-i} = \epsilon_t, where \{\epsilon_t\} is white noise with mean zero and variance \sigma^2 > 0. To derive the equations, multiply both sides by X_{t-k} for k \geq 1 and take expectations, yielding \gamma_k = \sum_{i=1}^p \phi_i \gamma_{k-i}, \quad k = 1, 2, \dots, p, where \gamma_k = \mathrm{Cov}(X_t, X_{t-k}) is the autocovariance function, which satisfies \gamma_{-k} = \gamma_k and \gamma_0 = \mathrm{Var}(X_t) < \infty under stationarity. For k=0, the equation becomes \gamma_0 = \sum_{i=1}^p \phi_i \gamma_i + \sigma^2. These relations form a system of linear equations that links the AR coefficients \phi_i directly to the autocovariances. In matrix notation, the system for k = 1, \dots, p is expressed as \boldsymbol{\Gamma} \boldsymbol{\phi} = \boldsymbol{\gamma}, where \boldsymbol{\phi} = (\phi_1, \dots, \phi_p)^\top, \boldsymbol{\gamma} = (\gamma_1, \dots, \gamma_p)^\top, and \boldsymbol{\Gamma} is the p \times p symmetric Toeplitz autocovariance matrix \boldsymbol{\Gamma} = \begin{pmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{p-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{p-1} & \gamma_{p-2} & \cdots & \gamma_0 \end{pmatrix}. The positive definiteness of \boldsymbol{\Gamma} under the stationarity condition ensures a unique solution \boldsymbol{\phi} = \boldsymbol{\Gamma}^{-1} \boldsymbol{\gamma}. For estimation from a sample \{X_1, \dots, X_n\}, replace the population autocovariances \gamma_k with sample estimates \hat{\gamma}_k = \frac{1}{n} \sum_{t=1}^{n-|k|} (X_t - \bar{X})(X_{t+|k|} - \bar{X}), \quad k = 0, 1, \dots, p, where \bar{X} = n^{-1} \sum_{t=1}^n X_t. Substituting into the matrix form gives the Yule-Walker estimator \hat{\boldsymbol{\phi}} = \hat{\boldsymbol{\Gamma}}^{-1} \hat{\boldsymbol{\gamma}}, from which the noise variance is estimated as \hat{\sigma}^2 = \hat{\gamma}_0 - \hat{\boldsymbol{\phi}}^\top \hat{\boldsymbol{\gamma}}. This method assumes the lag order p is known. Under the stationarity assumption, the Yule-Walker estimator is consistent, with \hat{\boldsymbol{\phi}} \to_p \boldsymbol{\phi} as n \to \infty, and asymptotically normal, satisfying \sqrt{n} (\hat{\boldsymbol{\phi}} - \boldsymbol{\phi}) \to_d \mathcal{N}(\mathbf{0}, \sigma^2 \boldsymbol{\Gamma}^{-1}). However, the estimator is biased in finite samples, particularly for small n, due to the nonlinearity in the sample autocovariances and the inversion of the estimated Toeplitz matrix.
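A minimal numpy implementation of this recipe (a sketch on simulated AR(2) data; scipy's toeplitz helper is used only to assemble \hat{\boldsymbol{\Gamma}}, and the coefficient values are illustrative) follows.
python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)

# Simulate an AR(2) with known coefficients
n, phi_true = 2000, np.array([0.6, -0.3])
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi_true @ x[t - 2:t][::-1] + rng.standard_normal()

def sample_autocov(x, max_lag):
    """Biased sample autocovariances gamma_hat_0, ..., gamma_hat_maxlag."""
    xc = x - x.mean()
    return np.array([np.dot(xc[:len(x) - k], xc[k:]) / len(x) for k in range(max_lag + 1)])

p = 2
gamma = sample_autocov(x, p)
Gamma = toeplitz(gamma[:p])                       # p x p matrix built from gamma_0, ..., gamma_{p-1}
phi_hat = np.linalg.solve(Gamma, gamma[1:p + 1])  # Yule-Walker estimator
sigma2_hat = gamma[0] - phi_hat @ gamma[1:p + 1]  # innovation variance estimate

print(phi_hat)      # close to [0.6, -0.3]
print(sigma2_hat)   # close to the true innovation variance 1.0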

Maximum Likelihood Estimation

Maximum likelihood estimation (MLE) seeks to estimate the parameters of an autoregressive (AR) model by maximizing the likelihood of observing the given time series data under the model assumptions. For AR models, this approach typically assumes that the innovations are independent and identically distributed as Gaussian white noise with mean zero and variance \sigma^2. The parameters to estimate include the autoregressive coefficients \boldsymbol{\phi} = (\phi_1, \dots, \phi_p)^\top and the innovation variance \sigma^2. Under the Gaussian assumption, the conditional likelihood function for an AR(p) process observed as X_1, \dots, X_n is L(\boldsymbol{\phi}, \sigma^2) = \prod_{t=p+1}^n \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left( -\frac{(X_t - \mathbf{x}_t' \boldsymbol{\phi})^2}{2\sigma^2} \right), where \mathbf{x}_t = (X_{t-1}, \dots, X_{t-p})^\top. This expression conditions on the initial p observations and treats the process as a sequence of conditional normals starting from t = p+1. The conditional MLE, obtained by maximizing this likelihood (or equivalently, its logarithm), treats the first p observations as fixed values, ignoring their contribution to the joint density. This conditional approach is computationally straightforward and equivalent to ordinary least squares regression of X_t on the lagged values \mathbf{x}_t for t = p+1, \dots, n, yielding estimates \hat{\boldsymbol{\phi}} and \hat{\sigma}^2 = n^{-1} \sum_{t=p+1}^n (X_t - \mathbf{x}_t' \hat{\boldsymbol{\phi}})^2. The conditional MLE has a closed-form solution via the normal equations for any p \geq 1. In contrast, the unconditional MLE incorporates the full joint likelihood by accounting for the initial conditions through the stationary distribution of the process or via prediction errors (innovations). For a stationary Gaussian AR(p), the observations follow a multivariate normal distribution with mean zero and covariance matrix determined by the parameters, leading to an exact likelihood that includes the density of the first p values. This can be computed efficiently using the prediction error decomposition, where the log-likelihood is expressed as a sum of one-step-ahead forecast errors and their conditional variances, often implemented via the Kalman filter for higher-order models. The unconditional MLE is asymptotically more efficient than the conditional version, especially for short samples where initial observations matter, and typically requires numerical optimization methods such as Newton-Raphson, which update estimates using the score vector and observed information matrix derived from the log-likelihood. Compared to the Yule-Walker method, MLE offers greater statistical efficiency when the Gaussian assumption holds, as it fully utilizes the distributional information rather than relying solely on sample autocorrelations. The asymptotic covariance matrix of the MLE can be estimated from the inverse of the observed Hessian of the log-likelihood (negative second derivatives), providing standard errors for inference. Hypothesis tests, such as Wald tests for individual coefficients or likelihood ratio tests for comparing models of different orders, rely on these asymptotic normality properties under standard regularity conditions.
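The two variants can be compared directly in statsmodels (a sketch with arbitrary simulation settings: AutoReg's conditional fit plays the role of the conditional MLE, while the ARIMA class with order (1, 0, 0) maximizes the exact Gaussian likelihood through a state-space representation).
python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate an AR(1) with phi = 0.8 and no constant
n, phi = 200, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Conditional estimate (equivalent to OLS on the lagged values)
cond = AutoReg(y, lags=1, trend="n").fit()
print("conditional:", cond.params[0])

# Exact (unconditional) Gaussian MLE, including the stationary density of the first value
exact = ARIMA(y, order=(1, 0, 0), trend="n").fit()
print("exact MLE:  ", exact.params[0])   # the two estimates differ slightly in short samples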

Spectral Characteristics

Power Spectral Density

The power spectral density (PSD) of a stationary autoregressive (AR) process provides a frequency-domain representation of its second-order properties, quantifying how the variance is distributed across different frequencies. For an AR(p) process defined by X_t = \sum_{j=1}^p \phi_j X_{t-j} + \epsilon_t, where \{\epsilon_t\} is white noise with variance \sigma^2 and the characteristic polynomial \phi(z) = 1 - \sum_{j=1}^p \phi_j z^j has roots outside the unit circle ensuring stationarity, the PSD is given by f(\omega) = \frac{\sigma^2}{2\pi} \left| \phi(e^{-i\omega}) \right|^{-2}, for frequencies \omega \in [-\pi, \pi]. This formula arises from the infinite moving average (MA) representation of the AR process, where the transfer function in the frequency domain inverts the AR polynomial. The PSD relates directly to the autocovariance function \{\gamma_k\} of the process via the inverse Fourier transform: \gamma_k = \int_{-\pi}^{\pi} f(\omega) e^{i k \omega} \, d\omega, with \gamma_0 = \int_{-\pi}^{\pi} f(\omega) \, d\omega representing the total variance. Conversely, the PSD is the Fourier transform of the autocovariance sequence, bridging time-domain dependence to cyclic components in the frequency domain. In interpretation, the PSD highlights dominant periodicities in the process: peaks at specific \omega indicate frequencies contributing most to the variance, such as cycles near zero frequency for processes with strong short-term dependence. For persistent AR models (e.g., roots of \phi(z) near the unit circle), the PSD concentrates power at low frequencies, reflecting long-memory-like behavior in the time domain. The sample periodogram, an estimator of the PSD formed from the discrete Fourier transform of the observed series, is asymptotically unbiased for f(\omega) at fixed \omega away from 0 and \pi, but it is not a consistent estimator; smoothing across neighboring frequencies underpins nonparametric spectral estimation, while fitted AR models offer parametric alternatives that yield smoother density approximations. The PSD can equivalently be written in terms of the causal MA(\infty) representation X_t = \sum_{j=0}^\infty \psi_j \epsilon_{t-j}, where \psi(z) = 1 / \phi(z), yielding f(\omega) = (\sigma^2 / 2\pi) |\psi(e^{-i\omega})|^2. This "whitening" perspective underscores how applying the AR filter \phi(B) removes serial correlation, flattening the spectrum toward that of white noise.
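The closed-form density is easy to evaluate numerically. The sketch below (numpy only; illustrative AR(2) coefficients) computes f(\omega) on a frequency grid and checks that it integrates to the process variance obtained from the \psi-weights.
python
import numpy as np

phi = np.array([0.4, -0.8])      # illustrative AR(2) coefficients
sigma2 = 1.0

# Spectral density f(w) = sigma^2/(2*pi) * |1 - phi_1 e^{-iw} - phi_2 e^{-2iw}|^{-2}
w = np.linspace(-np.pi, np.pi, 4001)
phi_e = 1 - phi[0] * np.exp(-1j * w) - phi[1] * np.exp(-2j * w)
f = sigma2 / (2 * np.pi) / np.abs(phi_e) ** 2

# Total variance: the integral of f over [-pi, pi] should match sigma^2 * sum_j psi_j^2
psi = np.zeros(200)
psi[0] = 1.0
for j in range(1, len(psi)):
    psi[j] = phi[0] * psi[j - 1] + (phi[1] * psi[j - 2] if j >= 2 else 0.0)

print(np.trapz(f, w))            # numerical integral of the spectral density
print(sigma2 * np.sum(psi ** 2)) # both are approximately the process variance (about 2.9)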

Low-Order AR Spectra

The AR(0) model, equivalent to white noise, exhibits a flat power spectral density across all frequencies, given by f(\omega) = \frac{\sigma^2}{2\pi}, where \sigma^2 is the variance of the innovation process and \omega \in [-\pi, \pi]. This uniform spectrum reflects the absence of temporal dependence, with equal power distributed at every frequency, characteristic of uncorrelated noise. For the AR(1) model X_t = \phi X_{t-1} + \epsilon_t with |\phi| < 1, the power spectral density is f(\omega) = \frac{\sigma^2 / 2\pi}{1 + \phi^2 - 2\phi \cos \omega}, derived from the general form f(\omega) = \frac{\sigma^2 / 2\pi}{|1 - \phi e^{-i\omega}|^2}. When |\phi| is small (e.g., near 0), the spectrum is relatively flat, spreading power evenly similar to white noise but with slight modulation. As |\phi| approaches 1, power concentrates sharply at low frequencies (\omega \approx 0), indicating strong persistence and low-frequency dominance in the process. The AR(2) model X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \epsilon_t, stationary for roots of 1 - \phi_1 z - \phi_2 z^2 = 0 outside the unit circle, has power spectral density f(\omega) = \frac{\sigma^2 / 2\pi}{|1 - \phi_1 e^{-i\omega} - \phi_2 e^{-i 2\omega}|^2}. Complex conjugate roots produce spectral peaks at frequencies corresponding to the argument of the roots, reflecting oscillatory behavior with a dominant cycle length. For real roots, the spectrum may show broader concentration without distinct peaks. Frequency-domain plots of these low-order AR spectra illustrate parameter effects: AR(1) traces transition from near-flat (low \phi) to sharply peaked at zero frequency (high \phi); AR(2) plots reveal single or bimodal peaks for varying \phi_1, \phi_2, such as concentration around \omega \approx 1.35 (cycle of about 4-5 units) for \phi_1 = 0.4, \phi_2 = -0.8. These visualizations highlight how AR parameters shape power distribution, aiding model diagnostics. In model identification, AR spectra typically feature sharp peaks from AR poles, contrasting with smoother MA spectra that exhibit dips from zeros, facilitating distinction between AR and MA processes via observed frequency patterns.
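The quoted peak location can be verified numerically; this sketch evaluates the AR(2) density above on a frequency grid and locates its maximum.
python
import numpy as np

phi1, phi2, sigma2 = 0.4, -0.8, 1.0

w = np.linspace(0, np.pi, 10001)
denom = np.abs(1 - phi1 * np.exp(-1j * w) - phi2 * np.exp(-2j * w)) ** 2
f = sigma2 / (2 * np.pi) / denom

w_peak = w[np.argmax(f)]
print(w_peak)                  # approx 1.34 rad
print(2 * np.pi / w_peak)      # corresponding cycle length, approx 4.7 observations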

Forecasting and Applications

n-Step-Ahead Predictions

In autoregressive (AR) models, the one-step-ahead forecast for the next observation X_{t+1} is obtained by substituting the known past values into the model equation, yielding \hat{X}_{t+1} = c + \sum_{i=1}^p \phi_i X_{t+1-i}, where c is the constant term and \phi_i are the AR coefficients. This point forecast represents the conditional expectation of X_{t+1} given the observed data up to time t. For multi-step-ahead forecasts (h > 1), the recursive method is employed, where \hat{X}_{t+h} = c + \sum_{i=1}^p \phi_i \hat{X}_{t+h-i}, iteratively using previously computed forecasts in place of unavailable future observations. This approach accumulates uncertainty as the forecast horizon h increases, with forecast errors typically growing due to the compounding of prediction errors. In the special case of an AR(1) model, a closed-form expression simplifies the forecast: \hat{X}_{t+h} = \mu + \phi^h (X_t - \mu), where \mu = c / (1 - \phi) is the process mean. Prediction intervals account for this uncertainty by incorporating the forecast error variance. A (1 - \alpha) \times 100\% interval is given by \hat{X}_{t+h} \pm z_{\alpha/2} \hat{\sigma}_{t+h}, where z_{\alpha/2} is the appropriate quantile from the standard normal distribution and \hat{\sigma}_{t+h}^2 = \sigma^2 \left(1 + \sum_{j=1}^{h-1} \psi_j^2 \right) is the estimated forecast error variance, with \sigma^2 the innovation variance and \psi_j the coefficients from the infinite moving average representation of the AR process. For stationary AR processes (where all roots of the characteristic polynomial lie outside the unit circle), forecasts converge to the unconditional mean \mu as h \to \infty, reflecting the mean-reverting behavior of the series.
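The recursion and the AR(1) closed form can be checked against each other in a few lines (a sketch with illustrative parameter values; the last observed value is arbitrary).
python
import numpy as np

c, phi, sigma2 = 1.0, 0.8, 1.0      # illustrative AR(1) parameters
mu = c / (1 - phi)                   # unconditional mean = 5.0
x_last = 9.0                         # arbitrary last observed value
H = 10

# Recursive point forecasts: x_hat_{t+h} = c + phi * x_hat_{t+h-1}
fc = np.empty(H)
prev = x_last
for h in range(H):
    prev = c + phi * prev
    fc[h] = prev

closed_form = mu + phi ** np.arange(1, H + 1) * (x_last - mu)
print(np.allclose(fc, closed_form))  # True: both converge toward mu = 5

# 95% prediction-interval half-widths grow with the horizon via the psi-weights (psi_j = phi^j)
var_h = sigma2 * np.array([np.sum(phi ** (2 * np.arange(h + 1))) for h in range(H)])
print(1.96 * np.sqrt(var_h))         # half-widths widen toward the stationary bound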

Practical Implementations

Autoregressive (AR) models are commonly implemented in statistical software for time series analysis, enabling practitioners to estimate parameters, generate forecasts, and visualize model dynamics efficiently. In the R programming language, the ar() function from the base stats package provides tools for fitting AR models of specified order using either the Yule-Walker equations or maximum likelihood estimation (MLE). For forecasting applications, the forecast package extends this functionality by integrating AR models with prediction intervals and automated order selection via functions like auto.arima(), which can fit pure AR processes as a special case. These implementations are widely used in econometric and financial time series workflows due to R's robust ecosystem for statistical computing. Python offers accessible AR model fitting through the statsmodels library, where the AutoReg class in statsmodels.tsa.ar_model handles estimation for AR(p) processes, supporting conditional least squares (equivalent to conditional MLE under Gaussian errors). Data preparation, such as handling time-indexed series and differencing for stationarity, is typically done using pandas, which provides DataFrame methods like asfreq() and interpolate() for aligning and filling timestamps. This combination makes Python suitable for integrating AR models into machine learning pipelines, such as those in scikit-learn extensions or custom neural network hybrids. MATLAB's Econometrics Toolbox includes the estimate method with the arima model object (restricted to a pure AR specification) for fitting univariate AR models, estimating coefficients via least squares or MLE. For more general linear systems, the armax function in the System Identification Toolbox allows specification of AR components within ARMA or ARMAX frameworks, facilitating simulation and analysis. These tools are particularly valued in engineering and econometric contexts for their built-in support for multivariate extensions and graphical diagnostics. In Julia, AR models can be fitted using packages like TimeModels.jl, which supports ARIMA-type models (with differencing order 0 and moving average order 0 for pure AR), or by constructing lagged regressors and using the StatsModels.jl package for ordinary least squares estimation. Julia's just-in-time compilation enables high-performance computations, making it ideal for large-scale simulations. Practical examples illustrate these implementations. For AR(1) estimation in Python, the following code snippet fits a model to a simulated series:
python
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

# Simulated AR(1) data: y_t = 0.5 * y_{t-1} + epsilon
np.random.seed(42)
n = 100
y = np.zeros(n)
y[0] = np.random.normal()
for t in range(1, n):
    y[t] = 0.5 * y[t-1] + np.random.normal()
data = pd.Series(y)

# Fit AR(1)
model = AutoReg(data, lags=1)
results = model.fit()
print(results.summary())  # Displays coefficients, e.g., phi_1 ≈ 0.5
This yields parameter estimates close to the true value, with standard errors for inference. For impulse response function (IRF) plotting in R, which visualizes the dynamic response to a shock in an AR model, the following example fits an AR(1) with base R's arima() function and traces the IRF through the psi-weights returned by ARMAtoMA():
r
# Simulated AR(1) data: y_t = 0.5 * y_{t-1} + epsilon_t
set.seed(42)
y <- arima.sim(n = 100, model = list(ar = 0.5))

# Fit AR(1) by maximum likelihood
fit <- arima(y, order = c(1, 0, 0))

# IRF: psi-weights of the MA(infinity) representation (response to a unit shock)
psi <- ARMAtoMA(ar = coef(fit)["ar1"], ma = 0, lag.max = 10)
plot(0:10, c(1, psi), type = "h",
     xlab = "Horizon", ylab = "Response")  # decaying response, approximately phi^k
The IRF decays geometrically at rate φ, confirming model stability if |φ| < 1. Best practices for AR model implementation emphasize pre-testing for stationarity using the augmented Dickey-Fuller (ADF) test to ensure the series is integrated of order zero, as non-stationary data can lead to spurious regressions. In R, this is implemented via adf.test() in the tseries package; in Python, adfuller() from statsmodels.tsa.stattools. For handling missing values, common approaches include linear interpolation or forward/backward filling in pandas (data.interpolate(method='linear')) or R's na.approx() from the zoo package, followed by model refitting to avoid bias in coefficient estimates. These steps ensure reliable AR fitting, with diagnostics like residual checks (e.g., the Ljung-Box test) verifying model adequacy post-estimation.
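These diagnostics are available directly in statsmodels; the sketch below (a minimal, self-contained illustration that re-simulates an AR(1) series like the one in the Python example above) applies the ADF test before fitting and the Ljung-Box test to the residuals afterwards.
python
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox

# Simulate an AR(1) series of the same kind as in the example above
np.random.seed(42)
y = np.zeros(100)
for t in range(1, 100):
    y[t] = 0.5 * y[t - 1] + np.random.normal()
data = pd.Series(y)

# Stationarity pre-test: reject the unit-root null for stationary data
adf_stat, p_value, *_ = adfuller(data)
print(f"ADF statistic = {adf_stat:.2f}, p-value = {p_value:.3f}")

# Fit AR(1) and check residual whiteness (large p-values indicate an adequate fit)
results = AutoReg(data, lags=1).fit()
print(acorr_ljungbox(results.resid, lags=[10]))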
