Forecast error
Forecast error is the difference between an observed value and the corresponding value forecasted by a model, representing the unpredictable component of the data rather than any mistake in the prediction process.[1] In time series analysis, it is typically denoted as e_t = y_t - \hat{y}_{t|t-1}, where y_t is the actual value at time t and \hat{y}_{t|t-1} is the forecast made at time t-1 for time t.[1] This metric is fundamental for assessing the performance of forecasting models across fields such as economics, meteorology, and supply chain management, enabling practitioners to quantify inaccuracies and refine predictive methods.[2]
Common measures of forecast error aggregate these individual differences to evaluate overall accuracy, including scale-dependent metrics like mean absolute error (MAE), defined as the average of absolute errors \text{MAE} = \frac{1}{n} \sum |e_t|, and root mean squared error (RMSE), which penalizes larger errors more heavily via \text{RMSE} = \sqrt{\frac{1}{n} \sum e_t^2}.[1] Scale-independent alternatives, such as mean absolute scaled error (MASE), normalize errors relative to a naive benchmark to facilitate comparisons across datasets with varying scales.[1] These evaluations are performed on out-of-sample data to assess the model's generalization beyond the training period, and residual analysis is used to detect issues such as autocorrelation or heteroscedasticity in the errors.[3] While no forecasting method eliminates error entirely, owing to inherent stochasticity in real-world processes, minimizing forecast error through techniques such as ARIMA modeling and machine learning has driven advances in predictive reliability.[2]
Definition and Fundamentals
Definition
Forecast error is the difference between an observed value and the corresponding value predicted by a forecasting model.[3] In time series forecasting, it quantifies the deviation between realized outcomes and ex-ante predictions, capturing the inherently unpredictable elements of the data-generating process rather than flaws in model specification alone.[1] This measure is central to assessing predictive performance across fields such as economics, meteorology, and supply chain management.[4] For a one-step-ahead forecast at time t, the error is formally expressed as e_t = y_t - \hat{y}_{t|t-1}, where y_t denotes the actual observation and \hat{y}_{t|t-1} the forecast generated using information available up to time t-1.[3] Positive errors indicate under-forecasting (actual exceeds prediction), while negative errors signify over-forecasting.[5] For multi-step horizons, the formulation generalizes to e_{t+h} = y_{t+h} - \hat{y}_{t+h|t} with h > 1, where longer horizons typically amplify error magnitudes due to accumulating uncertainty.[1]
Individual forecast errors serve as building blocks for aggregate accuracy metrics, but analyzing them directly reveals patterns such as bias (systematic over- or under-prediction) or changes in error variance.[2] In statistical terms, under ideal conditions of correct model specification and no structural breaks, forecast errors should resemble white noise: uncorrelated, with zero mean and constant variance, which validates the model's adequacy.[3] Deviations from this behavior inform iterative model refinements.[6]
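The basic calculation can be illustrated with a minimal sketch in Python, assuming a naive forecast that simply carries the last observation forward; the series values are illustrative only.
```python
import numpy as np

# Observed series y_1, ..., y_T (illustrative values)
y = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])

# Naive one-step-ahead forecast: y_hat_{t|t-1} = y_{t-1}
y_hat = y[:-1]                 # forecasts for t = 2, ..., T
e = y[1:] - y_hat              # one-step errors e_t = y_t - y_hat_{t|t-1}

# h-step-ahead errors from a fixed origin t, using the same flat naive forecast
h, origin = 3, 4               # origin is the index of the forecast origin
e_h = y[origin + 1 : origin + 1 + h] - y[origin]   # e_{t+h} = y_{t+h} - y_hat_{t+h|t}

print("one-step errors:", e)
print(f"{h}-step errors from origin {origin}:", e_h)
```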
Distinction from Related Concepts
Forecast error, defined as the difference between an observed value and its forecast (typically e_t = y_t - \hat{y}_{t|t-1} in time series contexts, where the forecast \hat{y}_{t|t-1} relies solely on information available up to time t-1), is distinct from residual errors, which pertain to in-sample fitted values computed using the full dataset, including the current observation.[3] Residuals measure model fit on historical data, such as y_t - \hat{y}_t, and are used for parameter estimation and diagnostics, whereas forecast errors evaluate out-of-sample predictive performance, emphasizing the model's ability to anticipate unseen future values.[3] This out-of-sample focus makes forecast errors more indicative of real-world forecasting reliability, as in-sample residuals can overstate accuracy due to data leakage from using contemporaneous information.[3]
While often used interchangeably with prediction error, forecast error specifically refers to errors in prospective time series projections, where predictions are conditioned on past data only; broader prediction errors may include in-sample or non-temporal estimates.[7] For instance, in machine learning, prediction error encompasses both training-set residuals and test-set forecasts, but forecast error isolates the temporal dependency and multi-step horizons inherent to sequential data, such as y_{t+h} - \hat{y}_{t+h|t} for lead time h > 1.[3] This distinction is critical in domains like economics or meteorology, where forecasts must account for evolving uncertainties absent in static predictions.[8]
Forecast error also differs from estimation error, which quantifies inaccuracies in inferring model parameters (e.g., \hat{\theta} - \theta) from observed data, rather than in generating value predictions.[9] Estimation errors arise during model calibration and affect parameter stability, but they do not directly measure predictive deviation; instead, they propagate into forecast errors through suboptimal parameter choices.[10] In contrast, forecast error captures the end-to-end discrepancy between anticipated and realized outcomes, independent of whether parameters are precisely estimated, as even well-estimated models can produce large forecast errors due to structural misspecification or unforeseen shocks.[11]
Bias and variance, although they decompose the expected squared forecast error under the bias-variance tradeoff, are not synonymous with the raw forecast error itself.[12] Bias represents systematic over- or under-prediction (e.g., \mathbb{E}[e_t] \neq 0), reflecting model assumptions that fail to capture the true data-generating process, whereas variance measures the sensitivity of forecasts to fluctuations in the training data, leading to inconsistent errors across realizations.[13] The total expected error combines these with irreducible noise, but individual forecast errors e_t can be unbiased yet highly variable, or vice versa, underscoring that forecast error is the observable outcome rather than its averaged or decomposed parts.[14]
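The residual-versus-forecast-error distinction can be made concrete with a short sketch using only NumPy; the simulated AR(1) data, the least-squares coefficient estimate, and the train/test split are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate an AR(1) process y_t = 0.7 * y_{t-1} + eps_t (illustrative data)
n = 200
eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + eps[t]

train, test = y[:150], y[150:]

# Fit the AR(1) coefficient by least squares on the training sample only
phi = np.sum(train[1:] * train[:-1]) / np.sum(train[:-1] ** 2)

# In-sample residuals: fitted values use the same data used for estimation
residuals = train[1:] - phi * train[:-1]

# Out-of-sample one-step forecast errors: forecasts condition only on past values
prev = np.concatenate(([train[-1]], test[:-1]))  # y_{t-1} for each test point
forecast_errors = test - phi * prev

print("mean squared residual (in-sample):", np.mean(residuals ** 2))
print("mean squared forecast error (out-of-sample):", np.mean(forecast_errors ** 2))
```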
Measurement and Metrics
Common Error Metrics
Forecast error metrics evaluate the accuracy of point forecasts by aggregating forecast errors e_t = y_t - \hat{y}_{t|t-1} over a hold-out test set, ensuring assessment on unseen data.[1] These measures are categorized as scale-dependent, which vary with data units and suit single-series evaluation, or scale-independent, enabling cross-series comparisons.[1] Scale-dependent metrics include mean error (ME), mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE), while scale-independent ones encompass mean absolute percentage error (MAPE) and mean absolute scaled error (MASE).[1][15]
Mean error (ME) is computed as \mathrm{ME} = \frac{1}{h} \sum_{t=1}^{h} e_t, where h is the forecast horizon, providing a simple bias indicator; a value of zero implies unbiased forecasts on average.[1] Mean absolute error (MAE) equals \mathrm{MAE} = \frac{1}{h} \sum_{t=1}^{h} |e_t|, offering interpretability in the data's original units and greater robustness to outliers than squared variants.[1] Mean squared error (MSE) is \mathrm{MSE} = \frac{1}{h} \sum_{t=1}^{h} e_t^2, emphasizing larger deviations through quadratic penalization.[1] Root mean squared error (RMSE), the square root of MSE, \mathrm{RMSE} = \sqrt{\frac{1}{h} \sum_{t=1}^{h} e_t^2}, retains the data's scale while amplifying the impact of substantial errors relative to MAE.[1]
Scale-independent metrics address the comparability limitations of scale-dependent ones. Mean absolute percentage error (MAPE) is \mathrm{MAPE} = \frac{1}{h} \sum_{t=1}^{h} 100 \frac{|e_t|}{|y_t|}, facilitating unit-free assessments but becoming undefined or extreme when actual values y_t are zero or near zero, and exhibiting a bias that favors under-forecasts.[1] Mean absolute scaled error (MASE) scales errors by the in-sample errors of a naive benchmark: \mathrm{MASE} = \frac{1}{h} \sum_{t=1}^{h} \frac{|e_t|}{\frac{1}{T-1} \sum_{j=2}^{T} |y_j - y_{j-1}|} for non-seasonal data (with T training observations), or using seasonal differences for periodic series, yielding values below 1 when the forecasts outperform the benchmark.[1][15] MASE is favored for its robustness across scales, avoidance of division-by-zero issues, and empirical stability in comparisons, as demonstrated in analyses of datasets such as Australian beer production, where it reliably quantifies improvements over naive methods.[15]
| Metric | Formula | Key Properties |
|---|---|---|
| ME | $\frac{1}{h} \sum e_t$ | Scale-dependent; detects bias[1] |
| MAE | $\frac{1}{h} \sum \lvert e_t \rvert$ | Scale-dependent; robust to outliers[1] |
| RMSE | $\sqrt{\frac{1}{h} \sum e_t^2}$ | Scale-dependent; error-magnitude sensitive[1] |
| MAPE | $\frac{1}{h} \sum 100 \frac{\lvert e_t \rvert}{\lvert y_t \rvert}$ | Scale-independent; undefined for zero actuals[1] |
| MASE | $\frac{\frac{1}{h} \sum \lvert e_t \rvert}{\frac{1}{T-1} \sum_{j=2}^{T} \lvert y_j - y_{j-1} \rvert}$ | Scale-independent; values below 1 outperform the naive benchmark[1][15] |
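As a worked illustration of these formulas, the following sketch (plain NumPy; the series values and forecasts are illustrative, and the MASE scaling uses the in-sample one-step naive benchmark) computes each metric on a hold-out set.
```python
import numpy as np

def forecast_metrics(y_train, y_test, y_pred):
    """Compute common point-forecast accuracy metrics on a hold-out set."""
    e = y_test - y_pred                        # forecast errors e_t
    me = np.mean(e)                            # mean error (bias indicator)
    mae = np.mean(np.abs(e))                   # mean absolute error
    mse = np.mean(e ** 2)                      # mean squared error
    rmse = np.sqrt(mse)                        # root mean squared error
    mape = 100 * np.mean(np.abs(e) / np.abs(y_test))   # undefined if any y_test == 0
    # MASE: scale by in-sample one-step naive (random-walk) errors
    naive_scale = np.mean(np.abs(np.diff(y_train)))
    mase = mae / naive_scale
    return {"ME": me, "MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "MASE": mase}

# Illustrative data: a series split into training and test periods
y_train = np.array([445.0, 453.0, 462.0, 470.0, 478.0, 492.0, 501.0, 510.0])
y_test  = np.array([520.0, 528.0, 535.0, 545.0])
y_pred  = np.array([515.0, 522.0, 540.0, 538.0])   # hypothetical model forecasts

print(forecast_metrics(y_train, y_test, y_pred))
```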
Properties and Limitations of Metrics
Forecast error metrics, such as mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE), possess properties that determine their appropriateness for evaluating predictive models across varying data characteristics. MAE quantifies the average absolute deviation between forecasted and actual values, remaining in the original units of the data for direct interpretability, but it is scale-dependent, preventing straightforward comparisons between series with different magnitudes.[1] RMSE, derived as the square root of the mean squared error, amplifies the impact of larger deviations through quadratic penalization, making it suitable for applications that assume Gaussian-distributed errors and prioritize minimizing variance; however, it remains scale-dependent and is more sensitive to outliers than MAE.[1][16] MAPE offers scale independence by expressing errors as percentages relative to actual values, facilitating comparisons across disparate datasets, yet it assumes non-zero actuals and can distort assessments in series with low or variable means.[1]
A key limitation of scale-dependent metrics like MAE and RMSE is their inability to benchmark accuracy across datasets with differing units or scales without normalization, such as through scaled variants like mean absolute scaled error (MASE), which compares errors to a naive benchmark within the same series.[1] RMSE's emphasis on large errors can lead to over-optimization for outlier-heavy data, potentially misrepresenting overall performance in applications that require robustness, whereas MAE promotes median-aligned forecasts, which may underperform in mean-focused scenarios under normality assumptions.[16] MAPE introduces asymmetry, disproportionately penalizing over-forecasts when actual values are small, and becomes undefined or infinite for zero actuals, rendering it unreliable for intermittent or sparse demand forecasting; empirical studies highlight its bias toward conservative low forecasts in such contexts.[1][17]
| Metric | Key Properties | Primary Limitations |
|---|---|---|
| MAE | Scale-dependent; linear penalization of errors; optimizes for median forecasts | Cannot compare across scales; less sensitive to large errors, potentially overlooking severe deviations[1][16] |
| RMSE | Scale-dependent; quadratic penalization; scale-equivalent to data units; optimal for Gaussian errors | Heightened outlier sensitivity; scale incomparability; favors mean forecasts over medians[1][16] |
| MAPE | Scale-independent; percentage-based for intuitive communication | Undefined for zero actuals; asymmetric bias against over-forecasts in low-value series; inapplicable to non-positive data[1][17] |
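Two of these limitations can be demonstrated numerically in a minimal sketch with illustrative values: a single large error inflates RMSE far more than MAE, and a zero actual makes MAPE undefined.
```python
import numpy as np

actual   = np.array([100.0, 102.0, 98.0, 101.0, 100.0])
forecast = np.array([101.0, 101.0, 99.0, 100.0, 100.0])
e = actual - forecast

def mae(errors):
    return np.mean(np.abs(errors))

def rmse(errors):
    return np.sqrt(np.mean(errors ** 2))

print(mae(e), rmse(e))             # comparable when errors are small and uniform

# One large error: RMSE reacts much more strongly than MAE
e_outlier = np.append(e, 20.0)
print(mae(e_outlier), rmse(e_outlier))

# MAPE breaks down when an actual value is zero (division by zero)
actual_with_zero = np.array([10.0, 0.0, 12.0])
forecast2        = np.array([9.0, 1.0, 11.0])
with np.errstate(divide="ignore"):
    mape = 100 * np.mean(np.abs(actual_with_zero - forecast2) / np.abs(actual_with_zero))
print(mape)                        # inf: undefined in practice
```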
Causes of Forecast Errors
Model and Methodological Causes
Model misspecification arises when the chosen forecasting model fails to capture the true data-generating process, introducing systematic biases into predictions. Common forms include omitted relevant variables, which propagate through as correlated errors; incorrect functional forms, such as assuming linearity in nonlinear dynamics; and violations of core assumptions like independence, homoscedasticity, or stationarity in time series data. These errors amplify forecast inaccuracy, as evidenced by biased coefficient estimates and inflated variance in predictions, particularly when omitted predictors correlate with included regressors.[18][19]
In time series contexts, model misspecification often stems from inadequate representation of temporal dependencies, such as ignoring autocorrelation or structural breaks, leading to residuals that exhibit patterns rather than white noise. For instance, applying an autoregressive model of insufficient order to persistent data results in underestimation of future variance, while neglecting seasonality in periodic series produces recurring over- or under-predictions at specific lags. Empirical decompositions of forecast error variance reveal that misspecification can account for a substantial portion of total uncertainty, sometimes exceeding shocks from exogenous variables in macroeconomic models.[20][21]
Methodological flaws compound these issues through errors in model selection and estimation procedures. Inappropriate estimation techniques, such as maximum likelihood under misspecified non-Gaussian errors, can yield inconsistent parameter estimates whose effects propagate across the forecast horizon and increase mean squared error. Similarly, reliance on in-sample fit without rigorous cross-validation fosters overfitting, where models memorize historical noise, reducing training errors but elevating out-of-sample deviations by 20-50% in simulated benchmarks. Small sample sizes exacerbate this, as estimators become unstable; for example, in vector autoregressions, short samples bias impulse responses and distort multi-step forecasts.[3][22]
Validation shortcomings, such as neglecting uncertainty quantification or using non-robust error metrics, further mask methodological weaknesses. Diebold-Mariano tests applied post hoc may detect the superiority of alternative models, but flawed initial methodological choices, such as omitting robustness checks for parameter instability, perpetuate errors, as seen in cases where models stable over the estimation period falter amid regime shifts. Addressing these problems requires diagnostic tools such as residual autocorrelation checks and encompassing tests to isolate misspecification from other error sources.[23][24]
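The residual autocorrelation check mentioned above can be sketched as follows, assuming statsmodels is available; the deliberately under-specified AR(1) fit and the simulated AR(2)-like data are illustrative assumptions.
```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
# Illustrative data: an AR(2)-like series deliberately fitted with a too-small AR(1) model
n = 300
y = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.3 * y[t - 2] + eps[t]

# Fit the (under-specified) AR(1) model and inspect its residuals
res = ARIMA(y, order=(1, 0, 0)).fit()

# Ljung-Box test: small p-values indicate leftover residual autocorrelation,
# i.e. the residuals are not white noise and the model is misspecified
lb = acorr_ljungbox(res.resid, lags=[10], return_df=True)
print(lb)
```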
Data and External Causes
Poor data quality, including inaccuracies, incompleteness, duplicates, and inconsistencies, undermines forecast reliability by biasing model inputs and estimates. For example, missing or erroneous data entries propagate errors through time series models, leading to overstated or understated predictions, as demonstrated in analyses of economic datasets where preliminary data revisions alone account for significant inaccuracies.[11] Outdated or irrelevant data further exacerbates this by failing to reflect current dynamics, with empirical reviews identifying such issues as primary contributors to suboptimal forecasting performance across business and scientific applications.[25]
Non-stationarity in time series, characterized by trends, heteroskedasticity, or unit roots, represents a core data-related challenge, as it violates the constant-parameter assumptions of standard models like ARIMA, resulting in forecasts that diverge from actual outcomes. Studies confirm that unaddressed non-stationarity produces systematic biases, with failure to test for and transform the series (e.g., via differencing) linked to poor out-of-sample accuracy in economic and environmental predictions.[26] Preprocessing challenges, such as outliers and high-dimensional noise, compound these effects, as evidenced in surveys of time series methods where inadequate handling correlates with elevated mean squared errors.[27]
External causes primarily involve structural breaks and exogenous shocks that disrupt the data-generating process, introducing regime shifts unforeseen by historical patterns. These include sudden events like financial crises, policy changes, or pandemics, which alter relationships between variables and render pre-break models obsolete. For instance, volatility models that ignore such breaks, as in GARCH applications, exhibit weakened predictive power and inflated persistence estimates during periods of market disruption.[28] Empirical taxonomies of forecast errors attribute a substantial portion to these breaks, with techniques like Bayesian updating proposed to mitigate them, though these are often insufficient without real-time detection.[29]
Specific shocks, such as energy price surges, have driven large errors in macroeconomic forecasts; post-pandemic inflation projections in the euro area deviated markedly due to unmodeled supply disruptions, highlighting how external volatility amplifies baseline data limitations.[30] In time series contexts, failing to incorporate these events via intervention variables or break-point tests leads to non-linear error amplification, as non-stationary shocks induce persistent deviations not captured by linear extrapolations.[31]
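A compact sketch of the stationarity diagnosis and differencing fix referred to above (assuming statsmodels for the augmented Dickey-Fuller test; the random-walk data are illustrative):
```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
# Illustrative non-stationary series: a random walk with drift
y = np.cumsum(0.5 + rng.normal(size=250))

# Augmented Dickey-Fuller test: a large p-value means a unit root cannot be rejected
adf_stat, pvalue, *_ = adfuller(y)
print("level series p-value:", round(pvalue, 3))

# First differencing removes the unit root; the differenced series should test stationary
dy = np.diff(y)
adf_stat_d, pvalue_d, *_ = adfuller(dy)
print("differenced series p-value:", round(pvalue_d, 3))
```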
Strategies for Reduction
Model Improvement Techniques
Causal modeling enhances forecast accuracy by incorporating variables that represent fundamental drivers of the target series, such as economic indicators or physical laws, rather than depending exclusively on autoregressive patterns. In time-series applications, these models surpass extrapolative techniques in two-thirds of 534 comparisons, achieving mean absolute percentage error (MAPE) reductions as high as 72%, for example in long-range air travel demand projections where econometric specifications accounted for income elasticity and competition effects.[32] Cross-sectionally, causal approaches yield about 10% lower errors than unaided expert judgment across 88% of 136 studies, as they systematically integrate predictive correlates like prior performance metrics in personnel forecasting.[32]
Rule-based forecasting refines models by embedding domain-expert-derived rules grounded in causal mechanisms, such as conditional adjustments for trend breaks or seasonality overrides, to tailor predictions to specific contexts. Applied to 90 annual economic series, this method reduced median absolute percentage error (MdAPE) by 13% for one-year-ahead forecasts and 42% for six-year horizons compared to unadjusted benchmarks, demonstrating robustness when rules align with verifiable principles like diminishing returns in growth processes.[32]
Correcting for non-stationarity, where statistical properties such as the mean or variance evolve over time, through differencing, cointegration analysis, or transformations like the Box-Cox transform restores model assumptions, preventing spurious regressions and enabling precise parameter recovery that lowers out-of-sample errors. For non-stationary series, failure to address this leads to slowly decaying autocorrelation functions and inflated prediction variance, whereas proper handling, as in ARIMA differencing, aligns forecasts with the data-generating process and improves short- and medium-term accuracy.[33][34]
Exogenous variable integration extends univariate models, for example by augmenting ARIMA with external regressors in ARIMAX frameworks, to capture omitted influences like policy shocks or market inputs, thereby reducing bias from incomplete specifications. This approach mitigates error propagation in multivariate settings by directly modeling interdependencies, with empirical gains evident in scenarios where external factors explain 20-50% of the variance beyond endogenous lags.[35][36]
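As a sketch of the ARIMAX-style extension described above (simulated data; SARIMAX from statsmodels is one common implementation that accepts external regressors, and all values and orders here are illustrative assumptions):
```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)                       # illustrative exogenous driver (e.g. a price index)
eps = rng.normal(scale=0.5, size=n)
y = np.zeros(n)
for t in range(1, n):                        # AR(1) dynamics plus an exogenous effect
    y[t] = 0.5 * y[t - 1] + 2.0 * x[t] + eps[t]

train_y, test_y = y[:180], y[180:]
train_x, test_x = x[:180].reshape(-1, 1), x[180:].reshape(-1, 1)

# ARIMAX-style model: ARIMA(1,0,0) dynamics with an external regressor
res = SARIMAX(train_y, exog=train_x, order=(1, 0, 0)).fit(disp=False)

# Out-of-sample forecasts require the future exogenous values as well
pred = res.forecast(steps=len(test_y), exog=test_x)
print("MAE with exogenous regressor:", np.mean(np.abs(test_y - pred)))
```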
Process and Ensemble Methods
Process methods for reducing forecast errors emphasize structured workflows that enhance consistency and adaptability in forecasting pipelines. Forecast reconciliation, particularly in hierarchical or grouped time series, adjusts base forecasts from individual models to ensure coherence across aggregation levels, such as totals and subcomponents, thereby minimizing inconsistencies that amplify errors. Techniques like ordinary least squares (OLS) reconciliation or minimum trace (MinT) optimization achieve this by projecting base forecasts onto a coherent space, with empirical studies demonstrating error reductions of up to 20-30% in retail and economic hierarchies compared to unreconciled forecasts.[37] Iterative updating processes further refine accuracy by incorporating new data into models at regular intervals, automating model refits and forecast generation to capture evolving patterns, as implemented in near-term ecological forecasting systems where weekly iterations reduced mean absolute errors by adapting to recent observations.[38] These methods prioritize aligning process design with end-use, such as integrating domain expertise via structured protocols like Delphi polling, which iteratively aggregates expert judgments to mitigate individual biases.[39]
Ensemble methods complement process improvements by aggregating diverse forecasts to leverage collective strengths and hedge against individual model weaknesses, often yielding lower variance and bias. Simple equal-weight averaging of multiple forecasts has been empirically validated to outperform single models across diverse domains, with meta-analyses showing average accuracy gains of 10-15% in economic and judgmental forecasting tasks due to the diversification of errors.[32] Advanced variants, such as weighted ensembles or stacking, incorporate performance-based weighting or meta-learners to further optimize combinations, as seen in wind energy applications where hybrid ensembles reduced forecast errors by accounting for uncertainties in deterministic models.[40]
In probabilistic settings, ensemble approaches generate distributions rather than point estimates, providing uncertainty quantification that informs error bounds, with reviews confirming robustness improvements in weather and time series forecasting.[41] While effective, ensembles require careful selection of component diversity to avoid correlated errors, and their benefits are most pronounced when the base models exhibit low interdependence.[42]
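A minimal sketch of the equal-weight combination described above (the actuals and the three sets of model forecasts are illustrative stand-ins for outputs of separate models):
```python
import numpy as np

actual = np.array([102.0, 108.0, 111.0, 115.0])

# Hypothetical point forecasts for the same horizon from three different models
f_arima   = np.array([100.0, 104.0, 109.0, 118.0])
f_ets     = np.array([105.0, 110.0, 114.0, 113.0])
f_regress = np.array([ 98.0, 107.0, 112.0, 117.0])

forecasts = np.vstack([f_arima, f_ets, f_regress])
ensemble = forecasts.mean(axis=0)            # equal-weight combination

def mae(f):
    return np.mean(np.abs(actual - f))

print("individual MAEs:", [round(mae(f), 2) for f in forecasts])
print("ensemble MAE:", round(mae(ensemble), 2))
```
Because the three error series in this example are imperfectly correlated, the averaged forecast has a lower MAE than any single component, illustrating the diversification effect.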
Applications Across Domains
In Economics and Business Forecasting
In economics, forecast errors quantify deviations between projected macroeconomic variables, such as GDP growth, unemployment rates, and inflation, and their realized values, enabling evaluation of predictive models used by institutions like the International Monetary Fund (IMF) and central banks. These errors arise from factors including model misspecification, data revisions, and unforeseen shocks, with analyses revealing persistent biases; for example, IMF World Economic Outlook forecasts have shown optimistic biases in GDP growth projections for advanced economies, underestimating downturns by an average of 0.5 to 1 percentage points in several cycles.[43] Post-2020 inflation forecast errors, dissected in IMF studies, averaged substantial underpredictions of headline CPI by 2-4 percentage points in major economies due to overlooked supply-side pressures and policy responses.[44] Such errors inform iterative improvements in econometric techniques, though systemic tendencies toward overprecision persist, as evidenced by NBER research indicating forecasters' underestimation of fiscal multipliers during austerity periods, leading to exaggerated predictions of growth impacts from spending cuts.[45]
In business contexts, forecast errors pertain to discrepancies in sales, demand, and revenue projections, which are critical for inventory optimization, pricing strategies, and capital allocation. Metrics like mean absolute percentage error (MAPE) and mean absolute deviation (MAD) are standard for assessing accuracy, with MAPE calculating relative errors as \frac{1}{n} \sum \left| \frac{\text{actual} - \text{forecast}}{\text{actual}} \right| \times 100\%, often targeting below 20% for mature product lines but exceeding 50% for volatile categories like fashion goods.[5] Errors here stem from demand variability and competitive dynamics, prompting ensemble methods; for instance, firms using weighted averages of qualitative and quantitative forecasts reduce errors by 10-15% in supply chain applications, per industry benchmarks.[46] Notable cases include retail overforecasting during economic expansions, resulting in excess inventory costs estimated at 1-2% of sales value annually across sectors.[47]
Both domains highlight the role of forecast errors in scrutinizing decision-making, with economic analyses revealing larger errors at longer horizons, up to 3-5 times the baseline for multi-year projections, and business practices emphasizing bias detection to counter tendencies like anchoring on recent trends.[11] Empirical reviews underscore that while errors cannot be eliminated, their quantification drives adaptive strategies, such as incorporating scenario analysis to hedge against tail risks observed in events like the 2008 financial crisis, where GDP forecast errors exceeded 2 percentage points globally.[48]
In Scientific and Environmental Forecasting
In scientific forecasting, errors arise primarily from uncertainties in model parameterization and initial conditions, particularly in nonlinear dynamical systems where small perturbations can amplify over time. For instance, in numerical weather prediction, root mean square errors for 500 hPa geopotential height forecasts have decreased significantly since the 1980s due to advances in data assimilation and computational power, with a modern five-day forecast matching the accuracy of a one-day forecast from 1980.[49][50] However, persistent errors in sub-seasonal predictions stem from inadequate representation of phenomena like convective processes, leading to systematic biases in precipitation forecasts exceeding 20% in some tropical regions.[51]
Environmental forecasting, encompassing weather, hydrological, and ecological projections, exhibits forecast errors shaped by chaotic dynamics and incomplete observational networks. Historical analyses show that 24-hour temperature forecast errors have improved from about 2-3°C in the mid-20th century to under 1°C in many mid-latitude areas by 2020, driven by ensemble methods and satellite data integration.[49] Yet in flood-prone events, errors in peak discharge predictions can reach 50% or more, as seen in case studies of river basin simulations where hydrological model uncertainties compound those in the meteorological inputs.[52] Atmospheric river forecasts for high-impact events, such as California's 2023 storms, reveal multimodel spreads of 20-30% in precipitation totals due to variability in upstream moisture transport.[53]
Long-term environmental forecasts, particularly climate projections, exhibit larger relative errors owing to parameterized feedbacks like cloud-aerosol interactions, with empirical evaluations indicating that many general circulation models overestimate decadal warming rates by 0.1-0.2°C per decade in hindcast validations against observations from 1970-2020.[54][55] While some mid-20th-century models aligned closely with observed global temperature trends after adjustments for radiative forcing, systematic cold biases in polar amplification and equatorial sea surface temperatures persist across CMIP6 ensembles, highlighting limitations in capturing natural variability modes like the Atlantic Multidecadal Oscillation.[56] These discrepancies underscore the challenges of extrapolating beyond validated timescales, where forecast skill drops sharply beyond 10-20 years due to unresolvable internal variability.[57]
Notable Examples and Case Studies
Historical Economic Forecast Failures
One prominent example of economic forecast failure occurred in the lead-up to the 1929 stock market crash and the ensuing Great Depression. On October 15, 1929, Yale economist Irving Fisher stated that "stock prices have reached what looks like a permanently high plateau," reflecting widespread optimism among economists who anticipated only minor corrections rather than a severe downturn.[58] This view underestimated the speculative bubble fueled by margin debt and overleveraged investments, as the Dow Jones Industrial Average plummeted 89% from its peak by July 1932, with real GDP contracting by approximately 30% between 1929 and 1933.[58] The failure stemmed from inadequate attention to financial vulnerabilities and banking fragilities, which amplified the initial crash into a prolonged depression through widespread bank runs and credit contraction.[58]
In the 1970s, macroeconomic forecasts reliant on the traditional Phillips curve framework proved inadequate in anticipating stagflation, characterized by simultaneously high inflation and unemployment. The Phillips curve posited an inverse relationship between inflation and unemployment, leading policymakers and forecasters to expect that rising unemployment, which reached 9% by 1975, would curb inflation; instead, inflation accelerated to double digits, peaking at 13.5% in 1980.[59] This breakdown occurred because models overlooked adaptive inflation expectations and supply shocks, such as the 1973 oil embargo, which shifted the curve upward and invalidated short-run trade-offs.[59] Empirical analyses later confirmed forecast failures in Phillips curve-based projections, with errors arising from unmodeled changes in inflation dynamics and from policy responses that accommodated the shocks.[60]
The 2008 global financial crisis exposed further shortcomings in macroeconomic forecasting, as most professional forecasters and central banks underestimated the housing market's role in systemic risk. Federal Reserve staff projections for 2008-2009 exhibited unusually large errors, with real GDP growth forecasts missing the actual contraction of 4.3% in Q4 2008 and unemployment rising to 10%, far beyond anticipated levels.[61] Surveys of economists revealed slow recognition of the crisis, with textual analyses of academic publications showing delayed acknowledgment of housing boom mispricings and leverage buildup until after Lehman Brothers' collapse on September 15, 2008.[62] These errors reflected overreliance on equilibrium models that downplayed tail risks from subprime mortgages and financial interconnections, contributing to a consensus forecast of a mild slowdown rather than a deep recession.[61]
| Event | Key Forecast Error | Actual Outcome | Primary Model Flaw |
|---|---|---|---|
| 1929 Crash & Depression | Permanent high plateau; minor correction expected | 89% market drop; 30% GDP contraction | Ignored leverage and banking risks[58] |
| 1970s Stagflation | Unemployment rise to reduce inflation via Phillips curve | Inflation to 13.5% amid 9% unemployment | Failed to incorporate expectations and supply shocks[59] |
| 2008 Crisis | Mild slowdown; low recession probability | 4.3% Q4 GDP drop; 10% unemployment | Underestimated housing and leverage tail risks[61] |