
Forecasting

Forecasting is the process of predicting future events or conditions based on the analysis of historical and current data, often employing systematic methods to support decision-making and planning in uncertain environments. This practice spans multiple disciplines, including economics, finance, meteorology, and public policy, where it helps organizations anticipate demand, allocate resources, and mitigate risks by examining trends, cycles, and patterns in data. At its core, forecasting methods are broadly classified into qualitative and quantitative approaches. Qualitative techniques rely on expert judgment, intuition, and consensus-building tools like the Delphi method, which aggregates anonymous opinions from specialists to forecast scenarios when historical data is scarce or unreliable. In contrast, quantitative methods use mathematical and statistical models applied to time series data—sequences of observations recorded at regular intervals—to generate numerical predictions, such as through trend extrapolation, exponential smoothing, or autoregressive integrated moving average (ARIMA) models. Recent advancements have integrated machine learning algorithms, including neural networks and hybrid statistical-machine learning models, which have demonstrated superior performance in competitions like the M5 Forecasting Accuracy Competition by improving accuracy in complex, hierarchical datasets. Despite these innovations, challenges persist, including the inherent uncertainty of long-term predictions, the influence of unforeseen events, and barriers to adoption such as inadequate data and organizational resistance, with only about 28% of businesses consistently employing systematic forecasting practices. Evaluations from benchmarks like the M3-Competition highlight that simpler methods often rival complex ones in accuracy, while combining approaches and monitoring forecast errors remain essential for reliability across varying horizons.

Overview

Definition and Scope

Forecasting is the process of making predictions about future events or conditions based on historical data, patterns, and models. It involves analyzing past trends to anticipate outcomes, serving as a foundational tool for anticipating changes in various systems. A core principle of forecasting is the handling of inherent uncertainty, as future events cannot be predicted with absolute certainty due to unpredictable factors and incomplete information. Forecasts may be deterministic, providing a single predicted value, or probabilistic, offering a range of possible outcomes with associated probabilities to quantify uncertainty. Additionally, forecasting horizons vary: short-term forecasts cover periods up to one year and are generally more accurate due to reliance on recent data, while long-term forecasts extend beyond a year and face greater uncertainty from potential disruptions or "shocks" in underlying patterns. Forecasting encompasses an interdisciplinary scope, playing a vital role in planning, decision-making, and risk management across fields such as economics, meteorology, and public policy. For instance, it informs weather predictions for disaster preparations and sales estimates for business planning. Basic terminology includes point forecasts, which estimate a single value; interval forecasts, which provide a range likely to contain the actual outcome; and scenario planning, a method for exploring multiple plausible future paths by considering alternative "what if" events and key drivers.

Historical Development

The roots of forecasting trace back to ancient civilizations, where systematic observations of natural phenomena enabled predictions essential for agriculture and governance. In Mesopotamia around 2000 BCE, Babylonian astronomers recorded celestial movements to forecast seasonal changes, developing lunar calendars that anticipated floods, harvests, and eclipses for societal planning. Early economic forecasting emerged in the same region through omen texts preserved on clay tablets, which interpreted natural signs like animal births or weather patterns to predict market fluctuations, royal fortunes, and trade outcomes.

Advancements in the 18th and 19th centuries laid the mathematical foundations for statistical forecasting, shifting from qualitative to quantitative methods. Pierre-Simon Laplace's 1774 memoir introduced inverse probability, allowing predictions of causes from observed effects, which influenced later statistical inference in forecasting uncertain events such as population trends. Carl Friedrich Gauss contributed through his work on the normal distribution around 1809, providing tools for error analysis in predictions. Adolphe Quetelet's 1835 treatise Sur l'homme et le développement de ses facultés, ou Essai de physique sociale pioneered statistical analysis in social contexts, applying probability to aggregate data on births and crime rates to forecast societal patterns under his "social physics" framework.

The 20th century marked the formalization of statistical forecasting techniques, driven by wartime needs and postwar economic reconstruction. Post-World War II, econometric models proliferated, with Jan Tinbergen's 1936-1946 work evolving into large-scale systems like Lawrence Klein's 1950s models, which integrated economic theory with statistical estimation to forecast GDP, inflation, and employment for policy-making. In 1957, Charles Holt introduced exponential smoothing methods, which weight recent observations more heavily to predict trends in inventory and demand, building on Robert G. Brown's 1950s advocacy of adaptive moving averages for demand forecasting. George Box advanced statistical forecasting in the 1960s through collaborative research on stochastic processes, culminating in the 1970 Box-Jenkins methodology for ARIMA models, which systematically identified, estimated, and validated time series models for accurate short-term predictions.

The advent of computers in the mid-20th century revolutionized forecasting by enabling complex simulations and iterative computations previously infeasible by hand. Mainframe systems facilitated the implementation of statistical and econometric models on large datasets, allowing real-time updates and scenario analysis in fields like meteorology and economics, thus transitioning forecasting from manual calculations to automated, scalable processes.

Applications

Economic and Financial Forecasting

Economic and financial forecasting involves predicting macroeconomic trends and market behaviors to inform investment decisions, policy formulation, and risk management. In macroeconomics, forecasters analyze key indicators to anticipate shifts in output, prices, and employment, while in finance, the focus extends to asset valuations and portfolio risk. These predictions rely on historical data, econometric models, and leading signals to project outcomes over short to medium terms, aiding stakeholders in navigating uncertainties like recessions or booms.

A core aspect of economic forecasting centers on key indicators such as gross domestic product (GDP), inflation, and unemployment rates. Forecasters use leading indicators, including measures of consumer confidence, to signal future changes; for instance, declining consumer expectations often precede slower GDP growth and rising unemployment. The Conference Board's Leading Economic Index (LEI), which incorporates components like consumer expectations for business conditions and stock prices, provides an early warning of turning points, typically leading GDP by about seven months. In August 2025, the LEI fell to 98.4 (2016=100) with a 2.8% six-month decline, prompting projections of 1.6% U.S. GDP growth for 2025, down from 2.8% in 2024. More recently, in September 2025, the LEI declined by an additional 0.3%, with The Conference Board updating its 2025 U.S. GDP growth projection to 1.8% as of October 2025.

In financial markets, forecasting extends to stock prices, currency exchange rates, and portfolio risk. Stock price predictions frequently integrate economic indicators like GDP growth and interest rates, as stronger economic performance correlates with rising equity valuations; for example, leading indicators such as the LEI help anticipate market trends by signaling expansions or contractions. Currency forecasts employ methods like relative economic strength, which assesses GDP growth differentials and interest rates to predict appreciation—for instance, higher U.S. growth relative to Canada may strengthen the USD against the CAD. Risk assessment in portfolio management commonly uses value at risk (VaR) models, which estimate the potential loss in a portfolio's value over a specified period at a given confidence level, such as a 5% chance of exceeding a $1 million loss in one day based on historical volatility and correlations. VaR has become a standard tool for banks and regulators to quantify market risk, though it typically assumes normal distributions and may underestimate tail events.

Central banks and governments leverage these forecasts for policy decisions, including interest rate adjustments and fiscal planning. The Federal Reserve's Federal Open Market Committee (FOMC) projections, updated quarterly, guide monetary policy; in September 2025, median forecasts anticipated a federal funds rate of 3.6% for 2025, with GDP growth at 1.6%, unemployment at 4.5%, and PCE inflation at 3.0%. These estimates inform rate cuts or hikes to balance growth and inflation. For fiscal planning, the Congressional Budget Office (CBO) provides baseline projections for federal budgets, estimating—as of September 2025—real GDP growth of 1.4% in 2025 and 2.2% in 2026, with the unemployment rate at 4.5% in the fourth quarter of 2025 falling to 4.2% in 2026, to evaluate policy impacts and revenues from sources such as income and corporate levies. Such forecasts underpin decisions on spending and taxation, ensuring alignment with economic capacity.

Case studies highlight both successes and limitations in economic and financial forecasting. During the 2008 financial crisis, forecasters largely failed to predict the downturn; Federal Reserve staff projections for 2008-2009 showed unusually large errors, with real GDP growth overestimated by over 3 percentage points and unemployment underestimated, due to overreliance on models that ignored housing bubble risks and financial interconnections.
This led to delayed policy responses, exacerbating the recession. In contrast, post-2020 quantitative easing (QE) decisions by the Federal Reserve were informed by inflation and growth forecasts, though underestimations of housing inflation—projected to fall to 0% by mid-2024 but persisting at 4-5% into 2025—prolonged elevated CPI above the 2% target, influencing the timing of rate hikes and balance sheet normalization. QE's $1.33 trillion in mortgage-backed securities purchases from 2020-2022 boosted home values by an average $100,000, amplifying demand and inflation via wealth effects estimated at $480-840 billion.

Essential data sources for these forecasts include standardized economic datasets like the Consumer Price Index (CPI) and the Purchasing Managers' Index (PMI). The CPI, compiled by the Bureau of Labor Statistics, tracks consumer price changes monthly to gauge inflation trends, serving as a primary input for models predicting the erosion of purchasing power. PMI indices, produced by S&P Global from surveys of over 28,000 companies across 40+ countries covering 90% of global GDP, offer real-time insights into manufacturing and services activity; readings above 50 indicate expansion, enabling forecasters to nowcast GDP and employment shifts ahead of official releases. These indices, alongside coincident measures like industrial production, ensure robust, timely inputs for accurate projections.
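To make the value-at-risk calculation described above concrete, the following is a minimal historical-simulation sketch in Python. The synthetic return series, portfolio value, and confidence level are illustrative assumptions, not figures from any source cited here:

```python
import numpy as np

# Historical-simulation VaR sketch (illustrative assumptions: synthetic
# daily returns standing in for observed history, and a $10M portfolio).
rng = np.random.default_rng(42)
daily_returns = rng.normal(loc=0.0003, scale=0.01, size=1000)

portfolio_value = 10_000_000
confidence = 0.95

# VaR at 95%: the loss at the 5th percentile of the empirical return
# distribution, scaled to the portfolio's current value.
loss_quantile = np.quantile(daily_returns, 1 - confidence)
var_1day = -loss_quantile * portfolio_value

print(f"1-day {confidence:.0%} VaR: ${var_1day:,.0f}")
# Read as: on roughly 5% of days the portfolio would be expected to lose
# more than this amount, assuming past returns describe the near future.
```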

Scientific and Environmental Forecasting

Scientific forecasting encompasses the application of predictive models to understand and anticipate phenomena in the natural world, particularly in meteorology, climatology, and environmental sciences, where chaotic dynamics and vast datasets pose unique challenges. These efforts rely on integrating observational data with computational simulations to generate short- to long-term projections, informing disaster preparedness and policy decisions. Unlike deterministic economic models, scientific forecasts often incorporate probabilistic elements to account for inherent uncertainties in complex systems.

Weather forecasting primarily involves short-term predictions, typically spanning hours to days, achieved through numerical weather prediction (NWP) models that solve governing atmospheric equations using current observations such as temperature, pressure, and wind data. These models, run on supercomputers, simulate atmospheric evolution but are limited by the chaotic nature of weather systems, where small initial errors can amplify rapidly, as demonstrated by Edward Lorenz's 1963 work on sensitivity to initial conditions. To address this, ensemble forecasting techniques generate multiple simulations by perturbing initial conditions or model parameters, providing a range of possible outcomes and quantifying uncertainty in probabilistic terms. For instance, the European Centre for Medium-Range Weather Forecasts (ECMWF) employs an ensemble prediction system that has enhanced reliability for mid-latitude forecasts by capturing flow-dependent error growth.

Climate modeling extends these principles to long-term projections, focusing on decades to centuries of global and regional changes driven by greenhouse gas emissions. The Intergovernmental Panel on Climate Change (IPCC), in its Sixth Assessment Report (AR6), uses shared socioeconomic pathways (SSPs) and representative concentration pathways (RCPs) to simulate scenarios, indicating that under current policies, global warming is projected to reach 1.5°C above pre-industrial levels in the early 2030s, with temporary exceedances already observed in recent years such as 2023–2024, and severe impacts on ecosystems and human societies if exceeded. These models integrate coupled ocean-atmosphere-land systems to project variables like temperature rise and sea-level changes, emphasizing the need for emission reductions to limit warming to well below 2°C as per the Paris Agreement.

Environmental forecasting applies similar methodologies to predict epidemic outbreaks and natural disasters, aiding in risk mitigation. For epidemic outbreaks, compartmental models like SIR (susceptible-infected-recovered) were adapted in 2020 to forecast COVID-19 spread, incorporating mobility data and intervention effects, though challenges arose from incomplete reporting and behavioral uncertainties. In natural disaster contexts, flood forecasting leverages hydrological models driven by precipitation predictions, enabling early warnings through river gauge and rainfall data integration. Earthquake risk assessment, however, remains probabilistic rather than deterministic, using seismic hazard maps to estimate long-term probabilities since precise short-term predictions are infeasible due to irregular fault dynamics.

Central to these forecasts is the integration of diverse data sources, including satellite observations for real-time global coverage of atmospheric patterns, surface temperatures, and ecosystem health, which enhances NWP initialization and data assimilation. Ground-based sensor networks, such as weather stations and buoys, provide high-resolution local measurements that complement satellite data, enabling blended datasets for improved model accuracy in early warning systems.
Despite these advances, forecasting in chaotic environments faces persistent challenges, including the rapid growth of errors in nonlinear atmospheric dynamics, as quantified by Lorenz's attractor models, which limit predictability to about two weeks for weather. Recent developments have notably improved hurricane path forecasts through refined ECMWF models, with track error reductions of approximately 20-30% for 3-5 day leads since the early 2010s, attributed to higher-resolution simulations and better data assimilation from satellites and aircraft reconnaissance. These enhancements, including ensemble-based probabilistic tracks, have increased confidence in predicting storm trajectories, reducing evacuation uncertainties in vulnerable regions.
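A toy illustration of the ensemble idea described above: integrate the Lorenz (1963) convection equations from slightly perturbed initial conditions and watch the spread grow. The step size, perturbation scale, and ensemble size are illustrative assumptions; real NWP ensembles perturb full atmospheric states, not three variables:

```python
import numpy as np

# Lorenz-63 toy ensemble: classic parameter values, simple Euler steps.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(0)
n_members = 20
base = np.array([1.0, 1.0, 1.0])
# Each member starts from the base analysis plus a tiny (~1e-6) perturbation.
ensemble = [base + 1e-6 * rng.standard_normal(3) for _ in range(n_members)]

for step in range(1, 2001):
    ensemble = [lorenz_step(m) for m in ensemble]
    if step % 500 == 0:
        spread = np.std([m[0] for m in ensemble])
        print(f"t = {step * 0.01:5.1f}  ensemble spread in x: {spread:.2e}")
# The spread grows by orders of magnitude, which is why honest forecasts of
# chaotic systems are distributions over outcomes, not single trajectories.
```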

Social and Policy Forecasting

Social and policy forecasting encompasses the prediction of societal dynamics, human behaviors, and policy outcomes to guide public decision-making and governance. This field integrates demographic data, behavioral patterns, and socioeconomic variables to anticipate changes in population structures, social trends, and institutional responses. Unlike economic or environmental forecasting, it emphasizes human-centric factors such as cultural shifts and ethical considerations, often relying on longitudinal sociological datasets and scenario-based modeling to project future scenarios for policymakers.

Demographic forecasting plays a central role in social and policy planning by projecting population and migration patterns that influence resource needs and urban infrastructure. According to the United Nations' World Population Prospects 2024, the global population is estimated at 8.2 billion in 2024 and is projected to peak at 10.3 billion in 2084 before declining slightly to 10.2 billion by 2100, driven by declining fertility rates in most regions. Migration forecasts highlight international movements as a key driver of demographic change, with the UN projecting immigration to be the primary factor sustaining population growth in 62 countries and areas through 2100, particularly in aging societies. These projections inform policies on immigration, healthcare, and labor markets, enabling governments to prepare for shifts such as increased inflows from climate-vulnerable regions.

In policy planning, forecasting aids in anticipating election outcomes and public health scenarios to optimize strategies. Election forecasting models, which aggregate polling data and socioeconomic indicators, have been used to predict voter behavior with varying accuracy; for instance, econometric approaches incorporating economic conditions have successfully anticipated U.S. presidential results in over 70% of cases since 1948 when applied close to election dates. In public health, vaccination coverage forecasts guide immunization campaigns; 2023 estimates project that diphtheria-tetanus-pertussis (DTP3) coverage will reach 90% globally by 2030 under optimistic scenarios, while measles-containing vaccine (MCV2) second-dose coverage may lag at around 80%, highlighting gaps in low-income regions. Such predictions support targeted interventions, like resource allocation during pandemics, to achieve coverage targets.

Forecasting social behaviors extends to consumer trends and crime rates, drawing on sociological data to predict shifts in societal norms and risks. Consumer trend projections often utilize surveys and social indicators to anticipate changes in spending patterns; for example, analyses of demographic and lifestyle data have forecast rising demand for sustainable products, influencing environmental policy and regulations. Crime rate predictions incorporate sociological models like analyses of socioeconomic factors, with studies showing that variables such as unemployment and education levels can forecast urban crime fluctuations with up to 90% accuracy over short horizons, aiding community safety planning.

Despite its utility, social and policy forecasting faces significant challenges, including ethical concerns and data biases that can undermine equitable outcomes. Predictive policing, which uses crime data to forecast hotspots, raises ethical issues by potentially perpetuating racial disparities, as algorithms trained on historical arrests often over-target minority neighborhoods due to systemic biases in past enforcement. Biases in social data, such as underrepresentation of marginalized groups in surveys, can skew forecasts; for instance, selection biases in population samples lead to overestimations of stable trends in high-income demographics while underpredicting volatility in underserved communities.
Addressing these requires transparent data auditing and inclusive methodologies to ensure forecasts promote fairness rather than reinforce inequalities. Practical examples illustrate the application of social forecasting in real-world policy. Urban growth projections support city planning by estimating spatial expansion; the UN's World Urbanization Prospects anticipates that 68% of the global population will reside in urban areas by 2050, up from 55% in 2018, necessitating investments in sustainable infrastructure to manage growth in rapidly expanding megacities. Post-2020, forecasts of remote work trends have reshaped labor policies, with analyses predicting that 35-40% of the U.S. workforce would engage in remote or hybrid arrangements by 2025, influencing urban commuting patterns and office space regulations amid the COVID-19-induced shift. These cases demonstrate how forecasting integrates social insights to foster resilient societies.

Forecasting Methods

Judgmental and Qualitative Methods

Judgmental and qualitative methods in forecasting emphasize expertise, subjective insights, and narrative-based approaches to predict outcomes, particularly in situations where historical data is limited, unreliable, or insufficient for capturing complex uncertainties. These techniques draw on the knowledge, intuition, and experience of individuals or groups to generate forecasts, often through structured processes that aim to minimize individual biases and foster consensus. Unlike data-driven methods, they prioritize contextual understanding, scenario exploration, and expert consensus to inform decision-making in dynamic environments.

The Delphi method is a structured iterative process for eliciting and refining expert opinions to achieve consensus on forecasts, typically involving multiple rounds of anonymous questionnaires followed by controlled feedback on group responses. Developed by the RAND Corporation in the 1950s, initially to assess the impact of technology on warfare, it has evolved into a widely used tool for long-range forecasting in diverse fields by promoting anonymity to reduce dominance by influential participants and iteration to converge views. Key features include controlled communication to avoid groupthink and statistical aggregation of responses, making it effective for topics with high uncertainty and sparse data.

Scenario planning involves constructing multiple plausible narratives or "stories" about possible futures to explore uncertainties and test strategic responses, rather than predicting a single outcome. Pioneered by Royal Dutch Shell in the late 1960s and early 1970s, this approach gained prominence when Shell's scenarios anticipated the 1973 oil crisis, enabling the company to better navigate supply disruptions and market volatility compared to competitors. The method typically identifies key driving forces, develops contrasting scenarios, and uses them to challenge assumptions and build organizational resilience, emphasizing narrative depth over probabilistic assignments.

Expert judgment relies on the intuitive assessments of knowledgeable individuals to forecast in highly uncertain or novel contexts where formal data is absent but human pattern recognition and contextual awareness provide value. Intuition plays a central role by enabling rapid synthesis of incomplete information, particularly in qualitative tasks like identifying emerging risks, though it is prone to cognitive biases such as overconfidence and anchoring. Bias mitigation techniques include pre-mortem analysis, where participants prospectively imagine a forecast failure and work backward to uncover potential causes, as well as structured training, feedback loops, and diverse expert panels to enhance reliability and reduce systematic errors.

Analogies in forecasting draw parallels between the situation to be predicted and similar historical events to infer likely developments, providing a qualitative framework for understanding unfamiliar trends through familiar precedents. For instance, comparing a new technology's adoption to past innovations helps estimate its likely diffusion without numerical modeling. Qualitative trend extrapolation extends observed patterns into the future based on expert interpretation of non-quantifiable drivers like cultural shifts or technological momentum, applied when historical data is unavailable or too volatile for statistical extension. These approaches foster creative foresight by leveraging human reasoning over rigid calculations.
These methods find prominent applications in strategic business planning, where they support long-term decision-making amid volatility, as seen in Shell's use of scenarios to guide investments during energy market upheavals. In geopolitical risk analysis, qualitative techniques like scenario planning and expert judgment enable the exploration of multiple futures, identification of inflection points, and preparation for disruptions such as trade conflicts or policy shifts, enhancing organizational agility in international operations.

Statistical and Time Series Methods

Statistical and time series methods form the backbone of quantitative forecasting, relying on historical data to identify patterns such as trends, cycles, and residuals for predicting future values. These approaches assume that past behavior provides a reliable basis for extrapolation, often requiring stationarity or transformations such as differencing to achieve it. Unlike qualitative methods, they emphasize objective, parametric models that can be estimated and validated using statistical techniques. Seminal developments in this area, including exponential smoothing and autoregressive models, have been widely adopted in economics, inventory management, and demand planning due to their interpretability and computational efficiency.

Naïve approaches serve as fundamental baselines in time series forecasting, providing simple yet effective benchmarks against which more complex models are evaluated. The basic naïve method forecasts future values by repeating the most recent observation, such that the forecast for all horizons equals the last observed value, \hat{y}_{T+h|T} = y_T. This method performs surprisingly well for series with no trend or seasonality and is computationally trivial, making it a standard for assessing model improvements. An extension, the seasonal naïve method, accounts for periodic patterns by setting forecasts to the last observed value from the same season, \hat{y}_{T+h|T} = y_{T+h-m(k+1)}, where m is the seasonal period and k is the integer part of (h-1)/m; it excels in stable seasonal data like monthly retail sales. These methods highlight the value of simplicity, often outperforming sophisticated alternatives in short-term predictions without structural changes.

Moving averages and exponential smoothing methods smooth historical data to estimate underlying levels, trends, and seasonal components, with weights decreasing for older observations to emphasize recent information. Simple moving averages compute forecasts as the average of the previous k observations, \hat{y}_{T+h|T} = \frac{1}{k} \sum_{j=0}^{k-1} y_{T-j}, which is useful for noise reduction but lags in responding to shifts. Exponential smoothing builds on this by applying exponentially decaying weights, starting with single exponential smoothing for level-only series: \hat{y}_{T+h|T} = \ell_T, where the level \ell_t = \alpha y_t + (1-\alpha) \ell_{t-1} and \alpha is the smoothing parameter between 0 and 1. For series with trend, Holt's linear method adds a trend component: level L_t = \alpha y_t + (1-\alpha)(L_{t-1} + T_{t-1}), trend T_t = \beta (L_t - L_{t-1}) + (1-\beta) T_{t-1}, with forecast \hat{y}_{T+h|T} = L_T + h T_T and \beta as the trend smoothing parameter. The Holt-Winters method extends this to include seasonality in additive form, incorporating a seasonal factor S_t = \gamma (y_t - L_t) + (1-\gamma) S_{t-m}, where \gamma is the seasonal smoothing parameter and m is the period; forecasts then become \hat{y}_{T+h|T} = L_T + h T_T + S_{T+h-m(k+1)}. Introduced by Winters in 1960, this method remains a cornerstone for seasonal forecasting in applications like demand planning, balancing responsiveness with stability through parameter selection via optimization of forecast errors.
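Before turning to the ARIMA framework, here is a minimal sketch of the baselines above: the naïve, seasonal naïve, and single exponential smoothing methods implemented directly from their formulas. The synthetic monthly series and the smoothing parameter alpha = 0.3 are illustrative assumptions:

```python
import numpy as np

# Synthetic monthly data: trend + annual seasonality + noise (illustrative).
rng = np.random.default_rng(1)
m = 12  # seasonal period
t = np.arange(120)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / m) + rng.normal(0, 2, t.size)

def naive(y, h):
    # \hat{y}_{T+h|T} = y_T for every horizon h
    return np.full(h, y[-1])

def seasonal_naive(y, h, m):
    # Forecast = last observed value from the same season
    return np.array([y[-m + (i % m)] for i in range(h)])

def ses(y, alpha=0.3):
    # Single exponential smoothing: recursively update the level, then
    # issue the flat forecast \hat{y}_{T+h|T} = \ell_T.
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

h = 12
print("naive:         ", np.round(naive(y, h)[:3], 1))
print("seasonal naive:", np.round(seasonal_naive(y, h, m)[:3], 1))
print("SES level:     ", round(ses(y), 2))
```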
ARIMA models, part of the Box-Jenkins methodology, provide a flexible framework for univariate forecasting by combining autoregression, differencing, and moving averages to handle non-stationarity and serial dependencies. The approach involves model identification through autocorrelation analysis, parameter estimation via maximum likelihood, diagnostic checking of residuals, and forecasting; it requires differencing the series d times to achieve stationarity, where (1-B)^d y_t denotes the differenced series and B is the backshift operator. The general ARIMA(p,d,q) model is expressed as \phi(B)(1-B)^d y_t = \theta(B) \epsilon_t, where \phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p is the autoregressive polynomial of order p, \theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q is the moving average polynomial of order q, and \epsilon_t are white noise errors. Developed by Box and Jenkins in their 1970 book, this methodology revolutionized time series analysis by emphasizing iterative model building, and remains prevalent in economic forecasting, such as GDP predictions, due to its ability to capture short-term dynamics.

Regression-based methods extend forecasting by incorporating explanatory variables alongside temporal patterns, modeling relationships through linear forms like y_t = \beta_0 + \beta_1 x_t + \epsilon_t, where y_t is the target, x_t predictors (e.g., lagged values or covariates), and \epsilon_t errors assumed independent. For dynamic regression, autoregressive terms can be added to handle serial correlation, such as ARIMAX models that augment ARIMA with external regressors. These approaches are particularly valuable in relational forecasting, like predicting sales from advertising spend, where coefficients \beta are estimated via ordinary least squares, providing interpretable impacts while controlling for trends via inclusion of time as a variable.

Drift and deterministic methods focus on extrapolating observed trends without assuming complex stochastic processes, treating the series as following a linear path. The drift method estimates a constant rate of change from the overall series slope, forecasting as \hat{y}_{T+h|T} = y_T + h \frac{y_T - y_1}{T-1}, effectively extending a straight line from the first to the last observation. Deterministic linear trend models fit y_t = \beta_0 + \beta_1 t + \epsilon_t via regression, using the fitted line for extrapolation, which suits long-term projections in stable environments like population growth. These methods, simple extensions of naïve baselines, are robust for trending data and avoid overfitting in sparse datasets.
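A hedged sketch of the ARIMA and drift methods above, using the open-source statsmodels library (an assumption; the text names no particular software). The series and the (1,1,1) order are illustrative, not a model recommendation:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic random walk with drift, a series type ARIMA handles well.
rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0.2, 1.0, 200))

h = 10
fit = ARIMA(y, order=(1, 1, 1)).fit()   # identification/diagnostics omitted
arima_fc = fit.forecast(steps=h)

# Drift method: extend the straight line joining the first and last points,
# \hat{y}_{T+h|T} = y_T + h * (y_T - y_1) / (T - 1)
T = len(y)
drift_fc = y[-1] + np.arange(1, h + 1) * (y[-1] - y[0]) / (T - 1)

print("ARIMA(1,1,1):", np.round(arima_fc[:3], 2))
print("Drift:       ", np.round(drift_fc[:3], 2))
```

In a full Box-Jenkins workflow the order would be chosen from ACF/PACF plots and residual diagnostics rather than fixed in advance as done here.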

Machine Learning and AI Methods

Machine learning and artificial intelligence methods in forecasting leverage algorithms that identify complex, non-linear patterns in high-dimensional data, offering advantages over traditional statistical approaches that often rely on linear, parametric assumptions. These techniques excel in handling large-scale, unstructured datasets where relationships between variables are intricate and non-stationary, such as in financial markets or sensor networks. By learning hierarchical representations from data, ML models can capture temporal dependencies and interactions that simpler models overlook, leading to improved accuracy in scenarios with abundant computational resources.

Neural networks, particularly long short-term memory (LSTM) architectures, are widely used for sequential data forecasting due to their ability to manage long-term dependencies through recurrent layers equipped with gating mechanisms that regulate information flow. Introduced in 1997, LSTMs address the vanishing gradient problem in standard recurrent neural networks by incorporating input, forget, and output gates, enabling effective modeling of sequences with lags exceeding 1,000 steps. In forecasting applications, LSTMs have demonstrated superior performance in predicting volatile sequences, such as stock prices or energy demand, by preserving historical context without exponential error decay.

Ensemble methods aggregate multiple weak learners to enhance prediction robustness, with random forests and boosting algorithms like XGBoost being prominent for their handling of feature importance and non-linear interactions. Random forests, proposed in 2001, construct numerous decision trees on bootstrapped data subsets with random feature selection, reducing overfitting and providing variable importance metrics that reveal key predictors in forecasts like sales or weather variables. XGBoost, developed in 2016, extends gradient boosting by optimizing tree structures through second-order approximations and regularization, achieving state-of-the-art results in high-dimensional forecasting tasks, such as demand prediction, with up to 10-20% error reductions over single trees in benchmark datasets.

Deep learning extensions, including convolutional neural networks (CNNs) for spatiotemporal data and transformers for long-sequence modeling, further advance forecasting by capturing spatial and temporal hierarchies. CNNs, adapted for traffic forecasting in 2017 models like spatiotemporal recurrent convolutional networks, process grid-like inputs to extract local patterns in dynamic environments, improving short-term predictions by 15-25% over baseline RNNs in urban mobility scenarios. Post-2017 transformer-based innovations, such as Autoformer (2021), decompose series into trend and seasonal components using auto-correlation mechanisms instead of full self-attention, reducing the quadratic complexity of standard attention and yielding accuracy gains of 10-38% on datasets like electricity load.

Recent innovations emphasize privacy and interpretability in ML forecasting. Federated learning, gaining traction since 2020, enables collaborative model training across decentralized devices without sharing raw data, preserving privacy in applications like credit risk prediction; for instance, federated frameworks have reduced data exposure while maintaining forecast accuracy comparable to centralized models.
Explainable AI (XAI) techniques, integrated into forecasting since the early 2020s, use methods like SHAP values to attribute predictions to input features, enhancing trust in black-box models for financial forecasting by quantifying variable contributions and reducing opacity in high-stakes decisions. As of 2025, large language models (LLMs) have emerged as a key advancement in forecasting, particularly for event-based and time series predictions. These models integrate textual data, such as news events, with time series through techniques like event analysis and predictive feedback mechanisms, improving accuracy in volatile scenarios like geopolitical or market events. For example, LLM-driven frameworks enable massive training on diverse datasets for long-term event forecasting, while multimodal approaches combine time series with auxiliary modalities like text or images, achieving notable gains in complex, real-world applications.

Hybrid approaches combine machine learning with traditional statistical methods to leverage their strengths, such as feeding ARIMA residuals into neural networks for refined error correction in non-stationary series. These hybrids, exemplified in 2023 runoff forecasting models, process petabyte-scale datasets by using statistical components for trend and machine learning for non-linear residual modeling, yielding 12-18% improvements in metrics like RMSE over pure machine learning or statistical baselines in environmental and economic contexts. Big data integration in hybrids further scales to real-time applications, such as combining sensor streams with ensemble learners for dynamic urban forecasting.
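A minimal sketch of the hybrid pattern just described, also illustrating the tree-ensemble approach from earlier in this section: ARIMA captures the linear structure, and a random forest trained on lagged residuals models what remains. The data, order, lag count, and hyperparameters are illustrative assumptions, and statsmodels/scikit-learn are assumed libraries rather than ones named in the text:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor

# Synthetic series with linear trend plus a non-linear component.
rng = np.random.default_rng(3)
t = np.arange(300)
y = 0.05 * t + np.sin(t / 8.0) ** 2 + rng.normal(0, 0.1, t.size)

train, test = y[:280], y[280:]
fit = ARIMA(train, order=(2, 1, 1)).fit()
resid = fit.resid  # in-sample one-step errors the ML component will learn

# Supervised dataset: five lagged residuals -> next residual.
n_lags = 5
X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
z = resid[n_lags:]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, z)

# One-step hybrid forecast = ARIMA forecast + predicted residual correction.
arima_fc = fit.forecast(steps=1)[0]
resid_fc = rf.predict(resid[-n_lags:].reshape(1, -1))[0]
print(f"hybrid 1-step forecast: {arima_fc + resid_fc:.3f}  actual: {test[0]:.3f}")
```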

Evaluation and Accuracy

Measures of Forecast Accuracy

Forecast accuracy measures quantify the discrepancy between predicted values and actual outcomes, enabling the evaluation of forecasting models across various domains. These metrics are essential for comparing model performance, selecting appropriate methods, and guiding improvements in predictive systems. They can be broadly categorized into point forecast errors, which assess single-value predictions, and probabilistic metrics, which evaluate distributions or intervals. Selection of a metric depends on the scale of the data, the error sensitivity desired, and whether relative or absolute comparison is needed.

Scale-dependent error metrics provide absolute measures of forecast error but are not comparable across series with different units or scales. The Mean Absolute Error (MAE) calculates the average magnitude of errors without considering their direction, defined as \text{MAE} = \frac{1}{n} \sum_{t=1}^{n} |y_t - \hat{y}_t|, where y_t is the actual value, \hat{y}_t is the forecast, and n is the number of observations; it is intuitive, and the point forecast that minimizes it is the median of the predictive distribution. The Mean Squared Error (MSE) emphasizes larger errors by squaring deviations, given by \text{MSE} = \frac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t)^2, and is minimized by the mean of the predictive distribution, though its units are squared, complicating interpretation. The Root Mean Squared Error (RMSE), the square root of MSE, \text{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t)^2}, retains the original data units and is widely used for its balance of sensitivity to outliers and interpretability.

Percentage-based errors offer scale-independent assessments but introduce challenges with certain data characteristics. The Mean Absolute Percentage Error (MAPE) expresses errors as percentages of actual values, \text{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{y_t - \hat{y}_t}{y_t} \right|, facilitating comparisons across series; however, it is undefined when y_t = 0 and becomes unstable or misleading for small non-zero y_t, as percentage errors amplify disproportionately. To address scaling issues while benchmarking against simple models, the Mean Absolute Scaled Error (MASE) normalizes absolute errors by the mean absolute error of a naive forecast (one-step ahead using the previous observation), \text{MASE} = \frac{1}{n} \sum_{t=1}^{n} \frac{|y_t - \hat{y}_t|}{\frac{1}{n-1} \sum_{t=2}^{n} |y_t - y_{t-1}|}, where values below 1 indicate superior performance to the naive method, making it robust for non-seasonal data and adaptable to seasonal series via seasonal naive benchmarks.

Relative performance metrics contextualize accuracy against baselines. Theil's U statistic compares the RMSE of a model to that of a naive no-change forecast (where each future value equals the last observed value), computed as U = \frac{\sqrt{\frac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t)^2}}{\sqrt{\frac{1}{n} \sum_{t=1}^{n} (y_t - y_{t-1})^2}}, with U < 1 signifying better accuracy than the naive approach, U = 1 equivalent performance, and U > 1 inferior results; it decomposes into bias, variance, and covariance components for deeper analysis.
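The point-accuracy measures defined above translate directly into a few lines of code. A minimal sketch, using toy actuals and forecasts chosen purely for illustration:

```python
import numpy as np

# Toy actuals (y) and forecasts (f); values are illustrative only.
y = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
f = np.array([110.0, 120.0, 128.0, 131.0, 125.0, 130.0])

mae = np.mean(np.abs(y - f))
rmse = np.sqrt(np.mean((y - f) ** 2))
mape = 100 * np.mean(np.abs((y - f) / y))     # undefined if any y == 0
naive_mae = np.mean(np.abs(np.diff(y)))       # one-step naive benchmark
mase = mae / naive_mae                        # < 1 beats the naive method
# Theil's U: model RMSE over no-change-forecast RMSE (the denominator here
# uses the n-1 observed one-step changes as the naive errors).
theil_u = rmse / np.sqrt(np.mean(np.diff(y) ** 2))

for name, val in [("MAE", mae), ("RMSE", rmse), ("MAPE %", mape),
                  ("MASE", mase), ("Theil U", theil_u)]:
    print(f"{name:8s} {val:6.3f}")
```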
For probabilistic forecasts, which provide uncertainty intervals or distributions rather than point estimates, evaluation focuses on reliability and informativeness. Calibration assesses whether predicted probabilities match observed frequencies, such that events assigned a probability p occur approximately a proportion p of the time; reliable calibration ensures trustworthiness in the forecast's confidence levels. Sharpness measures the concentration or narrowness of the predictive distribution, independent of outcomes, with sharper (more precise) forecasts preferred provided they remain well-calibrated, as overly broad distributions convey little new information.

Validation and Testing Techniques

Validation and testing techniques in forecasting ensure that models generalize well to unseen data, particularly in time series contexts where temporal dependencies must be preserved to simulate real-world scenarios. These methods emphasize chronological handling to prevent lookahead bias, where future information inadvertently influences model training. Key approaches include partitioning data into training and test sets, specialized cross-validation procedures, backtesting simulations, out-of-sample evaluations, and resampling techniques like the bootstrap for uncertainty quantification.

A fundamental step in model validation is splitting the dataset into training and test sets, with the division performed chronologically to maintain the temporal order of observations. For time series data, this typically involves allocating earlier periods to training (e.g., the first 80% of the data) and reserving later periods for testing (the remaining 20%), adhering to rules like the 80/20 split to mimic prospective forecasting. This chronological partitioning avoids lookahead bias, ensuring that the model does not "see" future values during training, which could otherwise inflate performance estimates unrealistically.

Cross-validation adapts traditional k-fold methods for time series by respecting temporal structure, using variants such as rolling-origin (also known as sliding window) and expanding window approaches. In the expanding window method, the training set grows incrementally from the initial observations, while the test set advances one step at a time; the model is refit on the expanding past data and evaluated on the subsequent future period. The rolling-origin variant maintains a fixed-size window that slides forward, refitting the model at each origin to forecast the next horizon, providing multiple out-of-sample evaluations across the series. These techniques, recommended for their ability to utilize all available data without violating temporal order, have been shown to outperform non-temporal cross-validation in predictor evaluation for forecasting tasks.

Backtesting simulates a model's historical performance by iteratively applying it to past data in a forward manner, often incorporating walk-forward optimization to assess robustness. In this process, the dataset is divided into in-sample periods for parameter optimization followed by out-of-sample periods for validation, with the window advancing chronologically to generate a sequence of forecasts and evaluate them against actual outcomes. Walk-forward optimization specifically optimizes parameters on a trailing in-sample window, then tests on the immediately following out-of-sample window, repeating across the data to detect overfitting and ensure the strategy's adaptability over time. This method is particularly valuable in dynamic environments like financial forecasting, where it bridges historical evaluation and prospective deployment.

Out-of-sample testing complements these procedures by exclusively evaluating models on data withheld from training and model selection, serving as a critical safeguard against overfitting. By forecasting on unseen future periods after initial fitting, this technique measures true predictive accuracy, revealing discrepancies between in-sample fit and out-of-sample performance. Empirical studies demonstrate that out-of-sample comparisons can mitigate biases, with superior performance on holdout sets indicating more reliable models for deployment.
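An expanding-window (rolling-origin) evaluation reduces to a simple loop. This sketch uses the naive forecast as the "model" purely for illustration; the synthetic series and minimum training size are assumptions:

```python
import numpy as np

# Expanding-window cross-validation: refit on everything up to each origin,
# forecast one step ahead, and average the absolute error across origins.
rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(0.1, 1.0, 120))

min_train = 60
errors = []
for origin in range(min_train, len(y) - 1):
    train = y[: origin + 1]   # only data up to the origin (no lookahead)
    forecast = train[-1]      # naive one-step forecast as a stand-in model
    actual = y[origin + 1]
    errors.append(abs(actual - forecast))

print(f"expanding-window one-step MAE over {len(errors)} origins: "
      f"{np.mean(errors):.3f}")
```

Swapping a real model in for the naive stand-in only changes the `forecast = ...` line; the chronological structure of the loop is what prevents lookahead bias.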
Bootstrap methods provide a nonparametric way to estimate confidence intervals for forecast errors by resampling the residuals from a fitted model to generate synthetic series. This involves drawing bootstrap samples with replacement from the observed residuals, refitting the model to create perturbed forecasts, and computing the distribution of errors to derive intervals (e.g., 95% coverage). Unlike parametric approaches, bootstrapping accommodates the non-normal error distributions common in time series, offering robust uncertainty estimates without strong distributional prerequisites. In practice, it enhances prediction intervals by averaging over multiple resampled paths, improving coverage accuracy in empirical evaluations.
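A minimal residual-bootstrap sketch for a one-step prediction interval, under stated assumptions: the "model" is a crude least-squares AR(1) fit (intercept ignored), and the data are synthetic:

```python
import numpy as np

# Simulate an AR(1) series y[t] = 0.7 * y[t-1] + noise.
rng = np.random.default_rng(5)
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal(0, 1)

phi = np.polyfit(y[:-1], y[1:], 1)[0]   # crude AR(1) coefficient estimate
resid = y[1:] - phi * y[:-1]            # in-sample residuals

# Perturb the point forecast with residuals resampled with replacement to
# build an empirical one-step forecast distribution.
n_boot = 2000
point = phi * y[-1]
paths = point + rng.choice(resid, size=n_boot, replace=True)
lo, hi = np.percentile(paths, [2.5, 97.5])
print(f"one-step forecast {point:.2f}, 95% bootstrap interval "
      f"[{lo:.2f}, {hi:.2f}]")
```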

Advanced Topics

Handling Seasonality and Cycles

Seasonality refers to predictable fluctuations in time series data that recur at fixed intervals, such as quarterly spikes in retail sales due to holiday periods. These patterns are typically driven by calendar-related factors like holidays, weather, or fiscal quarters, and they can be isolated through decomposition techniques that break down the series into trend (T_t), seasonal (S_t), and irregular or remainder (R_t) components. In additive decomposition, suitable for series where seasonal variations are constant in magnitude, the model is expressed as y_t = T_t + S_t + R_t; this approach assumes that the seasonal effect adds a fixed amount to the trend regardless of the overall level. Conversely, multiplicative decomposition, appropriate for series where seasonal variations grow proportionally with the trend (e.g., percentage increases), uses y_t = T_t \times S_t \times R_t.

Cyclic behavior in forecasting involves longer-term, non-fixed oscillations in data that do not align with a specific period, such as economic booms and busts lasting several years. Unlike seasonality, cycles have variable lengths and amplitudes, often spanning 3 to 10 years for business cycles, though longer waves like the Kondratiev waves—hypothesized by Nikolai Kondratiev in the 1920s—extend 40 to 60 years and are linked to major technological innovations driving sustained prosperity followed by decline, although the theory remains controversial and is not widely accepted among economists. These cycles complicate forecasting by introducing irregular turning points that require models to capture underlying economic or structural shifts rather than rigid periodicity.

Key modeling approaches for seasonality include the Seasonal Autoregressive Integrated Moving Average (SARIMA) model, which extends the ARIMA framework by incorporating seasonal autoregressive (P), differencing (D), and moving average (Q) terms at a specified lag s (e.g., s=12 for monthly data), denoted as SARIMA(p,d,q)(P,D,Q)_s. Developed in the Box-Jenkins methodology, SARIMA accounts for both non-seasonal and seasonal non-stationarities through differencing, making it effective for univariate series with clear periodic patterns. For more flexible periodic components, Fourier analysis decomposes the series into sine and cosine terms at various frequencies, approximating complex seasonality as a sum of harmonics: s_t = \sum_{k=1}^K \left( a_k \cos\left(\frac{2\pi k t}{s}\right) + b_k \sin\left(\frac{2\pi k t}{s}\right) \right), where K is the number of terms and s is the seasonal period. This trigonometric representation is particularly useful when seasonality varies in shape or multiple periods overlap, as in daily data with weekly and annual cycles.

Detection of seasonality and cycles often relies on autocorrelation function (ACF) and partial autocorrelation function (PACF) plots, which reveal significant correlations at seasonal lags (e.g., spikes at lag 12 in monthly data indicating yearly patterns). In ACF plots, strong positive autocorrelations decaying gradually at multiples of the seasonal period suggest non-stationary seasonality, while PACF spikes at seasonal lags help identify the order of seasonal AR terms after differencing. For cycles, broader ACF patterns with slower decay beyond seasonal lags can indicate longer-term dependencies, guiding model selection.

Representative examples include holiday effects in retail sales forecasting, where multiplicative decomposition captures surging sales during end-of-year periods that scale with overall economic trends, and diurnal cycles in electricity demand, where daily peaks in consumption (e.g., evening hours due to residential usage) are modeled via Fourier terms to predict load variations.
Seasonal-adjustment procedures can complement these approaches, though detailed implementations fall under broader decomposition techniques.
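A hedged sketch of the two modeling approaches above, fitting a SARIMA model with statsmodels (an assumed library) and constructing Fourier seasonal terms from the harmonic formula. The synthetic monthly data and the (1,1,1)(1,1,1,12) orders are illustrative, not recommendations:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly series: trend + annual cycle + noise (illustrative).
rng = np.random.default_rng(6)
t = np.arange(144)
y = 50 + 0.3 * t + 12 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

# SARIMA(1,1,1)(1,1,1)_12: seasonal terms at lag 12 for monthly data.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
res = model.fit(disp=False)
print(np.round(res.forecast(steps=12)[:6], 1))  # next half-year of forecasts

# Alternative: K harmonics of sin/cos at the seasonal period, per the
# Fourier formula above; these columns can serve as exogenous regressors,
# e.g., SARIMAX(y, order=(1, 1, 1), exog=fourier).
K, m = 2, 12
fourier = np.column_stack(
    [f(2 * np.pi * k * t / m) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
```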

Limitations and Challenges

Forecasting faces fundamental limitations arising from the inherently stochastic and chaotic nature of many systems, particularly those governed by nonlinear dynamics. In weather prediction, for instance, the Lorenz system—a set of differential equations modeling atmospheric convection—demonstrates how small perturbations in initial conditions can lead to exponentially diverging trajectories, rendering long-term forecasts practically impossible beyond approximately two weeks. This sensitivity, often termed the butterfly effect, underscores the theoretical bounds on predictability in deterministic yet chaotic systems, where even minuscule errors in measurement amplify over time.

Data-related challenges further exacerbate these issues, including non-stationarity, where statistical properties like mean and variance change over time, violating assumptions in many forecasting models and leading to unreliable extrapolations. Missing data, common in real-world datasets due to sensor failures or incomplete records, introduces additional biases and reduces model robustness, often resulting in imputed values that propagate errors through predictions. Moreover, black swan events—rare, high-impact occurrences that defy probabilistic expectations—pose insurmountable challenges, as traditional models underestimate their likelihood and fail to anticipate their consequences, a concept central to critiques of overreliance on historical patterns.

Biases and overfitting represent practical hurdles in both human-driven and algorithmic approaches. Judgmental forecasting is prone to cognitive biases, such as availability bias and anchoring, where forecasters overweight recent or salient information, systematically skewing estimates away from objective probabilities. In machine learning models, overfitting occurs when algorithms capture noise rather than underlying patterns, yielding high accuracy on training data but poor generalization to new scenarios, particularly in volatile environments. Structural breaks—abrupt shifts in data-generating processes, like policy regime changes—compound these problems by invalidating model parameters post-event, leading to biased forecasts if undetected.

Ethical concerns amplify the risks of forecasting, especially with AI integration. Privacy violations arise from the vast personal data required for accurate forecasts, raising issues of consent and surveillance in applications like predictive policing or health trend modeling. Misuse in policy contexts can perpetuate discrimination, as biased algorithms—trained on skewed datasets—disproportionately target marginalized groups, exacerbating inequalities in policing or social services.

Domain-specific constraints, such as those in climate modeling, highlight computational boundaries tied to the governing equations. Even with modern exascale supercomputers (as of 2025), solving the Navier-Stokes equations at sufficient resolution limits ensemble sizes to hundreds of members, constraining the sampling of uncertainty and probabilistic projections in global simulations. These limits persist despite hardware advances, as the exponential growth in required compute power outpaces hardware scaling for high-fidelity projections.

Improvements and Future Directions

Strategies for Enhancing Forecasts

Ensemble forecasting involves combining predictions from multiple models to improve overall accuracy and reduce errors, particularly by mitigating the variance inherent in individual models. Techniques such as simple averaging or weighted ensembles aggregate outputs from diverse forecasting methods, leading to more stable predictions by balancing out idiosyncratic errors. For instance, in time series applications, ensembles have been shown to outperform single models by extracting complementary strengths and weakening structural assumptions, as demonstrated in practical forecasting implementations (a minimal combination sketch appears at the end of this section).

Updating and feedback mechanisms enhance forecasts through adaptive processes that incorporate new data in real time, allowing models to evolve with changing conditions. Bayesian updating, a key approach, revises prior probabilities based on incoming evidence to produce posterior distributions that reflect updated beliefs, enabling more responsive predictions in dynamic environments. This method has been applied effectively in structural reliability assessments and financial forecasting, where it adjusts model parameters iteratively to minimize discrepancies between predictions and observations.

Superforecasting techniques, developed through extensive research on probabilistic judgment, emphasize skills like breaking down complex questions into tractable components, actively seeking disconfirming evidence, and aggregating judgments to outperform average forecasters. Philip Tetlock's work, including the Good Judgment Project, identified "superforecasters" who achieved roughly 30% better accuracy than typical experts by fostering a mindset of continuous updating and numerical precision in estimates. These methods, detailed in Tetlock's 2015 analysis, promote probabilistic thinking over binary outcomes and collaborative deliberation to refine forecasts systematically.

Futarchy and prediction markets leverage economic incentives to elicit accurate forecasts by allowing participants to bet on outcomes, aggregating dispersed information through market prices that reflect implied probabilities. The Iowa Electronic Markets (IEM), operational since 1988, exemplify this by trading contracts on events like elections, where market predictions have surpassed polling accuracy in 74% of cases across U.S. presidential races from 1988 to 2004. This incentive-aligned aggregation reduces biases and enhances reliability, as traders' financial stakes encourage informed participation and rapid information incorporation.

Training programs for forecasters focus on calibration exercises to align subjective confidence levels with actual accuracy, alongside software tools that support rigorous analysis. Calibration training involves repeated practice with feedback on past estimates, helping experts avoid overconfidence; studies show it improves probabilistic judgments, with trained participants achieving near-perfect alignment after repeated sessions. Software like R's forecast package aids this by providing automated tools for modeling, including exponential smoothing and ARIMA, enabling users to generate, evaluate, and refine forecasts efficiently.

One of the most promising advancements in forecasting involves the integration of quantum computing for complex optimization problems. Early applications have focused on portfolio optimization, where quantum annealing systems like those from D-Wave enable faster processing of vast combinatorial datasets compared to classical methods. For instance, pilots since 2023 have demonstrated quantum processors handling real-world financial forecasting scenarios for dynamic portfolio allocation.
Business leaders anticipate significant returns, with over 25% expecting at least $5 million in ROI within the first year of adopting quantum optimization for predictive tasks.

Sustainable forecasting has gained traction through climate-resilient models that incorporate environmental, social, and governance (ESG) data to enhance long-term predictions. These models use machine learning to integrate ESG metrics with traditional time series, enabling better assessment of risks like supply chain disruptions from climate events. A key initiative is the United Nations' AI for Climate Action Innovation Factory, launched in 2024, which promotes AI-driven tools for the Sustainable Development Goals (SDGs), including improved forecasting for renewable energy deployment and emissions tracking under SDG 13 (climate action). Frameworks for evaluating AI-enabled ESG performance further support this by fusing time series with multimodal inputs to predict sustainability outcomes.

Real-time and edge AI are revolutionizing IoT-enabled predictions, particularly in demand and maintenance forecasting, by processing data at the source to minimize latency. Post-2022 developments in 5G infrastructure have accelerated this, allowing edge devices to generate instantaneous forecasts for applications such as energy management and equipment monitoring. For example, 5G-edge integrations enable proactive maintenance, with continuous monitoring in industrial settings. The edge AI market, projected to grow from $20.78 billion in 2024 to $66.47 billion by 2030, underscores this shift, driven by IoT proliferation and low-latency needs.

Multimodal data fusion represents a leap in handling diverse inputs for forecasting, combining text, images, numerics, and time series to capture nuanced patterns. Since 2023, GPT-like large language models have been adapted for narrative forecasting, where textual descriptions and visual data inform probabilistic predictions. A notable approach uses modality-specific experts in unified architectures to model interleaved text and time series, improving accuracy in domains like finance. This fusion extends to tabular and textual data, enabling generative models to synthesize forecasts from unstructured sources.

Globally, collaborative forecasting platforms have surged post-pandemic, emphasizing hybrid methods that blend human expertise with artificial intelligence. Good Judgment Open, expanded in 2024 with new forecasting challenges, has engaged thousands in crowd-sourced predictions, outperforming benchmarks through collaborative teams. These platforms address gaps in traditional models by incorporating diverse, real-time inputs, fostering resilient hybrid approaches for uncertain environments.
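To make the ensemble-combination idea from the start of this section concrete, here is a minimal sketch that weights each model inversely to its recent validation error. The three model forecasts and their validation MAEs are illustrative stand-ins, not values from any source:

```python
import numpy as np

# Inverse-error weighted forecast combination (illustrative inputs).
val_mae = np.array([2.1, 1.4, 3.0])         # recent MAE of models A, B, C
forecasts = np.array([105.0, 101.5, 98.0])  # their forecasts for next period

weights = (1.0 / val_mae) / np.sum(1.0 / val_mae)  # normalize to sum to 1
combined = np.dot(weights, forecasts)

print("weights: ", np.round(weights, 3))
print(f"combined forecast: {combined:.2f}")
# A plain average (equal weights) is a strong default when validation
# errors are noisy, consistent with the combining results cited earlier
# in this article (e.g., the M3-Competition findings).
```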

  28. [28]
    The Failure to Forecast the Great Recession
    Nov 25, 2011 · The staff forecasts of real activity (unemployment and real GDP growth) for 2008-09 had unusually large forecast errors relative to the ...
  29. [29]
    Quantitative easing and housing inflation post-COVID | Brookings
    Oct 8, 2025 · This paper examines the impact of quantitative easing undertaken by the Federal Reserve from 2020 ... Post-2020 Inflation Forecast Errors ...
  30. [30]
    Purchasing Managers' Index™ (PMI®) - S&P Global
    PMI data are factual indicators of global economic health based on monthly surveys of business executives covering 45 economies and 30 sectors.Missing: CPI | Show results with:CPI
  31. [31]
    Numerical Weather Prediction
    Numerical Weather Prediction (NWP) uses computer models to process current weather observations to forecast future weather, including temperature and ...
  32. [32]
    [PDF] Chaos and weather prediction January 2000 - ECMWF
    The weather is a chaotic system. Small errors in the initial conditions of a forecast grow rapidly, and affect predictability.
  33. [33]
    Ensemble forecasting | ECMWF
    Abstract, Numerical weather prediction models as well as the atmosphere itself can be viewed as nonlinear dynamical systems in which the evolution depends ...
  34. [34]
    Summary for Policymakers — Global Warming of 1.5 ºC
    An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context ...
  35. [35]
    The challenges of modeling and forecasting the spread of COVID-19
    Modeling and forecasting the spread of COVID-19 remains a challenge. Here, we detail three regional-scale models for forecasting and assessing the course of ...
  36. [36]
    Forecasting natural hazards, performance of scientists, ethics, and ...
    In this paper, I present ideas and considerations on problems that scientists face when attempting to predict natural hazards: landslides in my case.
  37. [37]
    Satellites
    ### Summary of Satellite Imagery Use in Weather and Environmental Forecasting
  38. [38]
    Environmental monitoring: blending satellite and surface data
    To make new leaps in understanding environmental change and to improve prediction we must find intelligent ways to combine satellite data with surface sensors ...
  39. [39]
    Lorenz and the Butterfly Effect - American Physical Society
    A mathematician turned meteorologist named Edward Lorenz made a serendipitous discovery that subsequently spawned the modern field of chaos theory.
  40. [40]
    ECMWF Activities for Improved Hurricane Forecasts in - AMS Journals
    ECMWF tropical cyclone forecasts have improved over the past two decades, both in terms of track error and intensity measured by the central pressure (Yamaguchi ...
  41. [41]
    Immigration is projected to be the main driver of population growth in ...
    Immigration is projected to be the main driver of population growth in 52 countries and areas through 2054 and in 62 through 2100.
  42. [42]
    World Population Prospects 2024
    The 2024 revision also presents population projections to the year 2100 that reflect a range of plausible outcomes at the global, regional and national levels.
  43. [43]
    Forecasting the Presidential Election: What can we learn from the ...
    In the 12 presidential elections since 1948, for example, the leader in June Gallup Polls won 7 times and lost 5. But, as the successful forecasting models have ...
  44. [44]
    (PDF) Consumer Behavior Prediction and Market Application ...
    Aug 9, 2025 · It examines the junction of consumer behavior prediction and market application exploration using social network data analysis.
  45. [45]
    Crime rate prediction in the urban environment using social factors
    During this research we studied three types of predictive models: linear regression, logistic regression and gradient boosting.
  46. [46]
    Pitfalls of Predictive Policing: An Ethical Analysis
    Feb 17, 2022 · The current uses of predictive policing violate the ethical framework of justice and fairness because they perpetuate systemic racism through ...
  47. [47]
    Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries
    Data biases are often evaluated by comparing a data sample with reference samples drawn from different sources or contexts. Thus, data bias is rather a relative ...
  48. [48]
    68% of the world population projected to live in urban areas by 2050 ...
    68% of the world population projected to live in urban areas by 2050, says UN. Today, 55% of the world's population lives in urban areas, a proportion that is ...
  49. [49]
    Remote/Hybrid Work/In-Office Trends and Forecast
    In 2020, we forecast 35-40% of the U.S. workforce would be remotely one or more days a week after the pandemic. That estimate was correct.
  50. [50]
    [PDF] Best Methods and Practices in Judgmental Forecasting - SOA
    Jul 8, 2010 · Perhaps the most common judgmental forecasting method is to ask the opinion of an expert. Although common, this method is perhaps the most error ...
  51. [51]
    Delphi Method | RAND
    The Delphi method was developed by RAND in the 1950s to forecast the effect of technology on warfare. It has since been applied to health care, education, ...
  52. [52]
    [PDF] Delphi Assessment: Expert Opinion, Forecasting, and Group Process
    The Delphi technique originated at The Rand Corporation in the late 1940s as a systematic method for eliciting expert opinion on a variety of topics, including ...
  53. [53]
    Scenarios: Uncharted Waters Ahead
    Since the early 1970s, however, forecasting errors have become more frequent and occasionally of dramatic and unprecedented magnitude.
  54. [54]
    What are the previous Shell scenarios?
    We have been developing possible visions of the future since the 1970s, helping generations of Shell leaders explore ways forward and make better decisions.
  55. [55]
    None
    Summary of each segment:
  56. [56]
    Cognitive Bias Mitigation in Executive Decision-Making - MDPI
    Structured decision-making techniques such as pre-mortem analyses force consideration of multiple scenarios beyond those readily available to memory [56].
  57. [57]
    Forecasting by analogy using the web search traffic - ScienceDirect
    Forecasting by analogy involves a systematic comparison of a technology to be forecast with some earlier technology that is believed to have been similar in all ...Missing: seminal | Show results with:seminal
  58. [58]
    [PDF] Unit 2 Using sales forecasting notes - WJEC
    Extrapolation is a qualitative forecasting method used when historical data is not available for time series forecasts. Qualitative forecasting methods are ...<|separator|>
  59. [59]
    Enhance geopolitical risk assessment with this strategy
    Feb 21, 2025 · By integrating scenario planning with emerging world identification, we can improve how we perceive and prepare for geopolitical risks.Sebastian Petric · Expected Outcomes · Global Risks Report: The Big...
  60. [60]
    [PDF] Strategic Planning and Forecasting Fundamentals
    Rather than seeking commitment to the plan, top management sometimes uses planning as a way to gain control over others.<|control11|><|separator|>
  61. [61]
    Forecasting Sales by Exponentially Weighted Moving Averages
    The paper presents a method of forecasting sales using exponentially weighted moving averages, which is quick, cheap, and responsive to changing conditions.
  62. [62]
    Time series analysis; forecasting and control : Box, George E. P
    Apr 8, 2019 · Time series analysis; forecasting and control ; Publication date: 1970 ; Topics: Feedback control systems -- Mathematical models, Prediction ...
  63. [63]
    Statistical and Machine Learning forecasting methods
    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting.
  64. [64]
    [PDF] LONG SHORT-TERM MEMORY 1 INTRODUCTION
    For instance, in his postdoctoral thesis. (1993), Schmidhuber uses hierarchical recurrent nets to rapidly solve certain grammar learning tasks involving minimal ...
  65. [65]
    [PDF] 1 RANDOM FORESTS Leo Breiman Statistics Department University ...
    A recent paper (Breiman [2000]) shows that in distribution space for two class problems, random forests are equivalent to a kernel acting on the true margin.
  66. [66]
    [1603.02754] XGBoost: A Scalable Tree Boosting System - arXiv
    Mar 9, 2016 · In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art ...
  67. [67]
    Spatiotemporal Recurrent Convolutional Networks for Traffic ... - NIH
    Motivated by the success of CNNs and LSTMs, this paper proposes a spatiotemporal image-based approach to predict the network-wide traffic state using ...
  68. [68]
    Decomposition Transformers with Auto-Correlation for Long-Term ...
    Jun 24, 2021 · This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the ...
  69. [69]
    Federated machine learning for privacy preserving, collective supply ...
    We propose a federated learning approach for collective risk prediction without the risk of data exposure.
  70. [70]
    A Survey of Explainable Artificial Intelligence (XAI) in Financial Time ...
    Jul 22, 2024 · This survey categorizes XAI approaches for financial time series forecasting, aiming to make AI models more understandable and provides a ...
  71. [71]
    Hybrid Statistical and Machine Learning Methods for Daily ... - MDPI
    This paper proposes hybridizations of ML and autoregressive integrated moving average (ARIMA) models to provide a more accurate and general forecasting model ...
  72. [72]
    5.8 Evaluating point forecast accuracy | Forecasting: Principles and Practice (3rd ed)
    ### Summary of Forecast Accuracy Measures from *Forecasting: Principles and Practice (3rd ed.)* (https://otexts.com/fpp3/accuracy.html)
  73. [73]
    U Statistic: Definition, Different Types; Theil's U
    Theil proposed two U statistics, used in finance. The first (U1) is a measure of forecast accuracy (Theil, 1958, pp 31-42); The second (U2) is a measure of ...
  74. [74]
    5.10 Time series cross-validation | Forecasting - OTexts
    Time series cross-validation uses single observation test sets, with training sets prior to the test, and forecast accuracy is averaged over these test sets.Missing: definition | Show results with:definition
  75. [75]
    On the use of cross-validation for time series predictor evaluation
    In this paper, we reviewed the methodology of evaluation in traditional forecasting and in regression and machine learning methods used for time series.
  76. [76]
    Walk-Forward Optimization (WFO) - QuantInsti Blog
    Mar 12, 2025 · Learn how Walk-Forward Optimization (WFO) works, its limitations, and how to implement it for backtesting trading strategies.
  77. [77]
    Can out‐of‐sample forecast comparisons help prevent overfitting?
    Mar 3, 2004 · This paper shows that out-of-sample forecast comparisons can help prevent data mining-induced overfitting. The basic results are drawn from ...
  78. [78]
    5.5 Distributional forecasts and prediction intervals - OTexts
    When a normal distribution for the residuals is an unreasonable assumption, one alternative is to use bootstrapping, which only assumes that the residuals are ...
  79. [79]
    3.2 Time series components | Forecasting: Principles and Practice ...
    For an additive decomposition, the seasonally adjusted data are given by yt−St y t − S t , and for multiplicative data, the seasonally adjusted values are ...
  80. [80]
    Kondratieff Waves: Definition, Past Cycles, How They Work
    Jul 9, 2025 · A Kondratieff Wave is a long-term economic cycle believed to be born out of technological innovation, which results in a long period of prosperity.
  81. [81]
    [PDF] The Box-Jenkins Method - NCSS
    Box - Jenkins Analysis refers to a systematic method of identifying, fitting, checking, and using integrated autoregressive, moving average (ARIMA) time ...Missing: 1970 | Show results with:1970
  82. [82]
    12.1 Complex seasonality | Forecasting: Principles and Practice (3rd ...
    With multiple seasonalities, we can use Fourier terms as we did in earlier chapters (see Sections 7.4 and 10.5). Because there are multiple seasonalities, we ...
  83. [83]
    2.8 Autocorrelation | Forecasting: Principles and Practice (3rd ed)
    Trend and seasonality in ACF plots. When data have a trend, the autocorrelations for small lags tend to be large and positive because observations nearby in ...
  84. [84]
    Missing data is poorly handled and reported in prediction model ...
    Missing data are often poorly handled and reported, even when adopting advanced machine learning methods for which advanced imputation procedures are available.
  85. [85]
    Biases in judgmental adjustments of statistical forecasts
    This paper considers three types of bias: (1) optimism bias, (2) anchoring bias, and (3) overreaction bias. We explore the effects of particular individual ...
  86. [86]
    Ethical concerns mount as AI takes bigger decision-making role
    Oct 26, 2020 · AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest ...
  87. [87]
    How Artificial Intelligence Can Deepen Racial and Economic ...
    Jul 13, 2021 · There is ample evidence of the discriminatory harm that AI tools can cause to already marginalized groups. After all, AI is built by humans ...
  88. [88]
    Achievements in atmospheric sciences by the large-ensemble and ...
    Aug 6, 2025 · This article reviews the outcomes of a three-year project utilizing "Fugaku," Japan's flagship supercomputer, to conduct high-resolution ...
  89. [89]
    Large ensemble climate model simulations - ESD
    Apr 22, 2021 · Single model initial-condition large ensembles (SMILEs) are valuable tools that can be used to investigate the climate system.Missing: limits fluid post-
  90. [90]
  91. [91]
    How DoorDash Built an Ensemble Learning Model for Time Series ...
    Jun 20, 2023 · Making forecasts from multiple models weakens the imposed model structure assumptions from each single model; using an ensemble model extracts ...
  92. [92]
    A Probabilistic Framework for Bayesian Adaptive Forecasting of ...
    Feb 2, 2007 · An adaptive Bayesian updating method is used to assess the unknown model parameters based on recorded data and pertinent prior information.
  93. [93]
  94. [94]
    Superforecasting Explained in Podcasts and Videos - Good Judgment
    Philip Tetlock. This short course covers the foundational principles and techniques of Superforecasting and features discussions with renowned experts ( ...
  95. [95]
    Superforecasting: The art and science of prediction. - APA PsycNet
    In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government ...Missing: techniques | Show results with:techniques
  96. [96]
    Prediction market accuracy in the long run - ScienceDirect
    We compare market predictions to 964 polls over the five Presidential elections since 1988. The market is closer to the eventual outcome 74% of the time.
  97. [97]
    Iowa Electronic Markets: IEM
    Welcome to the IEM! The IEM is an online futures market where contract payoffs are based on real-world events such as political outcomes, ...
  98. [98]
    (PDF) Training for calibration - Academia.edu
    Weather forecasters demonstrate excellent calibration on rain and temperature predictions, highlighting expertise. Calibration training involved 23 sessions ...
  99. [99]
    Automated calibration training for forecasters - Wiley Online Library
    Jun 28, 2023 · In two studies, we investigated the effectiveness of an automated form of calibration training via individualized feedback as a means to improve calibration in ...
  100. [100]
    CRAN: Package forecast
    Apr 8, 2025 · forecast: Forecasting Functions for Time Series and Linear Models. Methods and tools for displaying and analysing univariate time series ...
  101. [101]
    Resources - D-Wave Quantum
    Dynamic Portfolio Optimization with Real Datasets Using Quantum Processors and Quantum-Inspired Tensor Networks.
  102. [102]
    Quantum Computing-The Key to Addressing Today's Complex ...
    This new study published by D-Wave in collaboration with Wakefield Research, highlights the potential for quantum optimization to create value across ...
  103. [103]
    New Study: More Than One-Quarter of Surveyed Business Leaders ...
    Jul 21, 2025 · New Study: More Than One-Quarter of Surveyed Business Leaders Expect Quantum Optimization to Deliver $5M or Higher ROI Within First Year of ...Missing: portfolio forecasts pilots 2023 2024
  104. [104]
    AI for Climate Action Innovation Factory - AI for Good
    The “AI for Climate Action Innovation Factory” is an initiative launched at the AI for Good Summit which took place on 30-31 May 2024, in Geneva, Switzerland.
  105. [105]
    A decision-support framework for evaluating AI-enabled ESG ...
    Jul 4, 2025 · (2024) proposed a new model of ESG performance measurement with fuzzy theory and a multiple logic fuzzy inference system. Thus, their model aims ...
  106. [106]
    Artificial Intelligence and the Sustainable Development Goals
    Apr 30, 2025 · Climate action (SDG 13): AI-driven models improve climate forecasting, support early warning systems and optimize renewable energy deployment.Missing: 2024 | Show results with:2024
  107. [107]
    5G and Edge Computing for Real-Time Supply Chain Automation
    Oct 14, 2025 · Explore how 5G and edge computing enable real-time supply chain automation, robotics, predictive maintenance, and IoT integration.Automated Material Handling... · Network Slicing And Quality... · Companies Driving 5g And...
  108. [108]
    Edge AI Market Size, Share & Growth | Industry Report, 2030
    The global edge AI market size was estimated at USD 20.78 billion in 2024 and is projected to reach USD 66.47 billion by 2030, growing at a CAGR of 21.7% ...
  109. [109]
    [PDF] THE 2025 EDGE AI TECHNOLOGY REPORT | Ceva's IP
    Integrating edge AI with IoT elevates supply chain management from reactive data collection to proactive, intelligent operations. Real-time monitoring,.<|separator|>
  110. [110]
  111. [111]
    Multimodal Data Fusion for Tabular and Textual Data: Zero-Shot ...
    This study introduces the Multimodal Data Fusion (MDF) framework, which fuses tabular data with textual narratives by leveraging advanced Large Language Models ...
  112. [112]
    Good Judgment's 2024 in Review
    Superforecasters always keep score. As we turn to 2025 at Good Judgment Inc, we review 2024 for highlights, statistics, and key developments.
  113. [113]
    Good Judgment® Open
    In the News 2025​​ Test your forecasting mettle with questions about world politics, business, technology, sports, entertainment, and anything else trending in ...Roche AI Challenge · The Economist: The World... · Sign In
  114. [114]
    Good Judgment: See the future sooner with Superforecasting
    The Superforecasters proved 30% more accurate on average than the futures in 2024-2025. The piece also examines Good Judgment's forecasts of Bank of England and ...