
Exponential smoothing

Exponential smoothing is a class of forecasting methods used in time series analysis to predict future values based on a weighted average of past observations, where the weights assigned to older data points decrease exponentially as they become more distant in time. The approach, particularly effective for short-term predictions in data exhibiting no trend or seasonality, is characterized by its simplicity, requiring only a single smoothing parameter, often denoted α (where 0 < α ≤ 1), which controls the emphasis on recent data. The basic formula for simple exponential smoothing (SES) is \hat{y}_{t+1|t} = \alpha y_t + (1 - \alpha) \hat{y}_{t|t-1}, where y_t is the actual value at time t and \hat{y}_{t+1|t} is the forecast for the next period. The origins of exponential smoothing trace back to the mid-20th century, with initial developments during World War II for tracking applications such as fire-control systems. Robert Goodell Brown formalized the method in the 1950s while working for the U.S. Navy, publishing his seminal work Statistical Forecasting for Inventory Control in 1959, which introduced exponential smoothing as a practical tool for demand prediction and inventory management. Brown's approach gained popularity due to its computational efficiency and ability to adapt quickly to changes in data patterns, making it suitable for real-time applications on early computers. Subsequent extensions expanded the method's applicability to more complex time series. In 1957, Charles Holt developed double exponential smoothing to account for linear trends by incorporating a trend component alongside the level, using two parameters: one for the level (α) and one for the trend (β). This allowed forecasts to capture increasing or decreasing patterns in the data, improving accuracy for series with systematic drifts.
Further refinement came in 1960 when Peter Winters proposed triple exponential smoothing, also known as the Holt-Winters method, which adds a seasonal component (parameter γ) to handle periodic fluctuations, making it versatile for series with both trend and seasonality, such as monthly sales or quarterly economic indicators. Exponential smoothing methods remain widely used today in fields like supply chain management, finance, and economics due to their robustness, low computational demands, and strong empirical performance compared to more complex models in many practical scenarios. Modern implementations often integrate state-space formulations, enabling likelihood-based estimation and the quantification of uncertainty through prediction intervals. Despite limitations in capturing long-term structural changes or non-linear patterns, these techniques continue to serve as a foundational tool in the forecasting literature.

Introduction and Fundamentals

Definition and Purpose

Exponential smoothing is a rule-of-thumb technique for smoothing time series data, in which past observations are assigned exponentially decreasing weights as they recede further into the past, thereby emphasizing more recent observations in the forecast. This approach produces forecasts as weighted averages of historical observations, where the weighting scheme decays exponentially with time, enabling the method to adapt quickly to changes in the underlying level while retaining some influence from earlier values. The primary purpose of exponential smoothing is to estimate the underlying level of a time series for predicting future values, making it especially effective for short-term horizons where recent patterns are most relevant. It serves as a practical tool in applications requiring responsive predictions, such as inventory control, by providing unbiased and efficient estimates without assuming complex structures in the data. A key assumption of the basic exponential smoothing model is that the series lacks systematic trend or seasonality, focusing instead on capturing random fluctuations around a stable level; extensions of the method address these features in more complex scenarios. The workflow involves iteratively updating a single smoothed estimate with each new observation, blending the incoming data point with the prior smoothed value to generate the next forecast in a computationally simple manner. Simple exponential smoothing represents the foundational form of this technique.

Historical Development

Exponential smoothing originated in the early 1950s through the work of Robert G. Brown, an operations research analyst who developed the technique for demand forecasting and inventory control. Brown's initial formulation, known as simple exponential smoothing, drew from his earlier experiences with adaptive tracking models developed during World War II for the U.S. Navy, but it was specifically adapted for predicting fluctuating demand patterns in inventories. Although Brown's early reports, such as his 1956 document Exponential Smoothing for Predicting Demand, were not published in academic journals, the method remained largely out of public view until his influential 1959 book, Statistical Forecasting for Inventory Control, which provided a comprehensive exposition and spurred its adoption in industry and operations research. Independently of Brown, Charles C. Holt developed an extension of exponential smoothing in 1957 while at the Carnegie Institute of Technology, focusing on production planning applications. Holt's internal report, Forecasting Seasonals and Trends by Exponentially Weighted Moving Averages, introduced a linear trend component to capture systematic changes in data, forming the basis of what is now called Holt's linear trend method. This innovation addressed limitations in simple smoothing for series exhibiting growth or decline, and though initially unpublished, it was later recognized as a foundational contribution to forecasting. Building on Holt's framework, Peter R. Winters, one of Holt's students, proposed an extension in 1960 to incorporate seasonal patterns, resulting in the triple exponential smoothing method, widely known as the Holt-Winters approach. Published in Management Science under the title Forecasting Sales by Exponentially Weighted Moving Averages, Winters' paper demonstrated the method's effectiveness for sales data with both trend and seasonality, using separate smoothing parameters for the level, trend, and seasonal components. This development marked a significant advancement, enabling more robust forecasts for cyclical business data.
The popularization of exponential smoothing accelerated in the late 1950s and 1960s through Brown's book and the growing field of operations research, where the methods proved valuable for practical decision-making in inventory control and production planning. By the 1970s and 1980s, these techniques were increasingly integrated into statistical software that automated parameter estimation and forecast generation, broadening their accessibility beyond manual calculations. A key refinement during this period came from Everette S. Gardner in 1985, who introduced damped trend exponential smoothing to mitigate the tendency of linear trends to produce unrealistic long-horizon forecasts, as detailed in his review Exponential Smoothing: The State of the Art. This modification enhanced the method's reliability across diverse applications, solidifying its role in modern forecasting practice.

Simple Exponential Smoothing

Model Formulation

Simple exponential smoothing provides a recursive framework for estimating the level of a stationary time series. The core model is given by the equation \hat{y}_{t+1|t} = \alpha y_t + (1 - \alpha) \hat{y}_{t|t-1}, where \hat{y}_{t+1|t} is the one-step-ahead forecast for period t+1 made at time t, y_t is the observed value at time t, and \alpha is the smoothing parameter with 0 < \alpha \leq 1. This formulation, introduced by Robert G. Brown in his work on statistical forecasting, enables efficient computation by requiring only the most recent observation and the prior forecast. The recursive structure of the equation interprets each forecast as a weighted average of the newly observed value and the previous forecast, with weights \alpha and 1 - \alpha, respectively. As a result, the contribution of historical observations diminishes exponentially over time, reflecting a weighting scheme that favors recent data while incorporating the entire history of the series. This decaying influence arises naturally from repeated application of the recursion. An equivalent representation unfolds the recursion into an infinite-order moving average: \hat{y}_{t+1|t} = \sum_{k=0}^{\infty} \alpha (1-\alpha)^k y_{t-k}, which explicitly shows the exponentially decreasing weights \alpha (1-\alpha)^k assigned to past observations y_{t-k}, assuming the smoothing process has been active indefinitely. This form underscores the model's connection to exponentially weighted averages and its suitability for level estimation in stable series. Forecast errors in the model are captured by the one-step residuals e_t = y_t - \hat{y}_{t|t-1}, representing the discrepancies between actual observations and prior predictions. These residuals facilitate monitoring of the smoothing process's accuracy at each step.
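As an illustration, the recursion can be implemented in a few lines of Python; the function name and the common choice of seeding the initial forecast \hat{y}_{1|0} with the first observation are assumptions of this sketch, not part of the formulation above.

```python
def ses_forecasts(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts.

    Returns [y-hat_{2|1}, ..., y-hat_{T+1|T}]; the last element is the
    forecast for the next, unobserved period.  Seeds y-hat_{1|0} = y_1.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must lie in (0, 1]")
    forecast = y[0]          # y-hat_{1|0}, seeded with the first observation
    out = []
    for obs in y:
        # blend the new observation with the previous forecast
        forecast = alpha * obs + (1 - alpha) * forecast
        out.append(forecast)
    return out
```

With α = 1 the method collapses to the naive forecast \hat{y}_{t+1|t} = y_t, while smaller α averages over a longer history.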

Parameter Estimation and Optimization

In simple exponential smoothing, the smoothing parameter \alpha, where 0 < \alpha < 1, determines the weight given to the most recent observation relative to the previous smoothed value, thereby controlling the model's responsiveness to new data. A higher \alpha emphasizes recent observations, making the forecasts more reactive to short-term fluctuations but potentially noisier, while a lower \alpha prioritizes historical information for greater stability and smoother forecasts, which is particularly useful in reducing the impact of outliers or irregular variations. The initial smoothed value, often denoted \hat{y}_{1|0}, serves as the starting point for the recursive smoothing process and significantly influences early forecasts. Common options include setting it equal to the first observation (\hat{y}_{1|0} = y_1), which assumes the initial data point is representative, or computing the average of the first few observations to mitigate the effect of potential anomalies in the starting value. Alternatively, the initial value can be optimized jointly with \alpha through backcasting, in which hypothetical pre-sample data are generated to extend the series backward and minimize errors across the entire dataset. Parameter estimation typically involves minimizing a forecast error criterion, such as the mean squared error (MSE) or the sum of squared one-step-ahead errors \sum_{t=2}^T e_t^2, where e_t = y_t - \hat{y}_{t|t-1} is the one-step prediction error. This optimization is nonlinear due to the recursive nature of the model and can be performed using numerical optimization routines or simple grid searches over the \alpha range, often implemented in statistical software. To ensure generalizability and avoid overfitting, practitioners commonly use hold-out samples—reserving a portion of the data for validation—while fitting on the training set. In practice, for monthly data without trend or seasonality, \alpha values between 0.1 and 0.3 are typical, balancing responsiveness and stability as recommended in empirical studies and applications.
These ranges help prevent over-reaction to transient shocks while maintaining reasonable forecast accuracy, though the exact value should be selected based on the characteristics of the data and the results of error minimization.
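A minimal grid search over α, minimizing the sum of squared one-step errors described above, might look like the following sketch; the seeding of \hat{y}_{1|0} with the first observation and the grid step are illustrative choices.

```python
def sse(y, alpha):
    """Sum of squared one-step-ahead errors, seeding y-hat_{1|0} = y_1
    (so e_1 = 0 and the sum effectively runs from t = 2)."""
    forecast = y[0]
    total = 0.0
    for obs in y[1:]:
        err = obs - forecast            # e_t = y_t - y-hat_{t|t-1}
        total += err * err
        forecast = alpha * obs + (1 - alpha) * forecast
    return total

def grid_search_alpha(y, step=0.01):
    """Return the alpha on a coarse grid in (0, 1] with the smallest SSE."""
    grid = [k * step for k in range(1, round(1 / step) + 1)]
    return min(grid, key=lambda a: sse(y, a))
```

For a strongly trending series the search is driven toward large α (maximum responsiveness), which is one symptom of the model's unsuitability for trended data.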

Properties and Interpretations

Simple exponential smoothing exhibits several key properties that underpin its utility for stationary time series. A central characteristic is the effective memory length, often summarized by a time constant τ, which measures the number of past observations that substantially contribute to the current smoothed value. This is expressed as \tau \approx \frac{1}{\alpha}, indicating that lower α values extend the model's memory over more historical data points. The "exponential" in exponential smoothing arises from the geometric decay of the weights assigned to past observations in the forecast formulation. Specifically, the one-step-ahead forecast can be rewritten as a weighted sum \hat{y}_{t+1|t} = \sum_{k=0}^{\infty} \alpha (1 - \alpha)^k y_{t-k}, where the weights \alpha (1 - \alpha)^k decrease geometrically as k increases—the discrete analogue of a continuous exponential decay e^{-\lambda k}. This ensures that recent observations receive higher emphasis while all prior data are still incorporated, albeit with diminishing influence. In terms of statistical properties, the choice of α involves a fundamental bias-variance trade-off. Larger values of α (closer to 1) reduce bias by making the model more responsive to recent changes in the level, thereby minimizing systematic forecast errors in dynamic environments. However, this increases variance by amplifying the impact of noise in recent observations, leading to more volatile forecasts. Conversely, smaller α values enhance stability, lowering variance at the cost of higher bias through slower adaptation to true level shifts. The optimal α thus balances these competing effects to minimize the mean squared forecast error for the given data characteristics. Compared to a simple moving average, which applies equal weights over a fixed window and discards older data, exponential smoothing allocates progressively lower weights to distant observations without a predefined window length, resulting in a more flexible "infinite window" that reduces forecast lag when the level changes.
This adaptability stems from the recursive formulation, allowing continuous updates without recomputation over the entire history. Furthermore, simple exponential smoothing is mathematically equivalent to the integrated moving average component of an ARIMA(0,1,1) process, where the moving average parameter θ = 1 - α governs the smoothing of innovations. Despite these strengths, simple exponential smoothing has notable limitations when applied to non-stationary data. The model assumes a constant underlying level, producing forecasts that remain flat and fail to capture trends, leading to persistent bias and poor performance in series exhibiting systematic upward or downward movements over time. In such cases, the smoothed estimates lag behind actual values, accumulating errors that undermine accuracy.
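The geometric weighting scheme is easy to verify numerically; this short sketch (names assumed for illustration) lists the implicit weights α(1 − α)^k and checks that they decay and sum to one in the limit.

```python
def ses_weights(alpha, n):
    """First n implicit weights alpha * (1 - alpha)**k on y_{t-k}."""
    return [alpha * (1 - alpha) ** k for k in range(n)]

weights = ses_weights(0.3, 50)
# each weight is a fixed fraction (1 - alpha) of its predecessor ...
assert all(w1 > w2 for w1, w2 in zip(weights, weights[1:]))
# ... and the weights form a geometric series summing to 1
assert abs(sum(weights) - 1.0) < 1e-6
```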

Extensions for Trend and Seasonality

Holt's Linear Trend Method

Holt's linear trend method, also referred to as double exponential smoothing, extends the simple exponential smoothing framework by incorporating a trend component to model non-stationary data exhibiting a linear trend. Developed by Charles Holt, this approach estimates both the underlying level and the slope of the trend, allowing forecasts that account for ongoing changes in the series rather than assuming stationarity. It is particularly effective for data where the trend is relatively constant over time, providing a parsimonious way to capture directional movement without assuming more complex dynamics. The method relies on two recursive equations to update the estimates of the level l_t and trend b_t at time t. The level is updated as a weighted average of the current observation and the previous level plus trend: l_t = \alpha y_t + (1 - \alpha)(l_{t-1} + b_{t-1}), where \alpha (0 < \alpha < 1) is the smoothing parameter for the level, controlling the weight given to the most recent observation. The trend is then updated based on the change in the level and the previous trend estimate: b_t = \beta (l_t - l_{t-1}) + (1 - \beta) b_{t-1}, with \beta (0 < \beta < 1) as the trend smoothing parameter, which determines how quickly the trend adapts to changes in the level's slope. The h-step-ahead forecast from time t is given by a linear projection: \hat{y}_{t+h|t} = l_t + h b_t. This formulation yields straight-line forecasts that extend the current level and trend indefinitely, making it suitable for series with a persistent linear slope but potentially less accurate at horizons where the trend may accelerate or decelerate. Initialization of the level and trend components can be done heuristically, such as setting the initial level l_1 to the first observation y_1 and the initial trend b_1 to the difference between the second and first observations, y_2 - y_1, or through more robust methods like averaging early differences for the trend. Alternatively, initial values may be treated as additional parameters to be optimized.
The parameters \alpha and \beta, along with the initial values if applicable, are typically selected jointly to minimize the mean squared error (MSE) of the one-step-ahead in-sample forecast residuals, often using nonlinear optimization techniques. This error minimization ensures the model fits the historical data while balancing responsiveness to recent changes against over-reaction to noise.
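The two update equations and the linear forecast function can be sketched as follows; the heuristic initialization (level set to the second observation, trend to the first difference) and the function name are assumptions of this example, not part of Holt's original specification.

```python
def holt_forecast(y, alpha, beta, h):
    """Holt's linear trend method: h-step-ahead forecast from the end of y.

    Heuristic initialization: level l = y_2, trend b = y_2 - y_1,
    with the recursions running from the third observation onward.
    """
    level, trend = y[1], y[1] - y[0]
    for obs in y[2:]:
        prev_level = level
        # level update: blend observation with the trend-adjusted prior level
        level = alpha * obs + (1 - alpha) * (prev_level + trend)
        # trend update: blend the observed level change with the prior trend
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + h * trend
```

On a perfectly linear series the recursions reproduce the slope exactly, so the forecast simply extrapolates the line regardless of α and β.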

Holt-Winters Seasonal Method

The Holt-Winters seasonal method, also referred to as triple exponential smoothing, extends the double exponential smoothing approach of Holt's linear trend method by incorporating a seasonal component to model data exhibiting level, trend, and periodic fluctuations. Introduced by Peter R. Winters in 1960, this method is particularly suited for forecasting in domains such as sales and inventory planning, where seasonal patterns recur with a known fixed period, such as monthly or quarterly cycles. It maintains three interdependent state variables—the level l_t, trend b_t, and seasonal factor s_t—updated recursively using smoothing parameters to balance responsiveness to recent observations with stability from historical estimates. In the additive version of the Holt-Winters method, suitable for series where seasonal variations remain roughly constant over time regardless of the overall level, the h-step-ahead forecast from time t is given by \hat{y}_{t+h|t} = l_t + h b_t + s_{t+h-m}, where m denotes the known seasonal period (e.g., m=12 for monthly data). For the multiplicative version, appropriate when seasonal fluctuations are proportional to the level of the series (e.g., seasonal swings that grow as the level rises), the forecast equation becomes \hat{y}_{t+h|t} = (l_t + h b_t) s_{t+h-m}. The multiplicative form is preferred for positive-valued series with increasing amplitude in seasonality, as it prevents forecasts from becoming negative or overly volatile in low-level periods. The choice between additive and multiplicative seasonality is determined by examining the historical data: additive if the seasonal effect is stable in absolute terms, and multiplicative if it scales with the trend or level. The updating equations for the components in the additive case proceed sequentially.
The level update deseasonalizes the observation by subtracting the previous seasonal factor before applying the smoothing parameter α (0 < α < 1), which weights the current deseasonalized value against the prior level-plus-trend estimate: l_t = \alpha (y_t - s_{t-m}) + (1 - \alpha)(l_{t-1} + b_{t-1}). The trend update, analogous to that in Holt's method, uses β (0 < β < 1) to smooth the change in level against the previous trend: b_t = \beta (l_t - l_{t-1}) + (1 - \beta) b_{t-1}. Finally, the seasonal factor update employs γ (0 < γ < 1) to weight the current residual (observation minus updated level) against the seasonal factor from the prior corresponding period: s_t = \gamma (y_t - l_t) + (1 - \gamma) s_{t-m}. For the multiplicative case, the updates replace subtractions with divisions: the level becomes l_t = \alpha \frac{y_t}{s_{t-m}} + (1 - \alpha)(l_{t-1} + b_{t-1}), the trend equation remains unchanged, and the seasonal factor is s_t = \gamma \frac{y_t}{l_t} + (1 - \gamma) s_{t-m}. The parameter γ specifically governs the smoothing of the seasonal component, with higher values emphasizing recent seasonal deviations and lower values relying more on historical patterns. Initialization of the components is crucial for accurate short-term forecasts and is typically based on the first m observations, estimating the initial seasonal factors as the average deviations (or ratios) from the overall mean of that span, yielding a deseasonalized average for l_0 and an initial trend b_0 often set to zero or derived from the first two deseasonalized values. The seasonal period m must be known in advance and fixed, assuming consistent cyclicity without shifts in timing or duration. The parameters α, β, and γ are estimated by minimizing the one-step-ahead mean squared error (MSE) over the in-sample data, often via nonlinear optimization techniques, as the recursive nature precludes closed-form solutions.
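The additive recursions can be sketched as follows; the crude initialization from the first season (level as the mean of the first m observations, zero trend, seasonal factors as deviations from that mean) is one of several common heuristics, and the function name is an assumption of this example.

```python
def holt_winters_additive(y, m, alpha, beta, gamma, h):
    """Additive Holt-Winters: h-step-ahead forecast from the end of y.

    Requires len(y) >= m.  Initializes from the first season: level as
    its mean, trend as zero, seasonal factors as deviations from the mean.
    """
    level = sum(y[:m]) / m
    trend = 0.0
    season = [v - level for v in y[:m]]      # one factor per phase of the cycle
    for t in range(m, len(y)):
        s_prev = season[t % m]               # s_{t-m}: same phase, last cycle
        prev_level = level
        # level update on the deseasonalized observation
        level = alpha * (y[t] - s_prev) + (1 - alpha) * (prev_level + trend)
        # trend update, as in Holt's method
        trend = beta * (level - prev_level) + (1 - beta) * trend
        # seasonal update from the residual against the new level
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s_prev
    # forecast: level plus extrapolated trend plus the matching seasonal factor
    return level + h * trend + season[(len(y) + h - 1) % m]
```

On a series with a stable period-m pattern and no trend, the recursions leave the initial decomposition essentially unchanged and the forecast repeats the seasonal cycle.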
Despite its simplicity and effectiveness for stable seasonal series, the Holt-Winters method has limitations, including its assumption of a constant seasonal form and period, which can lead to poor performance if patterns evolve, structural changes occur, or the seasonality interacts nonlinearly with trend.

Applications and Implementations

Practical Uses in Forecasting

Exponential smoothing finds extensive application in supply chain management, where it facilitates demand forecasting by smoothing out irregularities in historical sales data to predict future needs, thereby optimizing stock levels and reducing holding costs. In finance, it is employed for short-term stock price predictions, leveraging its ability to emphasize recent price movements while discounting older data, which helps traders anticipate market fluctuations. Economists apply exponential smoothing to short-term gross domestic product (GDP) forecasting, as its simplicity allows quick adjustments to emerging economic indicators without requiring complex structural assumptions. In the energy sector, the method supports electrical load prediction by capturing daily and weekly patterns in consumption data, enabling utilities to balance supply and demand efficiently, and it has more recently been extended to photovoltaic power forecasting using modified exponential smoothing techniques. A seminal case of its practical deployment occurred during World War II, when Robert G. Brown developed an early form of exponential smoothing for the U.S. Navy to forecast spare parts demand, improving inventory efficiency amid uncertain wartime supplies. Exponential smoothing has also long been used in manufacturing for sales forecasting to support production and inventory planning. To extend its capabilities, exponential smoothing is often integrated with ARIMA models in hybrid approaches, combining the former's responsiveness to recent data with the latter's strength in capturing longer-term dependencies for improved accuracy over extended horizons. Similarly, it pairs with machine learning techniques for feature selection, where smoothing preprocesses input series to identify relevant predictors, enhancing overall model performance in complex forecasting tasks. Forecast accuracy in these applications is typically assessed using metrics such as the mean absolute percentage error (MAPE), which quantifies relative errors in percentage terms, and the mean absolute scaled error (MASE), which scales errors against a naive benchmark to enable comparisons across datasets; these are often preferred over the mean squared error (MSE) for being less sensitive to outliers and more interpretable in operational contexts.
Variants of exponential smoothing, such as Croston's method, are also applied to intermittent demand patterns, where data features sporadic non-zero observations interspersed with zeros; more generally, the technique suits short-term horizons, as its parsimonious structure prioritizes recency and simplicity over intricate modeling when data is limited or volatile.
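As an illustration of the two accuracy metrics mentioned above, a minimal sketch (function names assumed) computes MAPE over a hold-out set and MASE scaled by the in-sample naive-forecast error:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent (actuals must be nonzero)."""
    n = len(actual)
    return 100.0 / n * sum(abs((a - f) / a) for a, f in zip(actual, forecast))

def mase(actual, forecast, train):
    """Mean absolute scaled error: hold-out MAE over in-sample naive MAE,
    where the naive benchmark forecasts each training value by its predecessor."""
    naive_mae = sum(abs(train[i] - train[i - 1])
                    for i in range(1, len(train))) / (len(train) - 1)
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return mae / naive_mae
```

A MASE below 1 indicates that the forecasts beat the in-sample naive benchmark on average, which makes the metric comparable across series of different scales.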

Software and Computational Tools

Exponential smoothing implementations are available across various programming languages and software environments, facilitating both simple and advanced forecasting tasks. In R, the forecast package provides the ets() function, which automates the selection of exponential smoothing models, including simple exponential smoothing, Holt's linear trend method, and the Holt-Winters seasonal method, while also handling additive and multiplicative error types through state space modeling. This function optimizes the smoothing parameters using maximum likelihood estimation and supports automatic model selection based on information criteria such as the AIC. In Python, the statsmodels library offers the ExponentialSmoothing class within its time series analysis (tsa) module, implementing a full range of Holt-Winters models with options for additive or multiplicative trend and seasonality, as well as damping components. Parameter estimation is performed via optimization methods such as L-BFGS-B, allowing users to specify initial values or rely on automatic heuristics. For hybrid approaches combining exponential smoothing with ARIMA models, libraries like pmdarima extend auto-ARIMA functionality to support seasonal components that can integrate with smoothing techniques in ensemble forecasts. Additionally, the sktime toolkit, developed post-2020, unifies time series modeling by interfacing with statsmodels' ExponentialSmoothing, enabling seamless model composition, cross-validation, and scalability in machine learning pipelines. Spreadsheet software such as Microsoft Excel includes the built-in FORECAST.ETS function, which applies Holt-Winters exponential smoothing with automatic detection of seasonality and trend, suitable for univariate forecasting directly within worksheets. For enterprise environments, SAS provides PROC ESM in its econometrics and time series (SAS/ETS) module, supporting optimized exponential smoothing models across multiple series, including damped trends and seasonal adjustments, with output for forecasts, residuals, and diagnostics.
In MATLAB, the Signal Processing Toolbox offers functions for general signal smoothing, such as exponential weighting via custom filters, though dedicated Holt-Winters variants often require user-defined scripts. Open-source distributed computing frameworks such as Apache Spark's MLlib enable scalable processing, where exponential smoothing can be applied through user-defined transformations or integrated with libraries like PySpark for large-scale forecasting. Computational considerations in these tools include the handling of missing data, which typically requires preprocessing via imputation (e.g., interpolation or forward-fill) before applying exponential smoothing, as direct support varies and no universal standard exists across implementations. Automatic selection of the smoothing parameters (α for the level, β for the trend, γ for seasonality) is commonly achieved through minimization of the AIC or related criteria, promoting parsimonious models that balance fit and complexity, as implemented in R's ets() and statsmodels' optimization routines. For large datasets, tools such as Python's sktime and Spark's MLlib offer advantages in scalability, supporting parallel and vectorized operations to handle millions of observations without significant performance degradation.
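One minimal way to implement the forward-fill imputation mentioned above is sketched below; the function name and the use of None to mark gaps are assumptions of this example.

```python
def forward_fill(y):
    """Replace missing values (None) with the most recent observed value.

    A leading gap has no predecessor, so this sketch raises an error there.
    """
    filled, last = [], None
    for value in y:
        if value is None:
            if last is None:
                raise ValueError("cannot forward-fill a leading gap")
            value = last
        filled.append(value)
        last = value
    return filled
```

The imputed series can then be passed to a standard exponential smoothing routine, though heavy imputation flattens genuine variation and should be noted when interpreting the resulting forecasts.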
