Symmetric mean absolute percentage error

The symmetric mean absolute percentage error (SMAPE), also denoted as sMAPE, is a scale-independent metric used to assess the accuracy of predictive models, especially in time series forecasting, by averaging the relative errors between actual and forecasted values in a manner that treats over- and under-predictions symmetrically. It addresses limitations of the traditional mean absolute percentage error (MAPE), such as asymmetry where equal-magnitude errors above the actual value yield larger percentages than those below, by normalizing each absolute error against the average of the absolute actual and forecasted values. The standard formula is \text{SMAPE} = \frac{100}{n} \sum_{i=1}^{n} \frac{2 |A_i - F_i|}{|A_i| + |F_i|}, where n is the number of observations, A_i is the actual value, and F_i is the forecasted value for the i-th period; this yields values ranging from 0% (perfect accuracy) to 200% (complete reversal).

First proposed by J. Scott Armstrong in 1985 and popularized by Spyros Makridakis in a 1993 editorial on forecast accuracy measures, SMAPE was intended to provide a more balanced alternative to MAPE, which can be biased toward under-forecasting and problematic when actual values are near zero. It gained prominence through its adoption in the M-competitions, a series of benchmarking studies on time series forecasting methods, where it helped rank model performance across diverse datasets while mitigating MAPE's sensitivity to low actual values. Despite its intended symmetry, subsequent analysis revealed residual asymmetries, particularly with large errors or when forecasts differ significantly in sign or magnitude.

SMAPE's advantages include interpretability as a percentage, robustness to scale differences between series, and reduced bias in comparing methods across varying data ranges, making it suitable for business and economic forecasting applications. However, it remains undefined when both actual and forecast are zero, can produce counterintuitive results (e.g., over-forecasting a zero actual yields a 200% error), and may not align well with optimization objectives in model training due to its non-differentiable nature. Alternatives such as the mean absolute scaled error (MASE) have been suggested for broader applicability, but SMAPE continues to be widely used in software libraries and industry standards for its simplicity and relative-error focus.

Definition and Motivation

Definition

The symmetric mean absolute percentage error (SMAPE) is a relative error metric designed to evaluate the accuracy of forecasts by averaging the proportional deviations between predicted and actual values across a series of observations. It serves as a percentage-based measure that normalizes errors relative to the scale of the observations, allowing for comparisons across different magnitudes of data without being influenced by absolute units. Unlike traditional absolute error measures, SMAPE emphasizes relative performance, making it particularly suitable for assessing predictive models in scenarios where the importance of errors scales with the size of the values involved.

At its core, SMAPE achieves symmetry by treating over-predictions and under-predictions equivalently, incorporating absolute differences in a way that balances the influence of both actual and forecasted values in the error calculation. This approach ensures that the metric does not favor one direction of error over the other, providing a fair assessment of model reliability regardless of whether predictions tend to overestimate or underestimate. For instance, in a basic forecasting task such as predicting monthly sales, if the actual sales are moderately higher than the forecast, SMAPE captures this deviation as a proportion tied to the combined scale of both figures, highlighting the error's impact in intuitive terms.

A key attribute of SMAPE is its scale-independence, which means it remains consistent even when data units change, such as from dollars to thousands of dollars, while being expressed directly as a percentage for straightforward interpretation. This feature enhances its accessibility for non-technical stakeholders, as a lower SMAPE value (closer to zero) universally signals better forecast precision without requiring additional context.
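As a simple illustration with hypothetical figures, suppose actual monthly sales are A = 120 units against a forecast of F = 100 units; the symmetric error for that observation is \frac{2|120 - 100|}{|120| + |100|} = \frac{40}{220} \approx 18.2\%, and the identical 18.2% results if the roles are reversed (actual 100, forecast 120), reflecting the metric's symmetric treatment of over- and under-prediction.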

Motivation

The mean absolute percentage error (MAPE), a widely used metric in forecasting, suffers from significant limitations that undermine its reliability in certain scenarios. One critical issue is its susceptibility to infinite or undefined values when actual values are zero, as the denominator in the error calculation becomes zero, rendering the metric undefined and preventing meaningful comparisons across datasets. Additionally, MAPE exhibits asymmetry in penalizing over-forecasts and under-forecasts; for instance, an absolute error of 50 units yields a 50% error when the actual value is 100 and the forecast is 150, but only approximately 33.3% when the actual is 150 and the forecast is 100, thereby biasing evaluations toward under-forecasting by imposing lighter relative penalties on such errors. These flaws are particularly problematic in time series forecasting, where data often include zeros or near-zero values, such as in intermittent demand scenarios, leading to distorted accuracy assessments and unreliable model selections.

To address these shortcomings, the symmetric mean absolute percentage error (SMAPE) was developed to introduce balance into percentage-based error measurement. By employing a denominator that reflects the combined magnitude of actual and forecast values, SMAPE ensures equal treatment of positive and negative errors of equivalent scale, eliminating the bias inherent in MAPE and providing equitable penalties regardless of whether the forecast over- or underestimates the actual. This formulation also arises from a system-motivated derivation of accuracy, in which the per-observation error corresponds to the minimal relative adjustment needed for the actual and forecasted values to converge to a common point, promoting fairness in evaluations. Conceptually, SMAPE's design makes it well-suited for the volatile or sparse datasets common in forecasting applications, as it avoids breakdowns from zero or near-zero actuals while maintaining scale-invariance for cross-series comparisons. Originating from the needs identified in empirical competitions, such as the M-competitions, SMAPE facilitates more robust forecast quality assessments in domains with irregular patterns, like intermittent demand or economic indicators, where traditional metrics falter.
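The asymmetry described above can be checked directly; the following minimal sketch (plain Python, hypothetical values) compares per-observation MAPE and SMAPE terms for two mirrored errors of 50 units.

```python
def mape_term(actual, forecast):
    # Per-observation MAPE term, expressed as a percentage of the actual value.
    return 100 * abs(actual - forecast) / abs(actual)

def smape_term(actual, forecast):
    # Per-observation SMAPE term, normalized by the combined absolute magnitude.
    return 100 * 2 * abs(actual - forecast) / (abs(actual) + abs(forecast))

# Equal-magnitude errors in opposite directions (hypothetical values):
print(mape_term(100, 150), mape_term(150, 100))    # 50.0 vs ~33.3: MAPE is asymmetric
print(smape_term(100, 150), smape_term(150, 100))  # 40.0 vs 40.0: SMAPE is symmetric
```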

Mathematical Formulation

Formula

The symmetric mean absolute percentage error (SMAPE) is given by the formula \text{SMAPE} = \frac{100}{n} \sum_{t=1}^{n} \frac{2 |a_t - f_t|}{|a_t| + |f_t|}, where a_t denotes the actual (observed) value at time t, f_t denotes the forecasted value at time t, and n is the number of observations. This expression was proposed by Makridakis to address asymmetries in traditional percentage error measures. To compute SMAPE, first calculate the symmetric relative error for each observation as \frac{2 |a_t - f_t|}{|a_t| + |f_t|}, which normalizes the absolute error by the sum of the absolute values of the actual and forecasted values. Sum these relative errors across all n observations, divide by n to obtain the mean, and multiply by 100 to express the result as a percentage. The factor of 2 in the numerator ensures the metric scales symmetrically, yielding a range from 0% (perfect forecasts) to 200% (complete reversal of values). The use of absolute values in the denominator maintains positivity and avoids division issues even when actual or forecast values are negative. A variant appears in some implementations, omitting the factor of 2 and using \text{SMAPE} = \frac{100}{n} \sum_{t=1}^{n} \frac{|a_t - f_t|}{|a_t| + |f_t|}, which produces values in the 0% to 100% range; the version with the factor of 2, as originally specified by Makridakis, is considered the primary form.
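A minimal NumPy sketch of this computation, assuming equal-length inputs and the factor-of-2 (0% to 200%) convention; observations where both values are exactly zero are not handled here and would need special treatment.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error (0% to 200% convention).

    Assumes `actual` and `forecast` are equal-length array-likes with no
    observation where both values are exactly zero.
    """
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return 100.0 / len(a) * np.sum(2.0 * np.abs(a - f) / (np.abs(a) + np.abs(f)))

# Hypothetical example: smape([100, 150, 200], [110, 140, 180]) is roughly 8.98
```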

Interpretation

SMAPE values range from 0%, corresponding to perfect alignment between forecasted and actual values, to a theoretical maximum of 200%, which arises when the error completely dominates the combined magnitude of the two values, for example when one value is zero while the other is positive, or when forecast and actual have opposite signs. In practical scenarios, values exceeding 100% are uncommon, and lower SMAPE indicates superior predictive performance. Assessing whether an SMAPE value represents good or poor performance is inherently context-dependent, influenced by factors such as data volatility, forecasting horizon, and domain-specific characteristics. For instance, in the M4 forecasting competition, involving diverse time series including economic and business data, benchmark statistical methods such as single exponential smoothing yielded average SMAPE values of approximately 13%, while the leading hybrid approach achieved around 11.4%, demonstrating strong performance across 100,000 series.

A specific SMAPE value, such as 15%, signifies that the average absolute difference between actual and forecasted values equates to 15% of the average (mean) of their absolute magnitudes, offering a relative measure scaled to the combined size of observations and predictions. This facilitates intuitive understanding: errors are expressed as percentages of a central reference point rather than raw differences. Unlike scale-dependent metrics such as the mean absolute error, SMAPE inherently normalizes errors relative to the data's magnitude through its symmetric denominator, enabling fair comparisons of forecast accuracy across datasets varying widely in units or sizes, such as monthly sales figures versus annual revenues. In retail demand forecasting, for example, an SMAPE of 5% would signal exceptionally high reliability, supporting precise inventory planning and replenishment by minimizing deviations relative to typical sales scales in stable markets.
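Scale-independence can be demonstrated with the smape sketch above: rescaling both series by a common factor, such as converting from units to thousands of units, leaves the score unchanged (hypothetical values).

```python
actual_units = [120.0, 95.0, 130.0]
forecast_units = [110.0, 100.0, 125.0]

# Rescale both series by the same factor (e.g., units -> thousands of units).
actual_scaled = [x / 1000 for x in actual_units]
forecast_scaled = [x / 1000 for x in forecast_units]

print(smape(actual_units, forecast_units))    # ~5.92
print(smape(actual_scaled, forecast_scaled))  # identical value after rescaling
```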

Properties

Advantages

The symmetric mean absolute percentage error (SMAPE) provides equitable treatment of over- and under-predictions by penalizing errors of equal magnitude in opposite directions with the same percentage value, thereby mitigating the bias inherent in asymmetric metrics like the mean absolute percentage error (MAPE) that disproportionately penalize over-forecasts. This symmetry promotes unbiased model selection and comparison in forecasting tasks, as it ensures that deviations above or below the actual value are evaluated on a consistent relative scale.

SMAPE avoids the infinite errors that arise in MAPE when actual observations are zero but the forecast is not zero, by utilizing the sum of the absolute values of the actual and forecasted values in the denominator. Though it becomes undefined if both are zero, which may require special handling in datasets with frequent zero pairs like intermittent demand, this feature enhances its applicability to sparse data compared to MAPE.

As a scale-invariant measure, SMAPE yields comparable results across datasets with varying units or magnitudes, allowing for meaningful aggregation and benchmarking in heterogeneous environments such as multi-product forecasting systems or cross-industry analyses. The percentage-based output of SMAPE facilitates intuitive interpretation and straightforward communication to stakeholders, enabling quick assessment of forecast quality without needing specialized statistical knowledge. Empirical evaluations in competitions, including the M3 competition involving diverse time series with intermittent patterns, highlight SMAPE's effectiveness in such contexts, where it reliably ranks methods and supports accurate performance insights amid prevalent zeros.

Limitations

One significant limitation of the symmetric mean absolute percentage error (SMAPE) arises when an actual value and its corresponding forecast value are both zero, rendering the metric undefined due to division by zero in that term of the summation. This issue is particularly relevant for datasets like intermittent demand series with periods of no sales. SMAPE also tends to over-penalize errors involving small non-zero values by inflating the percentage error, as the denominator becomes very small when both actual and forecast values are near zero, even if the absolute difference is minimal. For instance, in time series with low-volume periods, a forecast slightly off from a small actual value can yield disproportionately high SMAPE scores, leading to unstable evaluations and potential bias against models that perform reasonably in absolute terms. This instability is particularly problematic in domains like intermittent demand forecasting, where small or sporadic values are common. Additionally, SMAPE can yield counterintuitively high errors (approaching 200%) when forecasting zero for small positive actuals, contributing to instability in evaluations of models on low-volume or sporadic data.

The metric is bounded above by 200%, achieved when forecasts are equal in magnitude but opposite in sign to the actual values, which restricts its expressiveness for extremely poor forecasts compared to unbounded alternatives that can reflect error scales beyond this cap. In high-volatility data, such as stock prices, SMAPE may undervalue models that capture magnitude well but perform poorly on direction, as it ignores error signs and focuses solely on absolute differences. Consequently, SMAPE is not ideal for assessing directional accuracy and should be paired with sign-based metrics, such as mean directional accuracy, to provide a more complete evaluation in such scenarios.
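These edge cases can be illustrated with the per-observation term (hypothetical values; smape_term as sketched earlier, restated here for self-containment).

```python
def smape_term(actual, forecast):
    # Per-observation SMAPE term (0% to 200% convention).
    return 100 * 2 * abs(actual - forecast) / (abs(actual) + abs(forecast))

print(smape_term(0.0, 5.0))   # 200.0: forecasting 5 for an actual of zero hits the upper bound
print(smape_term(2.0, 0.0))   # 200.0: forecasting zero for a small positive actual
print(smape_term(0.1, 0.3))   # 100.0: near-zero values inflate the score despite a tiny absolute error
# smape_term(0.0, 0.0) raises ZeroDivisionError: the metric is undefined when both values are zero.
```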

Applications

Forecasting

In demand forecasting, particularly within retail and supply chain management, the symmetric mean absolute percentage error (SMAPE) serves as a key metric for evaluating forecasting models, especially where intermittent patterns (characterized by frequent zero-demand periods) render traditional metrics like MAPE unreliable due to division-by-zero issues. SMAPE's symmetric formulation mitigates this by using the average of actual and forecasted values in the denominator, providing a more stable assessment of forecast accuracy for lumpy or sporadic demand series common in these environments.

For time series evaluation, SMAPE is widely applied to compare predictions from statistical and machine learning models against holdout data, offering a scale-independent measure that highlights relative errors in predictive performance. In volatile series, such as stock price forecasting, SMAPE quantifies accuracy by balancing over- and under-predictions, as demonstrated in evaluations of forecasting models for one-day-ahead predictions on indices like the BIST30, where it reveals improvements in handling market fluctuations compared to absolute error metrics.

A prominent application of SMAPE occurs in the M-competitions, benchmark challenges organized by Spyros Makridakis, where it was adopted starting from the M3 competition to ensure a fair, percentage-based evaluation across diverse time series with varying scales and frequencies. In the M4 competition, involving 100,000 series, SMAPE was one of two primary metrics (alongside the mean absolute scaled error) for ranking methods, selected for its interpretability and ability to compare statistical and machine learning forecasts equitably without bias toward series magnitude.

Best practices for using SMAPE in practice emphasize aggregating the metric across multiple horizons in multi-step predictions to capture overall accuracy, as implemented in the M4 competition through unweighted averaging of errors over all forecast steps and series. For scenarios with varying forecast importance, such as longer-term predictions in supply chains, weighting SMAPE by horizon length or recency can prioritize near-term accuracy, though equal aggregation remains standard for benchmark comparisons to maintain consistency.
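A sketch of the unweighted multi-horizon aggregation described above, assuming 2-D NumPy arrays shaped (series, horizon); it mirrors the idea of averaging errors over all forecast steps and series rather than any competition's exact scoring code.

```python
import numpy as np

def smape_multi_horizon(actuals, forecasts):
    """Unweighted SMAPE across all series and forecast horizons.

    `actuals` and `forecasts` are arrays shaped (n_series, horizon); every
    (series, step) error contributes equally to the average.
    """
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    return 100.0 * np.mean(2.0 * np.abs(a - f) / (np.abs(a) + np.abs(f)))
```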

Other Domains

In regression analysis, SMAPE serves as a robust metric for evaluating model fit in scenarios involving positive or zero-valued outputs, such as predicting economic indicators like GDP growth or inflation rates, where relative errors provide interpretable insights into model performance. Unlike asymmetric measures, its symmetric formulation helps mitigate bias in assessments of regression models that may encounter near-zero values.

In machine learning, SMAPE is employed during hyperparameter tuning to optimize algorithms like random forests or gradient boosting machines applied to forecasting or regression tasks, ensuring relative error minimization across varying scales. This application allows practitioners to balance model complexity and accuracy by selecting hyperparameters that yield low SMAPE scores on validation sets, particularly useful for non-stationary data patterns.

A specific example of SMAPE's utility arises in evaluating models for medical image analysis, such as those detecting spinal deformities in radiographic scans. In adolescent idiopathic scoliosis assessment, deep learning-based segmentation (e.g., using YOLACT or transformer-based architectures) identifies vertebral landmarks to compute Cobb angles, with SMAPE quantifying the relative error between predicted and ground-truth angles, achieving values as low as 10.76% on datasets of over 600 images. This metric is preferred for count-like or bounded outputs in segmentation tasks where absolute differences alone overlook proportional inaccuracies.

SMAPE finds unique adaptation in inventory management for intermittent demand scenarios, where it evaluates predictive models combined with simulation techniques to test scenarios under sporadic ordering patterns, as seen in spare parts or pharmaceutical supply chains. Its ability to handle zeros without undefined values supports reliable simulations for lumpy demand profiles.

An emerging application of SMAPE lies in AI ethics, particularly for assessing output distortion in predictive fairness metrics within algorithmic systems. In frameworks addressing fairness in algorithmic decision making, SMAPE measures distortion between original and debiased outputs, preserving utility while minimizing bias amplification across demographic groups. This usage ensures equitable performance evaluations in high-stakes domains like hiring or lending algorithms.

Alternatives and Comparisons

Mean Absolute Percentage Error

The mean absolute percentage error (MAPE) is a widely used metric for evaluating the accuracy of forecasting models, particularly in time series analysis. It quantifies the average magnitude of errors as a percentage of the actual values, providing an intuitive scale for comparison across different datasets. The formula for MAPE is \text{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \frac{|A_t - F_t|}{|A_t|}, where A_t represents the actual value at time t, F_t is the forecasted value, and n is the number of observations.

Despite its popularity, MAPE exhibits significant flaws that limit its reliability in certain contexts. A primary issue is its asymmetry in penalizing over-forecasts and under-forecasts: for instance, if the actual value is 100 and the forecast is 200, the error is |100 - 200| / 100 = 100\%, but if the actual value is 200 and the forecast is 100, the error is |200 - 100| / 200 = 50\%. This encourages conservative (under-)forecasting to minimize apparent errors. Additionally, MAPE is undefined when any actual value A_t = 0, as division by zero occurs, which is problematic for datasets involving intermittent demand or non-positive values.

Historically, MAPE gained prominence in forecast evaluations prior to the 1990s, serving as the primary accuracy measure in seminal works such as the Makridakis M1-competition of 1982, where it was applied to assess various forecasting methods across 1,001 time series. However, its flaws prompted critiques and the search for alternatives. MAPE is suitable only for datasets with strictly positive actual values and stable scales, avoiding zeros or highly variable magnitudes that exacerbate its limitations. In such scenarios, it remains a straightforward indicator of relative forecast accuracy. As a symmetric alternative, SMAPE addresses MAPE's bias by balancing the treatment of over- and under-forecasts.
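For comparison with the SMAPE sketch given earlier, a plain-Python MAPE sketch (hypothetical helper) makes the zero-actual failure mode explicit.

```python
def mape(actuals, forecasts):
    # Mean absolute percentage error; assumes equal-length sequences and
    # no zero actuals, which would raise ZeroDivisionError.
    n = len(actuals)
    return 100.0 / n * sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts))

# mape([100, 200], [200, 100])  -> 75.0 (the 100% and 50% terms averaged)
# mape([0, 100], [10, 110])     -> ZeroDivisionError: undefined when an actual value is zero
```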

Other Accuracy Metrics

The mean absolute scaled error (MASE) serves as a robust alternative to SMAPE by scaling forecast errors relative to the in-sample mean absolute error of a naive forecast, making it particularly suitable for comparing accuracy across non-stationary time series without the biases inherent in percentage-based metrics. Introduced by Hyndman and Koehler, MASE is defined such that a value of 1 indicates the forecast performs as well as the naive benchmark, while values below 1 signify superior accuracy; its scale-invariance ensures reliability in datasets with trends or varying magnitudes.

Another option, the median relative absolute error (MdRAE), enhances outlier resistance by taking the median of relative absolute errors compared to a benchmark forecast, proving effective for skewed distributions where mean-based measures like SMAPE may distort results. Developed by Armstrong and Collopy, MdRAE is recommended for method selection when few series are available, as it mitigates the impact of extreme values and provides a stable relative assessment.

The weighted mean absolute percentage error (WMAPE) addresses SMAPE's equal treatment of errors by weighting them proportionally to actual values, thereby emphasizing accuracy on larger observations critical in domains like inventory management. Proposed by Kolassa and Schütz as the MAD/mean ratio, WMAPE avoids division-by-zero issues in low-volume scenarios and aligns better with business impacts, such as cost implications from errors in high-demand items.

In comparisons, MASE circumvents percentage-based pitfalls entirely, making it preferable for series where SMAPE can unduly penalize errors near zero values. In the M4 forecasting competition, involving 100,000 time series, both MASE and SMAPE were used as accuracy measures, with an overall weighted average (OWA) of the two employed for final rankings.
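Minimal sketches of two of these alternatives, under the definitions cited above: MASE in its non-seasonal form (errors scaled by the in-sample mean absolute error of a naive previous-value forecast) and WMAPE as the MAD/mean ratio. Both assume NumPy arrays and are illustrative rather than reference implementations.

```python
import numpy as np

def mase(actual, forecast, train):
    """Mean absolute scaled error (non-seasonal form).

    Test-period errors are scaled by the in-sample mean absolute error of the
    naive previous-value forecast on the training series; values below 1
    indicate better accuracy than the naive benchmark.
    """
    a, f = np.asarray(actual, dtype=float), np.asarray(forecast, dtype=float)
    train = np.asarray(train, dtype=float)
    scale = np.mean(np.abs(np.diff(train)))  # in-sample MAE of the naive forecast
    return np.mean(np.abs(a - f)) / scale

def wmape(actual, forecast):
    # Weighted MAPE (MAD/mean ratio): total absolute error over total absolute actuals.
    a, f = np.asarray(actual, dtype=float), np.asarray(forecast, dtype=float)
    return 100.0 * np.sum(np.abs(a - f)) / np.sum(np.abs(a))
```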

History

Origin

The symmetric mean absolute percentage error (SMAPE) was proposed by Spyros Makridakis in 1993 as a refined accuracy measure for forecast evaluation. It emerged during his re-analysis of results from the inaugural M-Competition, a 1982 benchmark that compared forecasting methods across 1,001 time series to identify practical performers in real-world scenarios. In this original context, SMAPE addressed key shortcomings of the mean absolute percentage error (MAPE), particularly its biases when assessing diverse series that included intermittent or low-volume patterns prone to zero or near-zero actual values. Makridakis highlighted how MAPE could produce misleading asymmetries and extreme values in such cases, undermining reliable comparisons.

The proposal formed part of a wider initiative in forecasting research and practice to establish standardized metrics that balanced theoretical rigor with empirical applicability, enabling consistent evaluation of forecasting techniques amid growing demands for evidence-based decision making. SMAPE first appeared in Makridakis' seminal paper "Accuracy measures: theoretical and practical concerns," where it was positioned as a practically oriented alternative prioritizing robustness, interpretability, and utility for practitioners over purely theoretical ideals. Its straightforward computation facilitated rapid early adoption, with testing in follow-up benchmarks like the M3-Competition of 2000, which applied it across 3,003 series to gauge method performance.

Development and Adoption

Following its initial proposal in 1993, the symmetric mean absolute percentage error (SMAPE) saw refinements amid debates over its scaling factor for percentage expression, with variations using multipliers of 100 or 200 debated in the literature. The version consistent with the original formulation, allowing a maximum value of 200%, prevailed by the early 2000s, as endorsed in subsequent analyses of accuracy metrics.

SMAPE achieved key milestones through its adoption in major forecasting competitions, first as a core evaluation metric in the M3 Competition of 2000, which compared methods across over 3,000 time series. It reappeared in the M4 Competition of 2018, where it was combined with the mean absolute scaled error (MASE) to provide a balanced assessment of accuracy. Integration into software tools further propelled its use, including implementation in R's forecast package for automated forecasting.

Adoption expanded significantly by 2010, with SMAPE gaining traction in academia and industry for its robustness to zero actual values and symmetric penalization of forecast errors. A critique by Hyndman and Koehler exposed flaws such as possible negative values when the denominator is computed without absolute values, as well as persistent asymmetries, spurring variants such as scaled adjustments to address these limitations. By the 2020s, SMAPE had become a standard in libraries like sktime for forecasting model evaluation. It continues to hold relevance but is often paired with complementary measures such as MASE amid ongoing discourse on optimal relative error metrics.
