
Mean percentage error

The mean percentage error (MPE) is a statistical measure used primarily in forecasting to quantify the deviation between predicted and actual values, serving as an indicator of systematic bias in predictive models. It differs from absolute measures by preserving the sign of errors, allowing positive (under-forecasting) and negative (over-forecasting) deviations to cancel each other in the average. This relative measure scales errors into percentage units, facilitating comparisons across datasets with varying scales or units. The MPE is calculated using the formula:
\text{MPE} = \frac{100}{n} \sum_{t=1}^{n} \frac{a_t - f_t}{a_t}
where n is the number of observations, a_t is the actual value at time t, and f_t is the forecasted value at time t. This computation averages the individual percentage errors, defined as ((actual - forecast) / actual) × 100 for each period. A positive MPE suggests a tendency to under-forecast on average, while a negative value indicates over-forecasting, making it a valuable proxy for assessing bias in demand planning, economic projections, and other time-series predictions.

Despite its utility for bias detection, MPE has notable limitations. The cancellation of opposing errors can result in a deceptively low value even when individual errors are large, masking poor overall accuracy. Furthermore, the metric is undefined or infinite when any actual value equals zero, which poses challenges in datasets with intermittent or zero-demand scenarios. For these reasons, MPE is often used alongside scale-independent accuracy metrics like the mean absolute percentage error (MAPE) to provide a more complete evaluation of forecast performance.
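As a minimal sketch of this computation in Python (a hypothetical helper, not from any particular library, assuming all actual values are non-zero):

    import numpy as np

    def mpe(actual, forecast):
        """Mean percentage error, in percent; assumes no actual value is zero."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return 100.0 * np.mean((actual - forecast) / actual)

    print(mpe([100, 150, 200], [95, 139, 180]))  # ≈ 7.44: under-forecasting on average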

Definition and Calculation

Definition

The mean percentage error (MPE) is a statistical measure that assesses forecast accuracy by computing the average of signed percentage errors between predicted and observed values across a dataset. This approach quantifies the relative deviation of forecasts from actual outcomes, expressed as percentages, enabling evaluation of predictive performance in a normalized manner. MPE has been used in forecasting and statistical analysis since at least the late 20th century. In statistics and forecasting, MPE serves as an indicator of directional deviation, preserving the sign of individual errors to reveal systematic tendencies toward over- or under-prediction without taking absolute values. A positive MPE suggests overall underestimation, while a negative value indicates overestimation, thus facilitating bias detection in models. Unlike the related mean absolute percentage error (MAPE), which focuses on absolute deviations, MPE highlights bias direction.

Formula

The mean percentage error (MPE) is defined as the arithmetic mean of the individual percentage errors over n observations. The percentage error for the t-th observation, denoted p_t, is computed as p_t = 100 \times \frac{a_t - f_t}{a_t}, where a_t represents the actual value at time or index t, and f_t represents the corresponding forecasted value. The MPE is then obtained by averaging these percentage errors: \text{MPE} = \frac{1}{n} \sum_{t=1}^n p_t = \frac{100}{n} \sum_{t=1}^n \frac{a_t - f_t}{a_t}. This formulation assumes a_t > 0 for all t to avoid division by zero. The scaling factor of 100 converts the relative errors into percentage units, facilitating intuitive interpretation of the metric as an average bias in percent terms. This standard formula derives from first calculating the signed relative error (a_t - f_t)/a_t for each observation to capture directionality, scaling it by 100 to form p_t, and finally taking the unweighted mean across all n periods. In cases of non-uniform datasets where observations vary significantly in magnitude or importance, a weighted variant (WMPE) may be applied, such as \text{WMPE} = \left( \frac{\sum_{i=1}^n e_i}{\sum_{i=1}^n a_i} \right) \times 100, where e_i = a_i - f_i is the error for the i-th observation; this effectively weights each relative error by the actual value a_i, emphasizing larger observations in the average.
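A concrete sketch of both formulas (hypothetical function names, assuming non-zero actuals):

    import numpy as np

    def mpe(actual, forecast):
        """Unweighted MPE: mean of the signed percentage errors."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return 100.0 * np.mean((actual - forecast) / actual)

    def wmpe(actual, forecast):
        """Weighted MPE: total signed error divided by total actuals."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return 100.0 * np.sum(actual - forecast) / np.sum(actual)

    actual, forecast = [100, 150, 200], [95, 139, 180]
    print(mpe(actual, forecast), wmpe(actual, forecast))  # ≈ 7.44, 8.0

Note how the weighted variant gives the larger observations more influence: the 20-unit error on the actual of 200 pulls WMPE above the unweighted MPE.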

Properties and Interpretation

Indication of Bias

The mean percentage error (MPE) serves as a signed metric that detects systematic bias in forecasting models by preserving the direction of individual errors. A positive MPE value indicates under-forecasting, where predictions are systematically lower than actual values on average, while a negative MPE signals over-forecasting, with predictions exceeding actuals. This directional sensitivity allows practitioners to identify and adjust for consistent tendencies in model outputs, unlike absolute error measures such as the mean absolute percentage error (MAPE), which disregard error signs and thus cannot distinguish between over- and under-prediction.

In interpreting MPE, values close to zero suggest unbiased forecasts, implying that the model neither systematically over- nor under-predicts. The magnitude of the MPE provides insight into the strength of the bias; for instance, an MPE of +5% reflects an average underestimation of actual values by 5%, while -3% indicates a 3% overestimation. These magnitudes help quantify the practical impact of bias, guiding decisions on model refinement or recalibration to minimize directional inaccuracies.

To assess whether an observed MPE represents statistically significant bias, it can be analyzed using confidence intervals or hypothesis tests, such as a t-test on the underlying percentage errors to determine if their mean differs significantly from zero. If the 95% confidence interval for MPE excludes zero, this provides evidence of non-random bias in the forecasts, enabling rigorous evaluation of model reliability beyond point estimates.
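One reasonable way to formalize such a check (a sketch, not a prescribed procedure, using hypothetical data) is SciPy's one-sample t-test applied to the individual percentage errors, with a null hypothesis of zero mean:

    import numpy as np
    from scipy import stats

    actual = np.array([100.0, 150.0, 200.0, 50.0, 100.0])
    forecast = np.array([95.0, 139.0, 180.0, 60.0, 90.0])

    pct_errors = 100.0 * (actual - forecast) / actual
    t_stat, p_value = stats.ttest_1samp(pct_errors, popmean=0.0)

    # A p-value below 0.05 (equivalently, a 95% confidence interval
    # excluding zero) would indicate statistically significant bias.
    print(f"MPE = {pct_errors.mean():.2f}%, t = {t_stat:.2f}, p = {p_value:.3f}")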

Mathematical Properties

The mean percentage error (MPE) is defined as the average of the individual percentage errors, given by the formula \text{MPE} = \frac{100}{n} \sum_{i=1}^{n} \frac{y_i - \hat{y}_i}{y_i}, where y_i is the actual value, \hat{y}_i is the predicted value, and n is the number of observations. This formulation reveals that MPE is linear in the errors e_i = y_i - \hat{y}_i, as it can be rewritten as \text{MPE} = \frac{100}{n} \sum_{i=1}^{n} \frac{e_i}{y_i}, a linear function of the e_i for fixed actual values y_i. Consequently, scaling all errors by a constant factor k results in the MPE scaling by the same factor, facilitating additive assessments of bias when aggregating across disjoint subsets of data with similar characteristics, such as equal sample sizes and comparable y_i scales.

A key advantage of MPE is its scale independence, stemming from the percentage-based normalization. Since both the numerator (error) and denominator (actual value) are in the same units, MPE remains unchanged if all actual and predicted values are multiplied by a positive constant, such as converting from dollars to euros. This property allows MPE to be compared across datasets with different measurement scales, unlike unit-dependent metrics such as the mean error or root mean squared error.

Unlike absolute measures, MPE is signed in its treatment of over- and under-predictions, permitting positive and negative deviations to cancel each other out in the average. For instance, a +10% error paired with a -10% error yields a net zero contribution to MPE, which can lead to underestimation of overall inaccuracy if errors are balanced but individually large. This cancellation effect highlights MPE's utility for detecting systematic bias but underscores its distinction from metrics like MAPE, which prevent such offsetting.

In large samples, under assumptions of independent and identically distributed percentage errors with finite mean (e.g., in stationary forecasting processes), MPE converges in probability to the expected percentage error by the law of large numbers, providing a consistent estimate of systematic bias. This asymptotic behavior assumes no autocorrelation and finite variance, ensuring reliable inference as sample size increases.
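A brief numerical check of two of these properties, cancellation of balanced errors and invariance under rescaling (hypothetical values):

    import numpy as np

    def mpe(actual, forecast):
        return 100.0 * np.mean((actual - forecast) / actual)

    actual = np.array([100.0, 200.0])
    forecast = np.array([110.0, 180.0])   # percentage errors of -10% and +10%

    print(mpe(actual, forecast))              # 0.0: balanced signed errors cancel
    print(mpe(actual * 1.1, forecast * 1.1))  # 0.0: rescaling units leaves MPE unchanged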

Limitations and Challenges

Undefined Cases

The mean percentage error (MPE) becomes undefined in cases where any actual value a_t = 0, as the formula involves division by the actual value in the denominator, resulting in division by zero. This issue arises because the percentage error for each observation is computed as \frac{a_t - f_t}{a_t} \times 100, where f_t is the forecast, and the mean is taken across all observations; when a_t = 0, the term is mathematically indeterminate.

Such undefined cases are particularly prevalent in applications involving intermittent or sporadic demand, where actual values frequently include zeros, such as zero-sales periods, event counts with no occurrences, or other series containing zero quantities. For instance, in demand forecasting for products with irregular sales patterns, zero actuals can represent periods of no demand, making standard percentage-based metrics like MPE inapplicable without adjustment.

To mitigate this problem, common strategies include excluding observations where a_t = 0 from the MPE calculation, which preserves computability but may lead to biased evaluations if zeros are frequent. Another approach is to add a small positive constant \epsilon (e.g., 10^{-9}) to the denominator, modifying it to a_t + \epsilon, though this can introduce distortions in the error estimate, especially for small non-zero values. Alternatively, practitioners may switch to logarithmic or symmetric percentage error variants, such as the symmetric mean absolute percentage error (sMAPE), which avoids division by zero by using the average of actual and forecast in the denominator.

The presence of undefined cases can result in incomplete or unreliable model evaluations, as excluding zero-valued observations reduces the sample size and potentially overlooks critical low-demand scenarios. To address this, it is recommended to pre-process datasets by identifying and flagging zero actuals upfront, allowing for consistent application of appropriate metrics across analyses.
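The sketch below illustrates the two mitigation strategies just described, masking zero actuals versus adding a small epsilon; both are approximations, and the function names are hypothetical:

    import numpy as np

    def mpe_masked(actual, forecast):
        """MPE computed only over observations with non-zero actuals."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        mask = actual != 0
        return 100.0 * np.mean((actual[mask] - forecast[mask]) / actual[mask])

    def mpe_epsilon(actual, forecast, eps=1e-9):
        """MPE with a small constant added to the denominator.

        Caution: when an actual value is exactly 0, the corresponding term
        is still enormous unless eps is comparable to typical actuals.
        """
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return 100.0 * np.mean((actual - forecast) / (actual + eps))

    print(mpe_masked([100.0, 0.0, 50.0], [90.0, 5.0, 60.0]))  # -5.0: zero-actual period excluded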

Sensitivity to Small Actual Values

The relative nature of the mean percentage error (MPE) renders it highly sensitive to the magnitude of actual values, particularly when those values are small but non-zero. In the percentage error calculation, (a_t - f_t)/a_t, even modest deviations can produce disproportionately large relative errors when a_t is close to zero, amplifying the perceived inaccuracy of forecasts. For instance, if the actual value a_t = 1 and the forecast f_t = 2, the resulting percentage error is -100%, illustrating how small denominators exaggerate errors far beyond their absolute size. This amplification effect is a well-documented limitation of percentage-based metrics, which penalize inaccuracies in low-scale observations more severely than in high-scale ones.

In datasets with heterogeneous scales, such as sales records ranging from $1 to $1000 across items, MPE calculations can become dominated by low-value observations, skewing the overall metric and reducing its reliability for evaluating forecast performance across the entire dataset. Low-volume items contribute outsized errors that overshadow errors in higher-volume items, leading to a biased picture of model accuracy. This issue is particularly pronounced in diverse or multi-series scenarios, where varying magnitudes distort comparative assessments.

To mitigate these scale-related sensitivities, MPE should be used in conjunction with scale-robust metrics, such as the mean absolute scaled error (MASE), which normalizes errors relative to in-sample naive forecasts and avoids division by small values. Data normalization techniques, like logarithmic transformations, can also help stabilize scales before applying MPE, especially in contexts like intermittent demand where low values are common. In such scenarios, combining MPE with absolute error measures provides a more balanced evaluation.

Empirical studies on intermittent and low-volume series have demonstrated MPE's instability, with post-2000s research showing that it often yields unstable rankings of forecasting methods due to division by small values in sparse environments. For example, analyses of intermittent demand patterns reveal that MPE can produce counter-intuitive results, overemphasizing errors in periods of low activity and undermining its utility for model selection in real-world applications like inventory management. This is evident in comparisons across models, where MPE fails to consistently identify superior forecasters compared to mean-based alternatives.
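The amplification effect can be reproduced in a few lines; in this sketch (hypothetical numbers), a single low-valued observation dominates the metric:

    import numpy as np

    def mpe(actual, forecast):
        return 100.0 * np.mean((actual - forecast) / actual)

    # Four accurately forecast high-value items plus one low-value item
    actual = np.array([1000.0, 1000.0, 1000.0, 1000.0, 1.0])
    forecast = np.array([990.0, 1010.0, 995.0, 1005.0, 2.0])

    print(mpe(actual, forecast))  # -20.0: the single a_t = 1 observation dominates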

Versus Mean Absolute Percentage Error

The mean percentage error (MPE) and the mean absolute percentage error (MAPE) are both relative measures of forecast accuracy, but they differ fundamentally in how they treat errors. MPE calculates the average of signed percentage errors, given by \frac{100}{n} \sum_{t=1}^{n} \frac{a_t - f_t}{a_t}, where a_t is the actual value and f_t is the forecast at time t, allowing it to indicate the direction of bias (positive for under-forecasting, negative for over-forecasting). In contrast, MAPE uses absolute values to focus on the magnitude of errors, defined as \frac{100}{n} \sum_{t=1}^{n} \left| \frac{a_t - f_t}{a_t} \right|, ignoring the sign to provide a measure of overall inaccuracy. This signed versus absolute distinction means MPE can detect systematic over- or under-prediction, while MAPE quantifies the typical error size without regard to direction.

MPE is particularly useful for bias detection in applications such as demand planning, where consistent underestimation could lead to stockouts, whereas MAPE is preferred for assessing general accuracy across models or datasets, since it avoids cancellation and yields a non-negative value interpretable as an average relative deviation. For instance, MAPE is always non-negative and reflects the proportional error magnitude, making it suitable for comparing performance on series with different scales, but MPE can equal zero even if substantial errors exist, as positive and negative deviations may balance out, potentially understating the true inaccuracy. In practice, analysts often compute both to gain complementary insights: MPE for directional trends and MAPE for magnitude.

Both metrics share vulnerabilities that limit their robustness. They become undefined when any actual value a_t = 0, a common issue in datasets with intermittent demand or frequent zeros, such as retail sales records. Additionally, they exhibit high sensitivity to small actual values, where even minor forecast errors can produce disproportionately large percentages, amplifying outliers and distorting overall assessment. These properties necessitate preprocessing, like excluding zero values or using scaled alternatives, to ensure reliable interpretation.

Historically, MAPE gained prominence in the 1980s through its adoption as the primary accuracy metric in the M-Competition, a seminal forecasting competition organized by Spyros Makridakis and colleagues, which compared methods on real time series data and fueled debates over simple versus complex forecasting methods. MPE, as a bias-oriented complement to absolute measures like MAPE, has since appeared in the forecasting literature alongside discussions of error properties in textbooks and empirical studies seeking to address MAPE's directional limitations.
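To see the two metrics diverge, consider a small comparison (hypothetical values) in which balanced errors drive MPE to zero while MAPE reports the true error magnitude:

    import numpy as np

    actual = np.array([100.0, 100.0])
    forecast = np.array([80.0, 120.0])   # percentage errors of +20% and -20%

    pe = 100.0 * (actual - forecast) / actual
    print(pe.mean())           # MPE  =  0.0: signed errors cancel
    print(np.abs(pe).mean())   # MAPE = 20.0: magnitudes do not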

Versus Mean Error

The mean error (ME), also known as the average error or forecast bias, measures the average signed deviation between actual values and forecasts in the original units of the data, calculated as ME = \frac{1}{n} \sum_{t=1}^n (a_t - f_t), where a_t is the actual value and f_t is the forecast at time t. In contrast, the mean percentage error (MPE) normalizes these signed errors by the actual values to express bias on a percentage scale, given by MPE = \frac{100}{n} \sum_{t=1}^n \frac{a_t - f_t}{a_t}, enabling direct comparability across datasets with varying scales or units. This scaling transforms ME into a relative measure, highlighting proportional rather than absolute differences.

MPE offers advantages over ME by being unit-free, which facilitates comparisons in heterogeneous environments, such as evaluating forecasts across multiple series with different magnitudes. For instance, ME remains tied to the data's scale, making it less intuitive for cross-dataset analysis, whereas MPE provides a standardized interpretation of bias. However, MPE is more sensitive to small actual values, potentially amplifying errors when a_t is near zero and risking undefined results if any a_t = 0, while ME maintains stability in such cases but fails to convey relative impact.

In practice, ME is suitable for assessing absolute bias in datasets with consistent units, such as single-series production forecasts where raw deviations matter for operational decisions. Conversely, MPE excels in multi-product or cross-series settings, such as demand forecasting involving diverse item volumes, where relative bias informs scalability and prioritization. Both metrics share a signed formulation that detects systematic over- or under-prediction, but MPE's relativity better supports benchmarking across varied applications.
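A short sketch contrasting the two metrics on the same data (hypothetical values); ME stays in the data's units while MPE weights each deviation by its actual value:

    import numpy as np

    actual = np.array([10.0, 1000.0])
    forecast = np.array([8.0, 998.0])   # identical absolute error of 2 units

    me = np.mean(actual - forecast)                       # 2.0, in original units
    mpe = 100.0 * np.mean((actual - forecast) / actual)   # ≈ 10.1%
    print(me, mpe)  # the low-valued series dominates the relative view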

Applications and Examples

Use in Forecasting

Mean percentage error (MPE) finds primary application in time series forecasting across domains such as demand and sales prediction, economic indicators, energy, and weather, where it serves as a key indicator of systematic bias in predictive models. In supply chain contexts, MPE helps evaluate forecast accuracy for inventory planning by quantifying average deviations in percentage terms, enabling adjustments to avoid overstocking or shortages. Similarly, in economics and agriculture, it assesses biases in projections such as GDP or commodity prices, as seen in evaluations of U.S. Department of Agriculture long-term agricultural forecasts, where MPE revealed tendencies toward over- or under-forecasting harvested areas.

In model evaluation workflows, MPE is used to tune models toward unbiased predictions by identifying directional errors that might otherwise cancel out in absolute measures, often combined with metrics like the mean absolute percentage error (MAPE) or root mean squared error (RMSE) for a holistic view of both bias and accuracy. This approach supports decision-making in supply chains, such as refining demand plans in retail to minimize operational costs from biased estimates.

Software tools facilitate MPE computation in forecasting pipelines. In R, the forecast package's accuracy() function directly calculates MPE alongside other error metrics for fitted models. Python implementations are available through libraries like NumPy for custom calculations or forecasting packages such as statsmodels, from whose prediction errors MPE can be derived by scaling with the actual values. In Excel, MPE can be computed with a formula like AVERAGE((actual - forecast)/actual * 100) entered as an array formula across the dataset for quick model evaluation (see the sketch at the end of this section).

Post-2000s advancements in computational forecasting have highlighted MPE's role in detecting biases in specialized applications. For instance, in weather prediction, services such as meteoblue employ MPE to measure errors in precipitation sum forecasts, aiding improvements in model reliability for hydrological planning. In stock price forecasting, a 2024 study on hybrid CNN-LSTM models for share prices reported an MPE of 2.438%, demonstrating low bias in predictions for volatile markets and underscoring MPE's utility in financial algorithm tuning.
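As a minimal pipeline illustration, a NumPy computation mirroring the Excel array formula above (hypothetical values):

    import numpy as np

    actual = np.array([112.0, 118.0, 132.0, 129.0])
    forecast = np.array([110.0, 121.0, 128.0, 130.0])

    mpe = np.mean((actual - forecast) / actual) * 100.0
    print(f"MPE = {mpe:.2f}%")  # small positive value: mild under-forecasting on average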

Numerical Example

To illustrate the calculation of the mean percentage error (MPE), consider a hypothetical dataset of five time periods in a sales forecasting scenario, with the following actual values A_i and forecasted values F_i:
Period   Actual (A_i)   Forecast (F_i)
1        100            95
2        150            139
3        200            180
4        50             60
5        100            90
The MPE is computed as the mean of the individual percentage errors, where each is \text{PE}_i = \frac{A_i - F_i}{A_i} \times 100\%. The step-by-step calculations are as follows:
  • Period 1: \text{PE}_1 = \frac{100 - 95}{100} \times 100\% = 5\%
  • Period 2: \text{PE}_2 = \frac{150 - 139}{150} \times 100\% \approx 7.33\%
  • Period 3: \text{PE}_3 = \frac{200 - 180}{200} \times 100\% = 10\%
  • Period 4: \text{PE}_4 = \frac{50 - 60}{50} \times 100\% = -20\%
  • Period 5: \text{PE}_5 = \frac{100 - 90}{100} \times 100\% = 10\%
The mean percentage error is then \text{MPE} = \frac{5 + 7.33 + 10 + (-20) + 10}{5} \approx 2.5\%. This positive MPE value indicates a slight bias toward under-forecasting across the periods, meaning the model systematically predicts values lower than the actual outcomes. In practice, such a result might prompt adjustments to the forecasting model, such as incorporating trend factors or recalibrating parameters to reduce the underestimation and improve overall bias neutrality.

To highlight the sensitivity of MPE to small actual values, consider a variation where the actual value in Period 4 is changed to 5 (while keeping the absolute forecast error similar at 10 units, so F_4 = 15). The other periods remain unchanged, yielding:
  • Period 4 (varied): \text{PE}_4 = \frac{5 - 15}{5} \times 100\% = -200\%
The revised MPE becomes \frac{5 + 7.33 + 10 + (-200) + 10}{5} \approx -33.5\%, a dramatic shift due to the division by the small denominator, which amplifies the percentage error even for modest absolute discrepancies. This demonstrates how MPE can become unstable or misleading when actual values approach zero, a common challenge in datasets with low-volume periods.
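The worked example, including the Period 4 variation, can be verified in a few lines (a NumPy sketch):

    import numpy as np

    actual = np.array([100.0, 150.0, 200.0, 50.0, 100.0])
    forecast = np.array([95.0, 139.0, 180.0, 60.0, 90.0])
    print(100.0 * np.mean((actual - forecast) / actual))  # ≈ 2.47, i.e. ≈ 2.5%

    # Vary Period 4: small actual value, similar absolute error
    actual[3], forecast[3] = 5.0, 15.0
    print(100.0 * np.mean((actual - forecast) / actual))  # ≈ -33.53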
