MASE (Mean Absolute Scaled Error)

Eval

MASE (Mean Absolute Scaled Error)

TL;DR
  • MASE scales absolute error by the seasonal naïve forecast, enabling fair comparisons across series.
  • Compute MASE in Python and check whether your model beats the naïve baseline.
  • Understand the impact of the seasonal period parameter and how to handle zero denominators.

1. Definition #

$$ \mathrm{MASE} = \frac{\frac{1}{n} \sum_{t=1}^{n} | y_t - \hat{y}_t |}{\frac{1}{n-m} \sum_{t=m+1}^{n} | y_t - y_{t-m} |} $$

  • \(m\) is the seasonal period (1 for non-seasonal series).
  • Denominator = mean absolute error of the seasonal naïve forecast.
  • MASE < 1 means the model outperforms the seasonal naïve benchmark.
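As a quick sanity check, the definition can be worked through on a toy non-seasonal series (m = 1); the numbers below are illustrative only:

```python
import numpy as np

# Toy series with seasonal period m = 1 (non-seasonal).
y_true = np.array([3.0, 5.0, 4.0, 6.0])
y_pred = np.array([2.5, 5.0, 4.5, 5.5])

# Numerator: mean absolute error of the model.
mae_model = np.mean(np.abs(y_true - y_pred))           # (0.5 + 0 + 0.5 + 0.5) / 4 = 0.375

# Denominator: MAE of the one-step naive forecast y_t = y_{t-1}.
mae_naive = np.mean(np.abs(y_true[1:] - y_true[:-1]))  # (2 + 1 + 2) / 3 ≈ 1.667

mase = mae_model / mae_naive
print(mase)  # 0.225 → well below 1, so the model beats the naive baseline
```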

2. Python implementation #

scikit-learn does not provide MASE directly; implement it manually:

import numpy as np

def mase(y_true: np.ndarray, y_pred: np.ndarray, m: int = 1) -> float:
    """Mean Absolute Scaled Error with seasonal period ``m``."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Denominator: MAE of the seasonal naive forecast y_t = y_{t-m}.
    scale = np.mean(np.abs(y_true[m:] - y_true[:-m]))
    if scale == 0:
        # The seasonal naive forecast is perfect; MASE is undefined.
        return float("nan")
    return float(np.mean(np.abs(y_true - y_pred)) / scale)

Handle zero or near-zero denominators by filtering such series or adding a small epsilon.
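One way to implement that guard is sketched below as a hypothetical `mase_safe` variant; the `eps` tolerance is an illustrative choice, not a standard parameter:

```python
import numpy as np

def mase_safe(y_true, y_pred, m: int = 1, eps: float = 1e-8) -> float:
    """MASE variant that treats a near-zero denominator as undefined.

    ``eps`` is an illustrative tolerance, not a standard parameter.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    scale = np.mean(np.abs(y_true[m:] - y_true[:-m]))
    if scale < eps:
        # The seasonal naive forecast is (almost) perfect: MASE is undefined.
        return float("nan")
    return float(np.mean(np.abs(y_true - y_pred)) / scale)

# A constant series makes the naive error zero, so the result is NaN:
print(mase_safe([5.0, 5.0, 5.0, 5.0], [5.0, 5.1, 4.9, 5.0]))  # nan
```

Returning `NaN` (rather than raising) makes it easy to filter such series out when aggregating scores over many time series.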


3. Interpretation #

  • MASE < 1: better than seasonal naïve.
  • MASE = 1: equivalent performance.
  • MASE > 1: worse than the baseline.

Because the metric is scale-free, it can be averaged across time series with very different magnitudes without any one series dominating.
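A minimal sketch of that cross-series averaging, assuming two hypothetical series that differ in scale by a factor of 1,000:

```python
import numpy as np

def mase(y_true, y_pred, m: int = 1) -> float:
    """Mean Absolute Scaled Error with seasonal period ``m``."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    scale = np.mean(np.abs(y_true[m:] - y_true[:-m]))
    return float(np.mean(np.abs(y_true - y_pred)) / scale)

# Same forecasting behaviour, but the second series is 1,000x larger.
series = [
    (np.array([10.0, 12.0, 11.0, 13.0]),
     np.array([10.5, 11.5, 11.5, 12.5])),
    (np.array([10_000.0, 12_000.0, 11_000.0, 13_000.0]),
     np.array([10_500.0, 11_500.0, 11_500.0, 12_500.0])),
]

scores = [mase(y, yhat) for y, yhat in series]
print(scores)            # identical MASE despite the scale gap
print(np.mean(scores))   # a meaningful aggregate across both series
```

With an unscaled metric such as MAE, the larger series would dominate the average; here both contribute equally.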


4. Practical usage #

  • Demand forecasting: compare multiple models across SKUs with unequal volumes.
  • Model selection: pick the model with the lowest MASE, since the metric normalises for both scale and seasonal effects.
  • Reporting: highlight relative improvement (“20% better than naïve”) rather than absolute numbers only.

5. Caveats #

  • When the seasonal naïve error is zero (very smooth series), MASE becomes undefined—review the chosen period.
  • Short series can yield unstable denominators; ensure sufficient length or use rolling estimates.
  • Combine with MAE/RMSE to retain insight into absolute error magnitude.

Summary #

  • MASE benchmarks models against a simple yet strong baseline, making performance comparable across series.
  • The m parameter allows it to handle seasonal patterns while avoiding MAPE’s zero-demand issues.
  • Pair it with absolute metrics to balance relative improvement and real-world impact.