**Prediction error** quantifies one of two things:

- In regression analysis, it’s a measure of how well the model predicts the response variable.
- In classification (machine learning), it’s a measure of how well samples are classified into the correct category.

Sometimes, the term is used informally to mean exactly what it means in plain English (you’ve made some predictions, and there are some errors). In regression, the terms “prediction error” and “residuals” are sometimes used synonymously. Therefore, check the author’s intent before assuming they mean something specific (like the mean squared prediction error).
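The distinction between residuals and prediction errors can be illustrated with a quick sketch. The data below are hypothetical: a line is fit by ordinary least squares, residuals are computed on the fitting data, and prediction errors on new data drawn from the same process.

```python
import numpy as np

# Hypothetical data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 10, 20)
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 1, size=x_train.size)

# Fit y = b0 + b1*x by ordinary least squares.
b1, b0 = np.polyfit(x_train, y_train, 1)

# Residuals: errors on the same data used to fit the model.
residuals = y_train - (b0 + b1 * x_train)

# Prediction errors: errors on new, unseen data.
x_new = np.linspace(0, 10, 10)
y_new = 2.0 * x_new + 1.0 + rng.normal(0, 1, size=x_new.size)
prediction_errors = y_new - (b0 + b1 * x_new)

print(np.mean(residuals**2), np.mean(prediction_errors**2))
```

The mean squared residual tends to understate error on new data, because the fitted line is tuned to the training points; that gap is exactly why “prediction error” usually refers to performance on data the model has not seen.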

## Mean Squared Prediction Error (MSPE)

MSPE summarizes the predictive ability of a model. Ideally, this value should be close to zero, which means that your predictor is close to the true value. The concept is similar to Mean Squared Error (MSE), which is a measure of how well an estimator measures a parameter (or how close a regression line is to a set of points). The difference is that while the MSE measures an estimator’s fit, the MSPE measures a predictor’s fit, or how well it predicts the true value.
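As a minimal sketch (the function name and toy numbers below are illustrative, not from the source), MSPE is just the average of the squared differences between predicted and true values on held-out data:

```python
import numpy as np

def mspe(y_true, y_pred):
    """Mean squared prediction error on held-out data."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Toy illustration: squared errors are 0.25, 0.25, 0.0,
# so the MSPE is 0.5 / 3 ≈ 0.1667.
print(mspe([3.0, 5.0, 7.0], [2.5, 5.5, 7.0]))
```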

## Quantifying Prediction Errors

Prediction error can be quantified in several ways, depending on where you’re using it. In general, you can analyze the behavior of prediction error with bias and variance (Johari, n.d.).
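The bias–variance view can be checked numerically. The simulation below (entirely hypothetical data) refits a least-squares line on many training draws and decomposes the expected squared prediction error at a single point into squared bias, variance, and irreducible noise:

```python
import numpy as np

# Hypothetical setup: true model y = 2x + noise; evaluate at x0.
rng = np.random.default_rng(1)
x0, sigma = 5.0, 1.0
true_f = 2.0 * x0  # true mean response at x0

preds = []
for _ in range(2000):
    x = np.linspace(0, 10, 30)
    y = 2.0 * x + rng.normal(0, sigma, size=x.size)
    b1, b0 = np.polyfit(x, y, 1)   # refit on each training draw
    preds.append(b0 + b1 * x0)
preds = np.array(preds)

bias_sq = (preds.mean() - true_f) ** 2
variance = preds.var()
# Expected squared prediction error at x0 ≈ bias^2 + variance + sigma^2
print(bias_sq, variance, bias_sq + variance + sigma**2)
```

Because the fitted model matches the true form here, the bias term is close to zero and the variance term is small; a misspecified model would shift error into the bias term instead.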

In **statistics**, the root-mean-square error (RMSE) aggregates the magnitudes of prediction errors into a single number. The Rao-Blackwell theorem can be used to estimate prediction error as well as to improve the efficiency of initial estimators.
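RMSE is simply the square root of the mean squared prediction error, which puts it back on the same scale as the response variable. A minimal sketch (function name and sample values are illustrative):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: sqrt of the mean squared prediction error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Squared errors are 0, 0, 4, so RMSE = sqrt(4/3) ≈ 1.155.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```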

In **machine learning**, cross-validation (CV) assesses prediction error and trains the prediction rule. A second method, the bootstrap, begins by estimating the prediction rule’s sampling distribution (or that distribution’s parameters); it can also quantify prediction error and other aspects of the prediction rule.
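Cross-validation can be sketched in a few lines. The example below (hypothetical data, numpy only) uses 5-fold CV: the model is repeatedly fit on four folds and its squared prediction error is measured on the held-out fold, then the fold errors are averaged:

```python
import numpy as np

# Hypothetical data: y = 2x + 1 plus unit-variance noise.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1, size=x.size)

k = 5
idx = rng.permutation(x.size)
folds = np.array_split(idx, k)  # k disjoint held-out folds

fold_errors = []
for test_idx in folds:
    train_idx = np.setdiff1d(idx, test_idx)
    b1, b0 = np.polyfit(x[train_idx], y[train_idx], 1)  # fit on k-1 folds
    err = y[test_idx] - (b0 + b1 * x[test_idx])          # predict held-out fold
    fold_errors.append(np.mean(err ** 2))

cv_estimate = float(np.mean(fold_errors))  # CV estimate of the MSPE
print(cv_estimate)
```

Because every observation is held out exactly once, the CV estimate approximates the error the model would make on genuinely new data (here, close to the noise variance of 1).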

## References

Clark, T. & West, K. (2006). Using out-of-sample mean squared prediction errors to test the martingale difference hypothesis. Journal of Econometrics.

Johari, R. MS&E 226: “Small” Data Lecture 5: In-sample estimation of prediction error (v3). Retrieved October 2, 2019 from: https://web.stanford.edu/class/msande226/lecture5_prediction.pdf

Penn State Eberly College of Science: Prediction Error. Retrieved October 2, 2019 from: https://newonlinecourses.science.psu.edu/stat555/node/116/
