Mean signed deviation


In statistics, the mean signed difference (MSD), also known as mean signed deviation and mean signed error, is a sample statistic that summarises how well a set of estimates match the quantities that they are supposed to estimate. It is one of a number of statistics that can be used to assess an estimation procedure, and it would often be used in conjunction with a sample version of the mean square error.

For example, suppose a linear regression model has been estimated over a sample of data, and is then used to extrapolate predictions of the dependent variable out of sample after the out-of-sample data points have become available. Then $\theta_i$ would be the i-th out-of-sample value of the dependent variable, and $\hat{\theta}_i$ would be its predicted value. The mean signed deviation is the average value of $\hat{\theta}_i - \theta_i$.
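
As a minimal sketch of this workflow (assuming NumPy and scikit-learn are available; the data are synthetic and the variable names are hypothetical, not part of the article):

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical in-sample data: y depends linearly on x, plus noise.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, size=(50, 1))
y_train = 2.0 * x_train[:, 0] + 1.0 + rng.normal(0, 1, size=50)

model = LinearRegression().fit(x_train, y_train)

# Out-of-sample points whose dependent-variable values later became available.
x_new = rng.uniform(0, 10, size=(20, 1))
y_new = 2.0 * x_new[:, 0] + 1.0 + rng.normal(0, 1, size=20)   # observed values (theta_i)
y_pred = model.predict(x_new)                                  # predicted values (theta-hat_i)

# Mean signed deviation: the average of the signed differences.
msd = np.mean(y_pred - y_new)
print(msd)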

Definition

The mean signed difference is derived from a set of n pairs, $(\hat{\theta}_i, \theta_i)$, where $\hat{\theta}_i$ is an estimate of the parameter $\theta$ in a case where it is known that $\theta = \theta_i$. In many applications, all the quantities $\theta_i$ will share a common value. When applied to forecasting in a time series analysis context, a forecasting procedure might be evaluated using the mean signed difference, with $\hat{\theta}_i$ being the predicted value of a series at a given lead time and $\theta_i$ being the value of the series eventually observed for that time-point. The mean signed difference is defined to be

$$\operatorname{MSD}(\hat{\theta}) = \frac{1}{n}\sum_{i=1}^{n} \left(\hat{\theta}_i - \theta_i\right).$$
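
A direct transcription of this definition, as a short Python sketch (the function name and the sample numbers are illustrative, not part of the article):

def mean_signed_difference(estimates, true_values):
    """Average of the signed differences (estimate minus true value) over the n pairs."""
    if len(estimates) != len(true_values):
        raise ValueError("estimates and true_values must have the same length")
    return sum(e - t for e, t in zip(estimates, true_values)) / len(estimates)

# Example: three forecasts compared with the values eventually observed.
print(mean_signed_difference([2.5, 3.0, 4.5], [2.0, 3.5, 4.0]))  # 0.1666...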

Use cases

The mean signed difference is often useful when the estimates are biased away from the true values in a consistent direction. If the estimator that produces the values $\hat{\theta}_i$ is unbiased, then $\operatorname{MSD}(\hat{\theta}) = 0$. However, if the estimates are produced by a biased estimator, the mean signed difference is a useful tool for understanding the direction of the estimator's bias.
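
For instance, in a sketch with hypothetical numbers, an estimator that systematically overestimates gives a positive mean signed difference, while one that systematically underestimates gives a negative value:

true_values = [10.0, 12.0, 8.0, 11.0]

over_estimates = [v + 1.5 for v in true_values]    # estimator biased upward
under_estimates = [v - 2.0 for v in true_values]   # estimator biased downward

msd_over = sum(e - t for e, t in zip(over_estimates, true_values)) / len(true_values)
msd_under = sum(e - t for e, t in zip(under_estimates, true_values)) / len(true_values)

print(msd_over)   # 1.5  -> positive: estimates tend to exceed the true values
print(msd_under)  # -2.0 -> negative: estimates tend to fall below the true values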

See also