Mean absolute error
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement. MAE is calculated as the sum of absolute errors (i.e., the Manhattan distance) divided by the sample size:
$$\mathrm{MAE} = \frac{\sum_{i=1}^{n} \left| y_i - x_i \right|}{n} = \frac{\sum_{i=1}^{n} \left| e_i \right|}{n}.$$
It is thus an arithmetic average of the absolute errors $|e_i| = |y_i - x_i|$, where $y_i$ is the prediction and $x_i$ the true value. Alternative formulations may include relative frequencies as weight factors. The mean absolute error uses the same scale as the data being measured; it is therefore a scale-dependent accuracy measure and cannot be used to compare predictions made on different scales. The mean absolute error is a common measure of forecast error in time series analysis, where it is sometimes confused with the more standard definition of mean absolute deviation. The same confusion exists more generally.
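As a minimal numerical sketch of the definition above (illustrative values only, NumPy assumed), the MAE of a set of predictions against observations can be computed as:

```python
import numpy as np

def mean_absolute_error(y_pred, x_true):
    """Average of the absolute errors |y_i - x_i| over the sample."""
    y_pred = np.asarray(y_pred, dtype=float)
    x_true = np.asarray(x_true, dtype=float)
    return np.mean(np.abs(y_pred - x_true))

# Hypothetical predicted (y) versus observed (x) values, for illustration only.
y = [2.5, 0.0, 2.1, 7.8]
x = [3.0, -0.5, 2.0, 8.0]
print(mean_absolute_error(y, x))  # (0.5 + 0.5 + 0.1 + 0.2) / 4 = 0.325
```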
Quantity disagreement and allocation disagreement
In remote sensing the MAE is sometimes expressed as the sum of two components: quantity disagreement and allocation disagreement. Quantity disagreement is the absolute value of the mean error:
$$\left| \frac{\sum_{i=1}^{n} (y_i - x_i)}{n} \right|.$$
Allocation disagreement is MAE minus quantity disagreement.
It is also possible to identify the types of difference by looking at an $(x, y)$
plot. Quantity difference exists when the average of the X values does not equal the average of the Y values. Allocation difference exists if and only if points reside on both sides of the identity line.
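A small sketch of this decomposition (made-up values, NumPy assumed) computes the MAE, the quantity disagreement, and the allocation disagreement of a paired sample:

```python
import numpy as np

def mae_decomposition(y_pred, x_true):
    """Split MAE into quantity disagreement and allocation disagreement."""
    e = np.asarray(y_pred, dtype=float) - np.asarray(x_true, dtype=float)
    mae = np.mean(np.abs(e))
    quantity = abs(np.mean(e))       # absolute value of the mean error
    allocation = mae - quantity      # the remainder of the MAE
    return mae, quantity, allocation

# Hypothetical values with errors on both sides of the identity line.
y = [4.0, 2.0, 6.0, 3.0]
x = [3.0, 3.0, 5.0, 5.0]
print(mae_decomposition(y, x))  # MAE = 1.25, quantity = 0.25, allocation = 1.0
```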
Related measures
The mean absolute error is one of a number of ways of comparing forecasts with their eventual outcomes. Well-established alternatives are the mean absolute scaled error (MASE), mean absolute log error (MALE), and the mean squared error. These all summarize performance in ways that disregard the direction of over- or under-prediction; a measure that does place emphasis on this is the mean signed difference.
Where a prediction model is to be fitted using a selected performance measure, the fitting approach corresponding to the mean absolute error is least absolute deviations, in the same sense that the least squares approach corresponds to the mean squared error.
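As a sketch of that correspondence (the synthetic data and the use of SciPy's general-purpose minimiser are assumptions made here for illustration; dedicated L1 solvers exist), a line can be fitted by least absolute deviations simply by minimising the MAE of its residuals:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: a noisy line, used only to illustrate the fit.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.laplace(scale=1.0, size=x.size)

def mae_loss(params):
    slope, intercept = params
    return np.mean(np.abs(y - (slope * x + intercept)))

# Least absolute deviations: minimise the MAE of the residuals.
# Nelder-Mead is chosen because the objective is not everywhere differentiable.
result = minimize(mae_loss, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)  # slope and intercept roughly 2 and 1
```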
MAE is not identical to root-mean-square error (RMSE), although some researchers report and interpret it that way. The MAE is conceptually simpler and also easier to interpret than RMSE: it is simply the average absolute vertical or horizontal distance between each point in a scatter plot and the Y = X line. In other words, MAE is the average absolute difference between X and Y. Furthermore, each error contributes to MAE in proportion to the absolute value of the error. This is in contrast to RMSE, which involves squaring the differences, so that a few large differences will increase the RMSE to a greater degree than the MAE.
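The different sensitivity to large errors can be seen in a short numerical comparison (hypothetical error values, NumPy assumed):

```python
import numpy as np

def mae(errors):
    return np.mean(np.abs(errors))

def rmse(errors):
    return np.sqrt(np.mean(np.square(errors)))

e = np.array([1.0, -1.0, 1.0, -1.0])
print(mae(e), rmse(e))  # 1.0 and 1.0: identical when all errors have equal magnitude

e_outlier = np.array([1.0, -1.0, 1.0, -10.0])
print(mae(e_outlier), rmse(e_outlier))  # 3.25 and about 5.07: the single large error inflates RMSE more
```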
Optimality property
The mean absolute error of a real variable c with respect to the random variable X is
$$E\left(|X - c|\right).$$
Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X. In particular, m is a sample median if and only if m minimizes the arithmetic mean of the absolute deviations.
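A quick numerical check of the sample version of this statement (made-up sample, NumPy assumed) scans candidate values of c and finds the mean absolute deviation minimised at the sample median:

```python
import numpy as np

sample = np.array([1.0, 2.0, 4.0, 7.0, 9.0])

def mean_abs_dev(c):
    return np.mean(np.abs(sample - c))

candidates = np.linspace(0.0, 10.0, 1001)
best = candidates[np.argmin([mean_abs_dev(c) for c in candidates])]
print(best, np.median(sample))  # both approximately 4.0, the sample median
```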
More generally, a median is defined as a minimizer of
$$E\left(|X - c| - |X|\right),$$
as discussed at Multivariate median (and specifically at Spatial median).
This optimization-based definition of the median is useful in statistical data-analysis, for example, in k-medians clustering.
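As one illustration of that use, the following is a minimal one-dimensional k-medians sketch (hypothetical data, a Lloyd-style iteration assumed for simplicity): points are assigned by absolute distance, and each centre is updated to the median of its cluster, since the median minimises the within-cluster sum of absolute deviations.

```python
import numpy as np

def k_medians(points, k, n_iter=20, seed=0):
    """Minimal 1-D k-medians: assign by absolute distance, update centres to cluster medians."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    centres = rng.choice(points, size=k, replace=False)
    for _ in range(n_iter):
        # Assign each point to the nearest centre under the absolute (L1) distance.
        labels = np.argmin(np.abs(points[:, None] - centres[None, :]), axis=1)
        # Update each non-empty cluster's centre to its median.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = np.median(points[labels == j])
    return centres

data = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
print(k_medians(data, k=2))  # centres near 1.5 and 8.5
```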
Proof of optimality
Statement: The predictor minimising $\mathbb{E}|y - \hat{y}|$ is $\hat{f}(x) = \operatorname{Median}(y \mid X = x)$.
Proof:
The expected absolute loss, conditional on $X = x$, is
$$\begin{aligned}
L &= \mathbb{E}\left[\,|y - a| \mid X = x\,\right] \\
  &= \int_{-\infty}^{\infty} |y - a|\, f_{Y|X}(y)\, dy \\
  &= \int_{-\infty}^{a} (a - y)\, f_{Y|X}(y)\, dy + \int_{a}^{\infty} (y - a)\, f_{Y|X}(y)\, dy.
\end{aligned}$$
Differentiating with respect to $a$ and setting the derivative to zero (the first-order condition for a minimum) gives
$$\frac{\partial}{\partial a} L = \int_{-\infty}^{a} f_{Y|X}(y)\, dy - \int_{a}^{\infty} f_{Y|X}(y)\, dy = 0.$$
This means
$$\int_{-\infty}^{a} f_{Y|X}(y)\, dy = \int_{a}^{\infty} f_{Y|X}(y)\, dy.$$
Hence,
$$F_{Y|X}(a) = 0.5,$$
that is, the optimal $a$ is a median of the conditional distribution of $y$ given $X = x$, as claimed.
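A numerical sanity check of this conclusion (assuming, purely for illustration, a standard exponential conditional distribution and a Monte Carlo approximation of the expectation with SciPy/NumPy) finds the grid value of a minimising E|Y − a| and confirms that the CDF there is about 0.5, i.e. that a sits at the median (ln 2 ≈ 0.693):

```python
import numpy as np
from scipy import stats

dist = stats.expon()  # assumed stand-in for the conditional distribution of y given X = x
samples = dist.rvs(size=200_000, random_state=0)

def expected_abs_error(a):
    # Monte Carlo approximation of E|Y - a| under the assumed distribution.
    return np.mean(np.abs(samples - a))

a_grid = np.linspace(0.05, 2.0, 40)             # coarse grid of candidate predictions
a_best = min(a_grid, key=expected_abs_error)
print(a_best, dist.median(), dist.cdf(a_best))  # roughly 0.70, 0.693..., and roughly 0.50
```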
See also
Least absolute deviations
Manhattan distance
Mean absolute percentage error
Mean percentage error
Symmetric mean absolute percentage error