Robust measures of scale
In statistics, robust measures of scale are methods that quantify the statistical dispersion in a sample of numerical data while resisting outliers. The most common such robust statistics are the interquartile range (IQR) and the median absolute deviation (MAD). These are contrasted with conventional or non-robust measures of scale, such as sample standard deviation, which are greatly influenced by outliers.
These robust statistics are particularly used as estimators of a scale parameter, and have the advantages of both robustness and superior efficiency on contaminated data, at the cost of inferior efficiency on clean data from distributions such as the normal distribution. To illustrate robustness, the standard deviation can be made arbitrarily large by increasing exactly one observation (it has a breakdown point of 0, as it can be contaminated by a single point), a defect that is not shared by robust statistics.
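As a minimal illustration of this breakdown behavior (a sketch assuming NumPy; the data are arbitrary):

```python
import numpy as np

x = np.arange(10.0)
x_bad = x.copy()
x_bad[-1] = 1e6          # corrupt a single observation

iqr = lambda a: np.subtract(*np.percentile(a, [75, 25]))
print(np.std(x), np.std(x_bad))  # the standard deviation explodes
print(iqr(x), iqr(x_bad))        # the IQR barely moves
```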
IQR and MAD
One of the most common robust measures of scale is the interquartile range (IQR), the difference between the 75th percentile and the 25th percentile of a sample; this is the 25% trimmed range, an example of an L-estimator. Other trimmed ranges, such as the interdecile range (10% trimmed range), can also be used.
For a Gaussian distribution, the IQR is related to $\sigma$ as

$$\sigma \approx 0.7413\,\operatorname{IQR} = \operatorname{IQR}/1.349.$$
Another familiar robust measure of scale is the median absolute deviation (MAD), the median of the absolute values of the differences between the data values and the overall median of the data set; for a Gaussian distribution, MAD is related to $\sigma$ as

$$\sigma \approx 1.4826\,\operatorname{MAD} \approx \operatorname{MAD}/0.6745.$$
See Median absolute deviation § Relation to standard deviation for details.
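As a minimal numerical sketch of both statistics (assuming NumPy; the contamination fractions are arbitrary), the rescaled IQR and MAD recover σ ≈ 1 on a contaminated sample where the classical standard deviation is badly inflated:

```python
import numpy as np

rng = np.random.default_rng(0)
# 99% N(0, 1) with 1% gross outliers from N(0, 20)
x = np.concatenate([rng.normal(0, 1, 9900), rng.normal(0, 20, 100)])

q75, q25 = np.percentile(x, [75, 25])
print((q75 - q25) / 1.349)                           # IQR-based sigma, ~1.0
print(1.4826 * np.median(np.abs(x - np.median(x))))  # MAD-based sigma, ~1.0
print(np.std(x, ddof=1))                             # classical sigma, ~2.2
```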
Estimation
Robust measures of scale can be used as estimators of properties of the population, either for parameter estimation or as estimators of their own expected value.
For example, robust estimators of scale are used to estimate the population standard deviation, generally by multiplying by a scale factor that makes the estimator unbiased and consistent; see scale parameter: estimation. For instance, dividing the IQR by $2{\sqrt{2}}\,\operatorname{erf}^{-1}(1/2)$ (approximately 1.349) makes it an unbiased, consistent estimator for the population standard deviation if the data follow a normal distribution.
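As a quick numerical check of that constant (a sketch assuming SciPy is available):

```python
from math import sqrt
from scipy.special import erfinv

# The 75th percentile of N(0, 1) is sqrt(2)*erfinv(0.5), so the IQR
# spans 2*sqrt(2)*erfinv(0.5) standard deviations.
print(2 * sqrt(2) * erfinv(0.5))  # 1.3489795...
```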
In other situations, it makes more sense to think of a robust measure of scale as an estimator of its own expected value, interpreted as an alternative to the population standard deviation as a measure of scale. For example, the MAD of a sample from a standard Cauchy distribution is an estimator of the population MAD, which in this case is 1, whereas the population variance does not exist.
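A minimal sketch of this point, assuming NumPy (the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_cauchy(100_000)

print(np.median(np.abs(x - np.median(x))))  # ~1, the population MAD
print(x.var())  # erratic across seeds: the population variance does not exist
```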
Efficiency
These robust estimators typically have inferior statistical efficiency compared to conventional estimators for data drawn from a distribution without outliers (such as a normal distribution), but have superior efficiency for data drawn from a mixture distribution or from a heavy-tailed distribution, for which non-robust measures such as the standard deviation should not be used.
For example, for data drawn from the normal distribution, the MAD is 37% as efficient as the sample standard deviation, while the Rousseeuw–Croux estimator $Q_n$ is 82% as efficient as the sample standard deviation.
Absolute pairwise differences
Rousseeuw and Croux propose alternatives to the MAD, motivated by two weaknesses of it:
It is inefficient (37% efficiency) for Gaussian distributions.
It computes a statistic symmetric about a location estimate, and so does not deal with skewness.
They propose two alternative statistics based on pairwise differences, $S_n$ and $Q_n$, defined as

$$\begin{aligned}S_{n}&:=1.1926\,\operatorname{med}_{i}\left(\operatorname{med}_{j}\left(\left|x_{i}-x_{j}\right|\right)\right),\\Q_{n}&:=c_{n}\cdot{\text{first quartile of}}\left(\left|x_{i}-x_{j}\right|:i<j\right),\end{aligned}$$
where $c_n$ is a constant depending on $n$.
These can be computed in O(n log n) time and O(n) space.
Neither of these requires location estimation, as they are based only on differences between values. They are both more efficient than the MAD under a Gaussian distribution: $S_n$ is 58% efficient, while $Q_n$ is 82% efficient.
For a sample from a normal distribution, $S_n$ is approximately unbiased for the population standard deviation even down to very modest sample sizes (<1% bias for $n = 10$).
For a large sample from a normal distribution, $2.22\,Q_n$ is approximately unbiased for the population standard deviation. For small or moderate samples, the expected value of $Q_n$ under a normal distribution depends markedly on the sample size, so finite-sample correction factors (obtained from a table or from simulations) are used to calibrate the scale of $Q_n$.
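A naive quadratic-time sketch of both estimators, assuming NumPy. It uses plain medians where Rousseeuw and Croux use low/high medians and their O(n log n) algorithms, and it applies only the asymptotic normal-consistency constant 2.2219 for $Q_n$, omitting the finite-sample factors $c_n$:

```python
import numpy as np

def S_n(x):
    # Naive O(n^2) version of Sn = 1.1926 * med_i( med_j |x_i - x_j| ).
    x = np.asarray(x, dtype=float)
    inner = np.array([np.median(np.abs(x - xi)) for xi in x])
    return 1.1926 * np.median(inner)

def Q_n(x):
    # Naive O(n^2) version of Qn: the first quartile of the pairwise
    # distances |x_i - x_j|, i < j, scaled by the asymptotic constant.
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x), k=1)
    return 2.2219 * np.percentile(np.abs(x[i] - x[j]), 25)
```

For large standard normal samples, both functions return values near 1.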
The biweight midvariance
Like $S_n$ and $Q_n$, the biweight midvariance aims to be robust without sacrificing too much efficiency. It is defined as

$$\frac{n\sum_{i=1}^{n}(x_{i}-Q)^{2}\,(1-u_{i}^{2})^{4}\,I(|u_{i}|<1)}{\left(\sum_{i}(1-u_{i}^{2})(1-5u_{i}^{2})\,I(|u_{i}|<1)\right)^{2}},$$
where $I$ is the indicator function, $Q$ is the sample median of the $x_{i}$, and

$$u_{i}={\frac {x_{i}-Q}{9\cdot \operatorname{MAD}}}.$$
Its square root is a robust estimator of scale, since data points are downweighted as their distance from the median increases, with points more than 9 MAD units from the median having no influence at all.
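A direct transcription of this definition into NumPy (a sketch; it assumes the sample MAD is nonzero):

```python
import numpy as np

def biweight_midvariance(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    Q = np.median(x)
    mad = np.median(np.abs(x - Q))
    u = (x - Q) / (9.0 * mad)
    inside = np.abs(u) < 1                  # indicator I(|u_i| < 1)
    num = n * np.sum((x[inside] - Q) ** 2 * (1 - u[inside] ** 2) ** 4)
    den = np.sum((1 - u[inside] ** 2) * (1 - 5 * u[inside] ** 2)) ** 2
    return num / den
```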
Extensions
Mizera & Müller (2004) propose a robust depth-based estimator of location and scale simultaneously, a new measure they name the Student median.
Confidence intervals
A robust confidence interval is a robust modification of the confidence interval: the non-robust calculations are modified so that they are not badly affected by outlying or aberrant observations in a data set.
Example
In a weighing process carried out under practical conditions, it is easy to believe that the operator might occasionally make a mistake in procedure and so report an incorrect mass (thereby making one type of systematic error). Suppose there were 100 objects and the operator weighed them all, one at a time, repeating the whole process ten times, for 1000 weighings in total. Then the operator can calculate a sample standard deviation for each object and look for outliers: any object with an unusually large standard deviation probably has an outlier in its data. These can be removed by various non-parametric techniques. If the operator repeated the process only three times, simply taking the median of the three measurements and using the known σ would give a confidence interval; the 200 extra weighings would then serve only to detect and correct operator error, doing nothing to improve the confidence interval. With more repetitions, one could use a truncated mean, discarding the largest and smallest values and averaging the rest. A bootstrap calculation could then be used to determine a confidence interval narrower than that calculated from σ, and so obtain some benefit from the large amount of extra work.
These procedures are robust against procedural errors which are not modeled by the assumption that the balance has a fixed known standard deviation σ. In practical applications where the occasional operator error can occur, or the balance can malfunction, the assumptions behind simple statistical calculations cannot be taken for granted. Before trusting the results of 100 objects weighed just three times each to have confidence intervals calculated from σ, it is necessary to test for and remove a reasonable number of outliers (testing the assumption that the operator is careful and correcting for the fact that he is not perfect), and to test the assumption that the data really have a normal distribution with standard deviation σ.
Computer simulation
The theoretical analysis of such an experiment is complicated, but it is easy to set up a spreadsheet that draws random numbers from a normal distribution with standard deviation σ to simulate the situation; this can be done in Microsoft Excel using =NORMINV(RAND(),0,σ), and the same technique can be used in other spreadsheet programs such as OpenOffice.org Calc and Gnumeric.
After removing obvious outliers, one could subtract the median from the other two values for each object, and examine the distribution of the 200 resulting numbers. It should be normal with mean near zero and standard deviation a little larger than σ. A simple Monte Carlo spreadsheet calculation would reveal typical values for the standard deviation (around 105 to 115% of σ). Or, one could subtract the mean of each triplet from the values, and examine the distribution of 300 values. The mean is identically zero, but the standard deviation should be somewhat smaller (around 75 to 85% of σ).
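The same Monte Carlo check can be written outside a spreadsheet; a minimal sketch in Python with NumPy, using σ = 1 and 100 simulated triplets:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
w = rng.normal(0.0, sigma, size=(100, 3))   # 100 objects, 3 weighings each

# Residuals about each triplet's median: 200 nonzero values.
med = np.median(w, axis=1, keepdims=True)
r_med = np.sort(w - med, axis=1)[:, [0, 2]].ravel()
print(r_med.std() / sigma)    # typically about 1.05 to 1.15

# Residuals about each triplet's mean: 300 values, mean identically zero.
r_mean = (w - w.mean(axis=1, keepdims=True)).ravel()
print(r_mean.std() / sigma)   # about sqrt(2/3) ≈ 0.82
```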
See also
Heteroscedasticity-consistent standard errors
Interquartile range
Mean absolute deviation