- Source: Completeness (statistics)
In statistics, completeness is a property of a statistic computed on a sample dataset in relation to a parametric model of the dataset. It is opposed to the concept of an ancillary statistic. While an ancillary statistic contains no information about the model parameters, a complete statistic contains only information about the parameters, and no ancillary information. It is closely related to the concept of a sufficient statistic which contains all of the information that the dataset provides about the parameters.
Definition
Consider a random variable X whose probability distribution belongs to a parametric model Pθ parametrized by θ.
Say T is a statistic; that is, the composition of a measurable function with a random sample X1,...,Xn.
The statistic T is said to be complete for the distribution of X if, for every measurable function g,
$$\text{if } \operatorname{E}_{\theta}(g(T)) = 0 \text{ for all } \theta \text{ then } \mathbf{P}_{\theta}(g(T) = 0) = 1 \text{ for all } \theta.$$
The statistic T is said to be boundedly complete for the distribution of X if this implication holds for every measurable function g that is also bounded.
= Example: Bernoulli model =
The Bernoulli model admits a complete statistic. Let X be a random sample of size n such that each Xi has the same Bernoulli distribution with parameter p. Let T be the number of 1s observed in the sample, i.e. $T = \sum_{i=1}^{n} X_i$. T is a statistic of X which has a binomial distribution with parameters (n, p). If the parameter space for p is (0,1), then T is a complete statistic. To see this, note that
$$\operatorname{E}_{p}(g(T)) = \sum_{t=0}^{n} g(t) \binom{n}{t} p^{t}(1-p)^{n-t} = (1-p)^{n} \sum_{t=0}^{n} g(t) \binom{n}{t} \left(\frac{p}{1-p}\right)^{t}.$$
Observe also that neither p nor 1 − p can be 0. Hence $\operatorname{E}_{p}(g(T)) = 0$ if and only if:
$$\sum_{t=0}^{n} g(t) \binom{n}{t} \left(\frac{p}{1-p}\right)^{t} = 0.$$
On denoting p/(1 − p) by r, one gets:
$$\sum_{t=0}^{n} g(t) \binom{n}{t} r^{t} = 0.$$
Observe that the range of r is the positive reals. Also, E(g(T)) is a polynomial in r and can therefore be identically 0 only if all of its coefficients are 0, that is, only if g(t) = 0 for all t.
It is important to notice that the result that all coefficients must be 0 was obtained because of the range of r. Had the parameter space been finite, with at most n elements, it might be possible to solve the linear equations in g(t) obtained by substituting the values of r and obtain nonzero solutions. For example, if n = 1 and the parameter space is {0.5}, a single observation and a single parameter value, T is not complete. Observe that, with the definition:
$$g(t) = 2(t - 0.5),$$
then E(g(T)) = 0 although g(t) is not 0 for either t = 0 or t = 1.
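The argument can be checked numerically. The following sketch (an illustration, not part of the original article) evaluates E_p[g(T)] exactly from the binomial probability mass function, using the counterexample above with n = 1 and g(t) = 2(t − 0.5):

```python
# Illustrative check of the Bernoulli counterexample (assumed setup: n = 1,
# g(t) = 2(t - 0.5)); E_p[g(T)] is evaluated exactly from the binomial pmf.
from math import comb

def expected_g(g, n, p):
    """Exact E_p[g(T)] for T ~ Binomial(n, p)."""
    return sum(g(t) * comb(n, t) * p**t * (1 - p)**(n - t) for t in range(n + 1))

g = lambda t: 2 * (t - 0.5)

# Restricted parameter space {0.5}: the expectation is 0 although g is not,
# so T is not complete for that restricted model.
print(expected_g(g, n=1, p=0.5))                              # 0.0

# Full parameter space (0, 1): E_p[g(T)] = 2p - 1, not identically zero.
print([expected_g(g, n=1, p=p) for p in (0.1, 0.3, 0.7, 0.9)])
```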
= Example: Sum of normals =
This example will show that, in a sample X1, X2 of size 2 from a normal distribution with known variance, the statistic X1 + X2 is complete and sufficient. Suppose (X1, X2) are independent, identically distributed random variables, normally distributed with expectation θ and variance 1.
The sum $s((X_1, X_2)) = X_1 + X_2$ is a complete statistic for θ.
To show this, it is sufficient to demonstrate that there is no non-zero function $g$ such that the expectation of $g(s(X_1, X_2)) = g(X_1 + X_2)$ remains zero regardless of the value of θ.
That fact may be seen as follows. The probability distribution of X1 + X2 is normal with expectation 2θ and variance 2. Its probability density function in $x$ is therefore proportional to
$$\exp\left(-(x - 2\theta)^{2}/4\right).$$
The expectation of g above would therefore be a constant times
$$\int_{-\infty}^{\infty} g(x) \exp\left(-(x - 2\theta)^{2}/4\right)\, dx.$$
A bit of algebra (expanding the exponent as $-(x - 2\theta)^{2}/4 = -x^{2}/4 + x\theta - \theta^{2}$) reduces this to
$$k(\theta) \int_{-\infty}^{\infty} h(x) e^{x\theta}\, dx$$
where k(θ) is nowhere zero and
$$h(x) = g(x) e^{-x^{2}/4}.$$
As a function of θ this is a two-sided Laplace transform of h, and cannot be identically zero unless h(x) is zero almost everywhere. The exponential is not zero, so this can only happen if g(x) is zero almost everywhere.
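The completing-the-square step can also be verified symbolically. A minimal sketch (using sympy, purely as an illustration) checks that the exponent of the density splits into a θ-only part, an x-only part absorbed into h, and the $e^{x\theta}$ kernel:

```python
# Symbolic check (illustration only) that -(x - 2*theta)**2/4 expands to
# -x**2/4 + x*theta - theta**2, i.e. exp(-(x-2θ)²/4) = e^{-θ²} · e^{-x²/4} · e^{xθ}.
import sympy as sp

x, theta = sp.symbols('x theta', real=True)
lhs = sp.expand(-(x - 2*theta)**2 / 4)
rhs = -x**2/4 + x*theta - theta**2
print(sp.simplify(lhs - rhs) == 0)   # True
```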
By contrast, the statistic $(X_1, X_2)$ is sufficient but not complete. It admits a non-zero unbiased estimator of zero, namely $X_1 - X_2$.
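A quick simulation (illustrative, not from the article) makes the last point concrete: X1 − X2 has mean 0 for every θ, yet it is essentially never equal to 0, so the completeness condition P_θ(g(T) = 0) = 1 fails for (X1, X2).

```python
# Monte Carlo illustration: X1 - X2 is an unbiased estimator of zero that is
# not almost surely zero, so the sufficient statistic (X1, X2) is not complete.
import numpy as np

rng = np.random.default_rng(0)
for theta in (-3.0, 0.0, 2.5):
    x1 = rng.normal(theta, 1.0, size=1_000_000)
    x2 = rng.normal(theta, 1.0, size=1_000_000)
    d = x1 - x2
    print(f"theta={theta}: mean={d.mean():+.4f}, P(d == 0)={np.mean(d == 0):.4f}")
    # mean ≈ 0 for every theta, while P(d == 0) is 0 rather than 1
```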
= Example: Location of a uniform distribution =
Suppose $X \sim \operatorname{Uniform}(\theta - 1, \theta + 1)$.
Then $\operatorname{E}(\sin(\pi X)) = 0$ regardless of the value of $\theta$. Thus, taking $g$ to be the identity function, $\sin(\pi X)$ is not complete.
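A short numerical check (an illustration under the stated model, not part of the article) confirms that the expectation vanishes for arbitrary θ:

```python
# Numerical check that E[sin(pi*X)] = 0 when X ~ Uniform(theta-1, theta+1):
# integrate sin(pi*x) against the density 1/2 on (theta-1, theta+1).
import numpy as np
from scipy.integrate import quad

for theta in (-1.7, 0.0, 0.25, 3.0):
    val, err = quad(lambda x: np.sin(np.pi * x) / 2.0, theta - 1, theta + 1)
    print(f"theta={theta}: E[sin(pi X)] ≈ {val:.2e}")   # all ≈ 0
```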
Relation to sufficient statistics
For some parametric families, a complete sufficient statistic does not exist (for example, see Galili and Meilijson 2016).
For instance, if a sample of size n > 2 is taken from a N(θ, θ²) distribution, then
$$\left(\sum_{i=1}^{n} X_{i}, \sum_{i=1}^{n} X_{i}^{2}\right)$$
is a minimal sufficient statistic and is a function of any other minimal sufficient statistic, but
$$2\left(\sum_{i=1}^{n} X_{i}\right)^{2} - (n+1)\sum_{i=1}^{n} X_{i}^{2}$$
has an expectation of 0 for all θ, so there cannot be a complete statistic.
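The zero-expectation claim can be checked by simulation. The sketch below (illustrative only; the sample size n = 5 and the θ values are arbitrary choices) estimates the expectation of the displayed statistic by Monte Carlo:

```python
# Monte Carlo check that 2*(sum X_i)^2 - (n+1)*sum X_i^2 has expectation 0
# when X_1, ..., X_n are i.i.d. N(theta, theta^2).
import numpy as np

rng = np.random.default_rng(1)
n = 5
for theta in (0.5, 1.0, 3.0):
    x = rng.normal(theta, abs(theta), size=(1_000_000, n))
    stat = 2 * x.sum(axis=1)**2 - (n + 1) * (x**2).sum(axis=1)
    m = stat.mean()
    se = stat.std(ddof=1) / np.sqrt(stat.size)
    print(f"theta={theta}: mean={m:+.3f} (MC standard error {se:.3f})")
    # the estimated mean is within Monte Carlo error of 0 for every theta
```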
If there is a minimal sufficient statistic then any complete sufficient statistic is also minimal sufficient. But there are pathological cases where a minimal sufficient statistic does not exist even if a complete statistic does.
Importance of completeness
The notion of completeness has many applications in statistics, particularly in the following two theorems of mathematical statistics.
= Lehmann–Scheffé theorem =
Completeness occurs in the Lehmann–Scheffé theorem, which states that if a statistic is unbiased, complete and sufficient for some parameter θ, then it is the best mean-unbiased estimator for θ. In other words, this statistic has a smaller expected loss for any convex loss function; in many practical applications with the squared loss function, it has a smaller mean squared error than any other estimator with the same expected value.
Examples exist in which the minimal sufficient statistic is not complete; in such cases several alternative statistics are available for unbiased estimation of θ, some of them with lower variance than others.
See also minimum-variance unbiased estimator.
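As an illustration (not from the article), in the Bernoulli model above T = ΣXi is complete and sufficient, so by the Lehmann–Scheffé theorem the sample mean T/n is the minimum-variance unbiased estimator of p. The sketch below compares it by simulation with another unbiased estimator, the single observation X1 (the parameter values are arbitrary choices):

```python
# Simulation comparing two unbiased estimators of p in the Bernoulli model:
# the UMVUE T/n (a function of the complete sufficient statistic T) versus X_1.
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 20, 0.3, 200_000
x = rng.binomial(1, p, size=(reps, n))

umvue = x.mean(axis=1)   # T/n, unbiased, variance p(1-p)/n
naive = x[:, 0]          # X_1, unbiased, variance p(1-p)

print(f"means:     {umvue.mean():.4f}  {naive.mean():.4f}")   # both ≈ 0.30
print(f"variances: {umvue.var():.4f}  {naive.var():.4f}")     # ≈ 0.0105 vs ≈ 0.21
```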
= Basu's theorem =
Bounded completeness occurs in Basu's theorem, which states that a statistic that is both boundedly complete and sufficient is independent of any ancillary statistic.
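A standard illustration of Basu's theorem (a sketch, not from the article): for an i.i.d. N(θ, 1) sample, the sample mean is a boundedly complete sufficient statistic and the sample variance is ancillary, so the two are independent. The simulation below shows that their empirical correlation is near zero (θ = 2 and n = 10 are arbitrary choices):

```python
# Simulation: sample mean (complete sufficient for theta) and sample variance
# (ancillary when the variance is known to be 1) are independent; their
# empirical correlation over many replications is close to 0.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.0, size=(500_000, 10))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)
print(round(float(np.corrcoef(xbar, s2)[0, 1]), 4))   # ≈ 0
```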
= Bahadur's theorem =
Bounded completeness also occurs in Bahadur's theorem. In the case where there exists at least one minimal sufficient statistic, a statistic which is sufficient and boundedly complete is necessarily minimal sufficient.
Another form of Bahadur's theorem states that any sufficient and boundedly complete statistic over a finite-dimensional coordinate space is also minimal sufficient.
References
Basu, D. (1988). J. K. Ghosh (ed.). Statistical information and likelihood : A collection of critical essays by Dr. D. Basu. Lecture Notes in Statistics. Vol. 45. Springer. ISBN 978-0-387-96751-6. MR 0953081.
Bickel, Peter J.; Doksum, Kjell A. (2001). Mathematical statistics, Volume 1: Basic and selected topics (Second (updated printing 2007) of the Holden-Day 1976 ed.). Pearson Prentice–Hall. ISBN 978-0-13-850363-5. MR 0443141.
Lehmann, E. L.; Romano, Joseph P. (2005). Testing statistical hypotheses. Springer Texts in Statistics (Third ed.). New York: Springer. pp. xiv+784. ISBN 978-0-387-98864-1. MR 2135927. Archived from the original on 2013-02-02.
Lehmann, E.L.; Scheffé, H. (1950). "Completeness, similar regions, and unbiased estimation. I." Sankhyā: the Indian Journal of Statistics. 10 (4): 305–340. doi:10.1007/978-1-4614-1412-4_23. JSTOR 25048038. MR 0039201.
Lehmann, E.L.; Scheffé, H. (1955). "Completeness, similar regions, and unbiased estimation. II". Sankhyā: The Indian Journal of Statistics. 15 (3): 219–236. doi:10.1007/978-1-4614-1412-4_24. JSTOR 25048243. MR 0072410.