Characterization of probability distributions
In mathematics in general, a characterization theorem says that a particular object – a function, a space, etc. – is the only one that possesses properties specified in the theorem. A characterization of a probability distribution accordingly states that it is the only probability distribution that satisfies specified conditions. More precisely, the model of characterization of probability distributions was described by V. M. Zolotarev as follows. On a probability space we define the space $\mathcal{X} = \{X\}$ of random variables with values in a measurable metric space $(U, d_u)$ and the space $\mathcal{Y} = \{Y\}$ of random variables with values in a measurable metric space $(V, d_v)$. By characterizations of probability distributions we understand general problems of describing some set $\mathcal{C}$ in the space $\mathcal{X}$ by extracting the sets $\mathcal{A} \subseteq \mathcal{X}$ and $\mathcal{B} \subseteq \mathcal{Y}$ which describe the properties of the random variables $X \in \mathcal{A}$ and of their images $Y = \mathbf{F}X \in \mathcal{B}$, obtained by means of a specially chosen mapping $\mathbf{F}\colon \mathcal{X} \to \mathcal{Y}$.
The description of the properties of the random variables $X$ and of their images $Y = \mathbf{F}X$ is equivalent to specifying the set $\mathcal{A} \subseteq \mathcal{X}$ from which $X$ must be taken and the set $\mathcal{B} \subseteq \mathcal{Y}$ into which its image must fall. The set of interest therefore takes the following form:
$$X \in \mathcal{A},\; \mathbf{F}X \in \mathcal{B} \;\Longleftrightarrow\; X \in \mathcal{C}, \quad \text{i.e. } \mathcal{C} = \mathbf{F}^{-1}\mathcal{B},$$
where $\mathbf{F}^{-1}\mathcal{B}$ denotes the complete inverse image of $\mathcal{B}$ in $\mathcal{A}$. This is the general model of the characterization of probability distributions. Some examples of characterization theorems:
The assumption that two linear (or non-linear) statistics are identically distributed (or independent, or have constant regression, and so on) can be used to characterize various populations. For example, according to George Pólya's characterization theorem, if $X_1$ and $X_2$ are independent identically distributed random variables with finite variance, then the statistics $S_1 = X_1$ and $S_2 = \frac{X_1 + X_2}{\sqrt{2}}$ are identically distributed if and only if $X_1$ and $X_2$ have a normal distribution with zero mean. In this case
$$\mathbf{F} = \begin{bmatrix} 1 & 0 \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix},$$
$\mathcal{A}$ is the set of random two-dimensional column vectors with independent identically distributed components, $\mathcal{B}$ is the set of random two-dimensional column vectors with identically distributed components, and $\mathcal{C}$ is the set of random two-dimensional column vectors with independent identically distributed normal components.
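Pólya's criterion lends itself to a quick numerical illustration. Below is a minimal simulation sketch, assuming NumPy and SciPy are available; the sample size, seed, and the two-sample Kolmogorov–Smirnov test are illustrative choices, not part of the theorem:

```python
# Simulation sketch of Pólya's characterization (illustrative only).
# For i.i.d. N(0, 1) samples, S1 = X1 and S2 = (X1 + X2)/sqrt(2) should be
# indistinguishable in distribution; for a non-normal zero-mean law they differ.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n = 100_000

samplers = {
    "normal": lambda size: rng.standard_normal(size),
    "uniform": lambda size: rng.uniform(-1, 1, size),  # non-normal, zero mean
}

for name, sampler in samplers.items():
    x = sampler((3, n))                  # three independent i.i.d. rows
    s1 = x[0]                            # S1 = X1
    s2 = (x[1] + x[2]) / np.sqrt(2)      # S2 = (X1 + X2) / sqrt(2)
    stat, p = ks_2samp(s1, s2)
    print(f"{name:8s}  KS statistic = {stat:.4f}  p-value = {p:.3g}")
```

With these settings the normal case should typically show no detectable difference between $S_1$ and $S_2$, while the uniform case is rejected decisively, matching the "if and only if" in the theorem.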
According to the generalized Pólya characterization theorem (which drops the condition of finite variance), if $X_1, X_2, \dots, X_n$ are non-degenerate independent identically distributed random variables, the statistics $X_1$ and $a_1 X_1 + a_2 X_2 + \dots + a_n X_n$ are identically distributed, and $|a_j| < 1$ for all $j$ with $a_1^2 + a_2^2 + \dots + a_n^2 = 1$, then $X_j$ is a normal random variable for every $j = 1, 2, \dots, n$. In this case
$$\mathbf{F} = \begin{bmatrix} 1 & 0 & \dots & 0 \\ a_1 & a_2 & \dots & a_n \end{bmatrix},$$
$\mathcal{A}$ is the set of random $n$-dimensional column vectors with independent identically distributed components, $\mathcal{B}$ is the set of random two-dimensional column vectors with identically distributed components, and $\mathcal{C}$ is the set of random $n$-dimensional column vectors with independent identically distributed normal components.
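The distributional identity behind the generalized statement can be checked the same way. A sketch, assuming standard normal inputs; the particular coefficient vector below is an arbitrary choice satisfying the two constraints:

```python
# For i.i.d. N(0, 1) variables, a1*X1 + ... + an*Xn is again N(0, 1)
# whenever the squares of the coefficients sum to 1 (illustrative check).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
a = np.array([0.5, 0.5, 0.5, 0.5])       # |a_j| < 1, sum of squares = 1
assert np.isclose(np.sum(a ** 2), 1.0)

n_samples = 100_000
x = rng.standard_normal((a.size, n_samples))
lhs = rng.standard_normal(n_samples)     # independent copy with the law of X1
rhs = a @ x                              # a1*X1 + ... + an*Xn
print(ks_2samp(lhs, rhs))                # large p-value: same distribution
```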
All probability distributions on the half-line $[0, \infty)$ that are memoryless are exponential distributions. "Memoryless" means that if $X$ is a random variable with such a distribution, then for any numbers $0 < y < x$, $\Pr(X > x \mid X > y) = \Pr(X > x - y)$.
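This property is easy to probe by Monte Carlo. A minimal sketch, assuming NumPy; the rate and the points $x$, $y$ are arbitrary illustrative choices:

```python
# Monte Carlo check of memorylessness for the exponential distribution:
# Pr(X > x | X > y) should agree with Pr(X > x - y) for any 0 < y < x.
import numpy as np

rng = np.random.default_rng(2)
samples = rng.exponential(scale=1.0, size=1_000_000)
x, y = 2.5, 1.0                                     # any 0 < y < x

conditional = np.mean(samples[samples > y] > x)     # Pr(X > x | X > y)
shifted = np.mean(samples > x - y)                  # Pr(X > x - y)
print(f"Pr(X > x | X > y) ~ {conditional:.4f}")
print(f"Pr(X > x - y)     ~ {shifted:.4f}")
```

Both estimates land near $e^{-1.5} \approx 0.223$, as the memoryless property predicts for a unit-rate exponential.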
In practice, the conditions of characterization theorems can be verified only with some error $\epsilon$, i.e., only to a certain degree of accuracy. Such a situation is observed, for instance, when a sample of finite size is considered. That is why the following natural question arises. Suppose that the conditions of the characterization theorem are fulfilled not exactly but only approximately. May we assert that the conclusion of the theorem is also fulfilled approximately? Theorems in which problems of this kind are considered are called stability characterizations of probability distributions.
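The finite-sample effect is visible in the Pólya example above: even when the characterization's condition holds exactly, an empirical check certifies it only up to a positive error that shrinks with the sample size. A sketch, using the two-sample Kolmogorov–Smirnov statistic as one illustrative measure of the error $\epsilon$:

```python
# The Pólya condition holds exactly here (normal inputs), yet the observed
# distance between the empirical laws of S1 and S2 is positive at every
# finite sample size and decreases as the sample grows.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
for n in (100, 10_000, 1_000_000):
    x = rng.standard_normal((3, n))
    s1 = x[0]
    s2 = (x[1] + x[2]) / np.sqrt(2)
    eps = ks_2samp(s1, s2).statistic    # empirical KS distance
    print(f"n = {n:>9,d}   observed eps = {eps:.4f}")
```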
See also
Characterization (mathematics)