Probability integral transform
In probability theory, the probability integral transform (also known as universality of the uniform) is the result that data values modeled as random variables from any given continuous distribution can be converted into random variables having a standard uniform distribution. This holds exactly provided that the distribution being used is the true distribution of the random variables; if the distribution is one fitted to the data, the result holds only approximately in large samples.
The result is sometimes modified or extended so that the output of the transformation follows a standard distribution other than the uniform distribution, such as the exponential distribution.
The transform was introduced by Ronald Fisher in the 1932 edition of his book Statistical Methods for Research Workers.
Applications
One use for the probability integral transform in statistical data analysis is to provide the basis for testing whether a set of observations can reasonably be modelled as arising from a specified distribution. Specifically, the probability integral transform is applied to construct an equivalent set of values, which is then tested for consistency with a uniform distribution. Examples of this are P–P plots and Kolmogorov–Smirnov tests.
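For instance, the following minimal sketch (assuming NumPy and SciPy are available; the sample size and hypothesized model are illustrative choices, not from the source) transforms the data with the hypothesized CDF and applies a Kolmogorov–Smirnov test for uniformity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(size=1000)      # observations; hypothesized model: N(0, 1)

# Probability integral transform under the hypothesized distribution.
u = stats.norm.cdf(data)

# If the model is right, u should be Uniform(0, 1); test that directly.
ks = stats.kstest(u, "uniform")
print(ks.statistic, ks.pvalue)    # a large p-value does not reject uniformity
```

This is equivalent to testing the raw data against the hypothesized distribution directly; the transform simply reduces every such goodness-of-fit problem to a single test for uniformity.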
A second use for the transformation is in the theory related to copulas, which are a means of both defining and working with distributions for statistically dependent multivariate data. Here the problem of defining or manipulating a joint probability distribution for a set of random variables is simplified by applying the probability integral transform to each of the components and then working with a joint distribution for which the marginal variables have uniform distributions.
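A hedged sketch of this componentwise step follows (the distributions and names are invented for illustration; the empirical CDF, i.e., a rank transform, stands in for the unknown marginal CDFs):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=5000)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3   # dependent data with non-normal marginals

# Empirical probability integral transform of each component:
u = stats.rankdata(x) / (len(x) + 1)   # approximately Uniform(0, 1)
v = stats.rankdata(y) / (len(y) + 1)

# (u, v) is a sample from the copula: uniform marginals, with the
# dependence structure of the original pair preserved.
print(np.corrcoef(u, v)[0, 1])
```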
A third use is based on applying the inverse of the probability integral transform to convert random variables having a uniform distribution into random variables having a selected distribution; this is known as inverse transform sampling, a sketch of which follows.
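For the unit-mean exponential distribution, $F(x) = 1 - \exp(-x)$ gives $F^{-1}(u) = -\ln(1-u)$, so a minimal sketch (assuming NumPy) is:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(size=100_000)      # standard uniform draws
x = -np.log1p(-u)                  # F^{-1}(u) = -log(1 - u); log1p is accurate near 0

print(x.mean())                    # should be close to the unit mean
```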
Statement
Suppose that a random variable $X$ has a continuous distribution for which the cumulative distribution function (CDF) is $F_X$. Then the random variable

$$Y := F_X(X)$$

has a standard uniform distribution.
Equivalently, if $\mu$ is the uniform measure on $[0,1]$, the distribution of $X$ on $\mathbb{R}$ is the pushforward measure $\mu \circ F_X^{-1}$, where $F_X^{-1}$ is the quantile function; that is, $X$ has the same distribution as $F_X^{-1}(U)$ for $U \sim \mu$.
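A quick numerical illustration of both directions of the statement (a sketch, not a proof; the standard logistic distribution is chosen here because its CDF $F(x) = 1/(1+e^{-x})$ and quantile $F^{-1}(u) = \ln(u/(1-u))$ have simple closed forms):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.logistic(size=100_000)         # X with standard logistic distribution

y = 1.0 / (1.0 + np.exp(-x))           # Y = F_X(X): should be Uniform(0, 1)
print(y.mean(), y.var())               # ~0.5 and ~1/12 (about 0.0833)

u = rng.uniform(size=100_000)
x2 = np.log(u / (1 - u))               # F_X^{-1}(U): pushforward of mu, same law as X
print(x.std(), x2.std())               # both ~ pi/sqrt(3) (about 1.814)
```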
Proof
Given any continuous random variable $X$, define $Y = F_X(X)$. Given $y \in [0,1]$, if $F_X^{-1}(y)$ exists (i.e., if there exists a unique $x$ such that $F_X(x) = y$), then:
$$
\begin{aligned}
F_Y(y) &= \operatorname{P}(Y \leq y) \\
&= \operatorname{P}(F_X(X) \leq y) \\
&= \operatorname{P}(X \leq F_X^{-1}(y)) \\
&= F_X(F_X^{-1}(y)) \\
&= y
\end{aligned}
$$
If $F_X^{-1}(y)$ does not exist, then it can be replaced in this proof by the function $\chi$, where we define $\chi(0) = -\infty$, $\chi(1) = \infty$, and

$$\chi(y) \equiv \inf\{x : F_X(x) \geq y\} \quad \text{for } y \in (0,1),$$

with the same result that $F_Y(y) = y$.
Thus, $F_Y$ is just the CDF of a $\mathrm{Uniform}(0,1)$ random variable, so that $Y$ has a uniform distribution on the interval $[0,1]$.
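The generalized inverse $\chi$ above can also be computed numerically. The following hedged sketch (function and example names are invented for illustration) approximates $\chi(y) = \inf\{x : F_X(x) \geq y\}$ on a grid, handling a CDF with a flat stretch where no unique inverse exists:

```python
import numpy as np

def generalized_inverse(cdf, ys, lo, hi, n=1_000_001):
    """Approximate chi(y) = inf{x : cdf(x) >= y} by scanning a fine grid."""
    xs = np.linspace(lo, hi, n)
    fs = cdf(xs)                                 # non-decreasing, as any CDF is
    idx = np.searchsorted(fs, ys, side="left")   # first index with fs >= y
    return xs[np.clip(idx, 0, n - 1)]

def flat_cdf(x):
    # CDF of a uniform mixture on [0, 1] and [2, 3]; flat on [1, 2].
    x = np.asarray(x, dtype=float)
    f = np.where(x < 1, x / 2, np.where(x < 2, 0.5, 0.5 + (x - 2) / 2))
    return np.clip(f, 0.0, 1.0)

print(generalized_inverse(flat_cdf, np.array([0.25, 0.5, 0.75]), -1.0, 4.0))
# ~[0.5, 1.0, 2.5]; chi(0.5) picks the left end of the flat stretch
```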
Examples
For a first, illustrative example, let $X$ be a random variable with a standard normal distribution $\mathcal{N}(0,1)$. Then its CDF is

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^{2}/2}\,\mathrm{d}t = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right], \quad x \in \mathbb{R},$$

where $\operatorname{erf}$ is the error function. Then the new random variable $Y := \Phi(X)$ is uniformly distributed.
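A short numerical check of this example (a sketch, assuming NumPy and SciPy are available), computing $\Phi$ via the erf formula above:

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(4)
x = rng.standard_normal(100_000)
y = 0.5 * (1 + erf(x / np.sqrt(2)))    # Y = Phi(X)

hist, _ = np.histogram(y, bins=10, range=(0, 1))
print(hist)                            # roughly 10,000 per bin, as uniformity predicts
```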
As a second example, if $X$ has an exponential distribution with unit mean, then its CDF is

$$F(x) = 1 - \exp(-x),$$

and the immediate result of the probability integral transform is that $Y = 1 - \exp(-X)$ has a uniform distribution. Moreover, by the symmetry of the uniform distribution, $Z = \exp(-X)$ also has a uniform distribution.
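Both claims are easy to check numerically (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(size=100_000)      # unit-mean exponential draws
y = 1 - np.exp(-x)
z = np.exp(-x)                         # Z = 1 - Y, uniform by symmetry

print(y.mean(), z.mean())              # both approximately 0.5
```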
See also
Inverse transform sampling