Indecomposable distribution

    In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: Z ≠ X + Y. If it can be so expressed, it is decomposable: Z = X + Y. If, further, it can be expressed as the distribution of the sum of two or more independent identically distributed random variables, then it is divisible: Z = X1 + X2.


    Examples




    = Indecomposable =
    The simplest examples are Bernoulli distributions: if

        X = \begin{cases} 1 & \text{with probability } p, \\ 0 & \text{with probability } 1 - p, \end{cases}
    then the probability distribution of X is indecomposable.
    Proof: Suppose U and V are non-constant independent random variables, so that U assumes at least two values a < b and V assumes at least two values c < d. Then U + V assumes at least the three distinct values a + c, a + d, b + d (b + c may coincide with a + d, for example when both variables take the values 0 and 1). Since a Bernoulli-distributed variable assumes only two values, it cannot be the sum of two non-constant independent random variables.
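    This argument is easy to check by brute force. The sketch below (plain Python; the helper name support_of_sum is ours, purely illustrative) enumerates the support of U + V for two-point summands:

        from itertools import product

        def support_of_sum(support_u, support_v):
            """Values taken by U + V when U, V are independent with the given supports."""
            return sorted({u + v for u, v in product(support_u, support_v)})

        # Two non-constant summands always produce at least three distinct sums,
        # so no two-point (Bernoulli) distribution can arise as U + V.
        print(support_of_sum([0, 1], [0, 1]))  # [0, 1, 2] -- here b + c collides with a + d
        print(support_of_sum([0, 1], [0, 2]))  # [0, 1, 2, 3]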
    Suppose a + b + c = 1, a, b, c ≥ 0, and

        X = \begin{cases} 2 & \text{with probability } a, \\ 1 & \text{with probability } b, \\ 0 & \text{with probability } c. \end{cases}


    This probability distribution is decomposable (as the distribution of the sum of two Bernoulli-distributed random variables) if

        \sqrt{a} + \sqrt{c} \leq 1


    and otherwise indecomposable. To see this, suppose U and V are independent random variables and U + V has this probability distribution. Then we must have

        U = \begin{cases} 1 & \text{with probability } p, \\ 0 & \text{with probability } 1 - p, \end{cases} \qquad \text{and} \qquad V = \begin{cases} 1 & \text{with probability } q, \\ 0 & \text{with probability } 1 - q, \end{cases}


    for some p, q ∈ [0, 1], by reasoning similar to the Bernoulli case (otherwise U + V would assume more than three values). It follows that

        a = pq, \qquad c = (1 - p)(1 - q), \qquad b = 1 - a - c.


    This system of two quadratic equations in two variables p and q has a solution (p, q) ∈ [0, 1]² if and only if

        \sqrt{a} + \sqrt{c} \leq 1.
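    To see where this condition comes from (a short derivation using only the equations above): combining pq = a with (1 − p)(1 − q) = c forces p + q = 1 + a − c, so p and q must be the two roots of

        t^2 - (1 + a - c)\,t + a = 0.

    These roots are real if and only if (1 + a − c)² ≥ 4a, i.e. 1 + a − c ≥ 2√a, i.e. c ≤ (1 − √a)², which is precisely √a + √c ≤ 1. When this holds, both roots lie in [0, 1], since the quadratic is nonnegative at t = 0 (where it equals a) and at t = 1 (where it equals c) and its vertex lies between them.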


    Thus, for example, the discrete uniform distribution on the set {0, 1, 2} (a = b = c = 1/3) is indecomposable, since √(1/3) + √(1/3) ≈ 1.15 > 1, but the binomial distribution for two trials with success probability 1/2, which gives a, b, c the respective values 1/4, 1/2, 1/4, is decomposable.
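    The condition is easy to test numerically. Below is a minimal sketch (plain Python; the function name decompose is ours) that recovers p and q when they exist:

        import math

        def decompose(a, c):
            """Return (p, q) in [0, 1]^2 with pq = a and (1-p)(1-q) = c, or None."""
            s = 1 + a - c                 # p + q
            disc = s * s - 4 * a          # discriminant of t^2 - s*t + a
            if disc < 0:
                return None
            r = math.sqrt(disc)
            return ((s + r) / 2, (s - r) / 2)

        print(decompose(1/3, 1/3))  # None: uniform on {0, 1, 2} is indecomposable
        print(decompose(1/4, 1/4))  # (0.5, 0.5): Binomial(2, 1/2) = Bernoulli + Bernoulli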
    An absolutely continuous indecomposable distribution also exists: it can be shown that the distribution whose density function is

        f(x) = \frac{1}{\sqrt{2\pi}} \, x^2 e^{-x^2/2}


    is indecomposable.
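    As a quick sanity check that f is a valid density (a sketch assuming SciPy is available): f(x) equals x² times the standard normal density, so its integral is the second moment of a standard normal, namely 1.

        import numpy as np
        from scipy.integrate import quad

        f = lambda x: x**2 * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
        total, _ = quad(f, -np.inf, np.inf)
        print(total)  # ~1.0: f integrates to 1, so it is a probability density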


    = Decomposable =
    All infinitely divisible distributions are a fortiori decomposable; in particular, this includes the stable distributions, such as the normal distribution.
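    For instance, for every n a normal random variable splits into n independent identically distributed normal summands, because means and variances of independent normals add:

        N(\mu, \sigma^2) \,\stackrel{d}{=}\, \sum_{k=1}^{n} N\!\left(\frac{\mu}{n}, \frac{\sigma^2}{n}\right).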
    The uniform distribution on the interval [0, 1] is decomposable, since it is the sum of the Bernoulli variable that assumes 0 or 1/2 with equal probabilities and the uniform distribution on [0, 1/2]. Iterating this yields the infinite decomposition:

        \sum_{n=1}^{\infty} \frac{X_n}{2^n},


    where the independent random variables Xn are each equal to 0 or 1 with equal probabilities; Xn is the nth digit of the binary expansion, so the decomposition amounts to an independent Bernoulli trial for each binary digit.
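    A short simulation of the truncated series (a sketch assuming NumPy; 53 terms already exhaust double precision):

        import numpy as np

        rng = np.random.default_rng(0)
        n_terms, n_samples = 53, 100_000
        bits = rng.integers(0, 2, size=(n_samples, n_terms))   # the X_n, each 0 or 1
        u = bits @ (0.5 ** np.arange(1, n_terms + 1))          # sum of X_n / 2^n
        print(u.mean(), u.var())  # ~0.5 and ~0.0833 (= 1/12), matching Uniform[0, 1]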
    A sum of indecomposable random variables is, of course, decomposable into the original summands; nevertheless, it may turn out to be infinitely divisible. Suppose a random variable Y has a geometric distribution

        \Pr(Y = n) = (1 - p)^n \, p


    on {0, 1, 2, ...}.
    For any positive integer k, there are independent identically distributed negative-binomial random variables Y1, ..., Yk (each with shape parameter 1/k) such that Y1 + ... + Yk has this geometric distribution. Therefore, this distribution is infinitely divisible.
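    This can be checked by simulation (a sketch assuming SciPy, whose scipy.stats.nbinom accepts the non-integer shape parameter 1/k; the parameter values here are arbitrary):

        import numpy as np
        from scipy import stats

        p, k, n_samples = 0.3, 5, 200_000
        # Y_1 + ... + Y_k with each Y_j ~ NegativeBinomial(1/k, p)
        y = stats.nbinom.rvs(1 / k, p, size=(k, n_samples),
                             random_state=np.random.default_rng(1)).sum(axis=0)
        # Empirical frequencies should match Pr(Y = n) = (1 - p)^n * p
        for n in range(4):
            print(n, round((y == n).mean(), 4), round((1 - p) ** n * p, 4))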
    On the other hand, let Dn be the nth binary digit of Y, for n ≥ 0. Then the Dn's are independent and

        Y = \sum_{n=0}^{\infty} 2^n D_n,


    and each term in this sum is indecomposable.
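    An empirical look at this digit decomposition (a sketch assuming NumPy; it checks that the first two digits are approximately independent and that the digits reconstruct Y exactly):

        import numpy as np

        rng = np.random.default_rng(2)
        p, n_samples = 0.3, 200_000
        y = rng.geometric(p, size=n_samples) - 1   # Pr(Y = n) = (1 - p)^n p on {0, 1, ...}
        d0, d1 = y & 1, (y >> 1) & 1               # binary digits D_0 and D_1
        # Joint frequency of D_0 = D_1 = 1 vs. product of marginals (independence)
        print((d0 & d1).mean(), d0.mean() * d1.mean())
        # Reconstruction: Y = D_0 + 2 D_1 + 4 * (higher digits)
        assert np.all(y == d0 + 2 * d1 + 4 * (y >> 2))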


    Related concepts


    At the other extreme from indecomposability is infinite divisibility.

    Cramér's theorem shows that while the normal distribution is infinitely divisible, it can only be decomposed into normal distributions.
    Cochran's theorem shows that the terms in a decomposition of a sum of squares of normal random variables into sums of squares of linear combinations of these variables always have independent chi-squared distributions.


    See also


    Cramér's theorem
    Cochran's theorem
    Infinite divisibility (probability)
    Khinchin's theorem on the factorization of distributions


