Generalized inverse Gaussian distribution


    In probability theory and statistics, the generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions with probability density function




    {\displaystyle f(x)={\frac {(a/b)^{p/2}}{2K_{p}({\sqrt {ab}})}}x^{(p-1)}e^{-(ax+b/x)/2},\qquad x>0,}


    where K_p is a modified Bessel function of the second kind, a > 0, b > 0, and p is a real parameter. It is used extensively in geostatistics, statistical linguistics, finance, etc. This distribution was first proposed by Étienne Halphen.
    It was rediscovered and popularised by Ole Barndorff-Nielsen, who called it the generalized inverse Gaussian distribution. Its statistical properties are discussed in Bent Jørgensen's lecture notes.
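
    As a quick numerical check, the density can be evaluated directly from this formula. Below is a minimal Python sketch (the parameter values are arbitrary, chosen only for illustration) that uses SciPy's modified Bessel function of the second kind and verifies that the density integrates to one.

    import numpy as np
    from scipy.special import kv          # modified Bessel function of the second kind, K_p
    from scipy.integrate import quad

    def gig_pdf(x, p, a, b):
        """GIG density: (a/b)^(p/2) / (2 K_p(sqrt(ab))) * x^(p-1) * exp(-(a x + b/x)/2)."""
        norm = (a / b) ** (p / 2) / (2.0 * kv(p, np.sqrt(a * b)))
        return norm * x ** (p - 1) * np.exp(-(a * x + b / x) / 2.0)

    # The density should integrate to 1 for any a > 0, b > 0 and real p.
    total, _ = quad(gig_pdf, 0, np.inf, args=(0.7, 2.0, 3.0))
    print(total)   # ~1.0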


    Properties




    = Alternative parametrization =
    By setting {\displaystyle \theta ={\sqrt {ab}}} and {\displaystyle \eta ={\sqrt {b/a}}}, we can alternatively express the GIG distribution as




    {\displaystyle f(x)={\frac {1}{2\eta K_{p}(\theta )}}\left({\frac {x}{\eta }}\right)^{p-1}e^{-\theta (x/\eta +\eta /x)/2},}


    where {\displaystyle \theta } is the concentration parameter while {\displaystyle \eta } is the scaling parameter.
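
    This (θ, η) form is essentially the parametrization used by scipy.stats.geninvgauss, whose shape argument plays the role of θ and whose scale plays the role of η. A minimal sketch of the mapping from (a, b, p), assuming that convention:

    import numpy as np
    from scipy.stats import geninvgauss

    def gig_scipy(p, a, b):
        """Frozen scipy.stats.geninvgauss matching GIG(p, a, b):
        theta = sqrt(a*b) is the concentration, eta = sqrt(b/a) the scale."""
        theta, eta = np.sqrt(a * b), np.sqrt(b / a)
        return geninvgauss(p, theta, scale=eta)

    dist = gig_scipy(p=0.7, a=2.0, b=3.0)
    print(dist.pdf(1.0), dist.mean(), dist.var())   # summaries under (p, a, b) = (0.7, 2, 3)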


    = Summation =
    Barndorff-Nielsen and Halgreen proved that the GIG distribution is infinitely divisible.


    = Entropy =
    The entropy of the generalized inverse Gaussian distribution is given as








    {\displaystyle {\begin{aligned}H={\frac {1}{2}}\log \left({\frac {b}{a}}\right)&{}+\log \left(2K_{p}\left({\sqrt {ab}}\right)\right)-(p-1){\frac {\left[{\frac {d}{d\nu }}K_{\nu }\left({\sqrt {ab}}\right)\right]_{\nu =p}}{K_{p}\left({\sqrt {ab}}\right)}}\\&{}+{\frac {\sqrt {ab}}{2K_{p}\left({\sqrt {ab}}\right)}}\left(K_{p+1}\left({\sqrt {ab}}\right)+K_{p-1}\left({\sqrt {ab}}\right)\right)\end{aligned}}}


    where {\displaystyle \left[{\frac {d}{d\nu }}K_{\nu }\left({\sqrt {ab}}\right)\right]_{\nu =p}} is the derivative of the modified Bessel function of the second kind with respect to the order {\displaystyle \nu }, evaluated at {\displaystyle \nu =p}.
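
    The order derivative generally has no elementary closed form, but it can be approximated by a central finite difference. Below is a minimal sketch (the step size h and the test parameters are arbitrary) that evaluates the closed-form entropy and cross-checks it against the integral of -f log f:

    import numpy as np
    from scipy.special import kv
    from scipy.integrate import quad

    def gig_entropy(p, a, b, h=1e-5):
        """Closed-form GIG entropy; dK_nu/dnu at nu = p via a central difference."""
        t = np.sqrt(a * b)
        kp = kv(p, t)
        dk_dnu = (kv(p + h, t) - kv(p - h, t)) / (2.0 * h)
        return (0.5 * np.log(b / a) + np.log(2.0 * kp)
                - (p - 1) * dk_dnu / kp
                + t * (kv(p + 1, t) + kv(p - 1, t)) / (2.0 * kp))

    p, a, b = 0.7, 2.0, 3.0
    f = lambda x: ((a / b) ** (p / 2) / (2 * kv(p, np.sqrt(a * b)))
                   * x ** (p - 1) * np.exp(-(a * x + b / x) / 2))
    num, _ = quad(lambda x: -f(x) * np.log(f(x)), 0, np.inf)
    print(gig_entropy(p, a, b), num)   # the two values should agree closely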



    = Characteristic function =
    The characteristic function of a random variable {\displaystyle X\sim GIG(p,a,b)} is given as (for a derivation of the characteristic function, see the supplementary materials of )




    {\displaystyle E(e^{itX})=\left({\frac {a}{a-2it}}\right)^{\frac {p}{2}}{\frac {K_{p}\left({\sqrt {(a-2it)b}}\right)}{K_{p}\left({\sqrt {ab}}\right)}}}


    for {\displaystyle t\in \mathbb {R} }, where {\displaystyle i} denotes the imaginary unit.
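
    Since scipy.special.kv accepts complex arguments, the characteristic function can be evaluated directly and compared with a Monte Carlo average of exp(itX) over samples; a sketch with arbitrary test values of (p, a, b, t):

    import numpy as np
    from scipy.special import kv
    from scipy.stats import geninvgauss

    def gig_cf(t, p, a, b):
        """E[exp(itX)] for X ~ GIG(p, a, b), using the principal branch of the square root."""
        z = a - 2j * t
        return (a / z) ** (p / 2) * kv(p, np.sqrt(z * b)) / kv(p, np.sqrt(a * b))

    p, a, b, t = 0.7, 2.0, 3.0, 1.5
    x = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a)).rvs(size=200_000, random_state=0)
    print(gig_cf(t, p, a, b), np.mean(np.exp(1j * t * x)))   # the two should be close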


    Related distributions




    = Special cases =
    The inverse Gaussian and gamma distributions are special cases of the generalized inverse Gaussian distribution for p = −1/2 and b = 0, respectively. Specifically, an inverse Gaussian distribution of the form




    {\displaystyle f(x;\mu ,\lambda )=\left[{\frac {\lambda }{2\pi x^{3}}}\right]^{1/2}\exp {\left({\frac {-\lambda (x-\mu )^{2}}{2\mu ^{2}x}}\right)}}


    is a GIG with {\displaystyle a=\lambda /\mu ^{2}}, {\displaystyle b=\lambda }, and {\displaystyle p=-1/2}. A gamma distribution of the form




    {\displaystyle g(x;\alpha ,\beta )=\beta ^{\alpha }{\frac {1}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}}


    is a GIG with {\displaystyle a=2\beta }, {\displaystyle b=0}, and {\displaystyle p=\alpha }. Other special cases include the inverse-gamma distribution, for a = 0.
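
    Both reductions can be checked numerically. The sketch below assumes SciPy's conventions (invgauss(mu/lam, scale=lam) is the inverse Gaussian with mean mu and shape lam) and approximates the b = 0 gamma case by taking b very small:

    import numpy as np
    from scipy.stats import geninvgauss, invgauss, gamma

    x = np.linspace(0.1, 5.0, 9)

    # Inverse Gaussian(mu, lam) as GIG with a = lam/mu^2, b = lam, p = -1/2.
    mu, lam = 1.3, 2.1
    a, b, p = lam / mu**2, lam, -0.5
    gig = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a))
    print(np.allclose(gig.pdf(x), invgauss(mu / lam, scale=lam).pdf(x)))   # True

    # Gamma(alpha, rate beta) as the b -> 0 limit with a = 2*beta, p = alpha.
    alpha, beta, b0 = 2.5, 1.7, 1e-9
    gig0 = geninvgauss(alpha, np.sqrt(2 * beta * b0), scale=np.sqrt(b0 / (2 * beta)))
    print(np.allclose(gig0.pdf(x), gamma(alpha, scale=1 / beta).pdf(x), atol=1e-4))   # True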


    = Conjugate prior for Gaussian =
    The GIG distribution is conjugate to the normal distribution when serving as the mixing distribution in a normal variance-mean mixture. Let the prior distribution for some hidden variable, say {\displaystyle z}, be GIG:




    {\displaystyle P(z\mid a,b,p)=\operatorname {GIG} (z\mid a,b,p)}


    and let there be {\displaystyle T} observed data points, {\displaystyle X=x_{1},\ldots ,x_{T}}, with a normal likelihood function conditioned on {\displaystyle z:}





    {\displaystyle P(X\mid z,\alpha ,\beta )=\prod _{i=1}^{T}N(x_{i}\mid \alpha +\beta z,z)}


    where {\displaystyle N(x\mid \mu ,v)} is the normal distribution with mean {\displaystyle \mu } and variance {\displaystyle v}. Then the posterior for {\displaystyle z}, given the data, is also GIG:




    {\displaystyle P(z\mid X,a,b,p,\alpha ,\beta )={\text{GIG}}\left(z\mid a+T\beta ^{2},b+S,p-{\frac {T}{2}}\right)}


    where {\displaystyle \textstyle S=\sum _{i=1}^{T}(x_{i}-\alpha )^{2}}.
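
    This update is easy to implement; a minimal sketch (the helper name gig_posterior and all test values are illustrative, not from the source):

    import numpy as np

    def gig_posterior(x, a, b, p, alpha, beta):
        """Posterior GIG(a', b', p') for z given x_i ~ N(alpha + beta*z, z)
        and prior z ~ GIG(a, b, p), following the update rule above."""
        x = np.asarray(x)
        T = x.size
        S = np.sum((x - alpha) ** 2)
        return a + T * beta ** 2, b + S, p - T / 2

    rng = np.random.default_rng(0)
    z_true = 1.2                                     # hidden mixing variable
    x = rng.normal(0.5 + 0.3 * z_true, np.sqrt(z_true), size=50)
    print(gig_posterior(x, a=2.0, b=3.0, p=0.7, alpha=0.5, beta=0.3))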


    = Sichel distribution =
    The Sichel distribution results when the GIG is used as the mixing distribution for the Poisson parameter {\displaystyle \lambda }.
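
    A Sichel sample can therefore be drawn in two stages: draw λ from the GIG, then draw a Poisson count with that rate. A minimal sketch with arbitrary parameters; the resulting counts are overdispersed relative to a Poisson with the same mean:

    import numpy as np
    from scipy.stats import geninvgauss

    rng = np.random.default_rng(0)
    p, a, b = 0.7, 2.0, 3.0
    lam = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a)).rvs(size=100_000, random_state=rng)
    counts = rng.poisson(lam)                        # Sichel-distributed counts
    print(counts.mean(), counts.var())               # variance exceeds the mean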



    See also


    Inverse Gaussian distribution
    Gamma distribution
