Generalized logistic distribution
The term generalized logistic distribution is used as the name for several different families of probability distributions. For example, Johnson et al. list four forms, which are given below.
Type I has also been called the skew-logistic distribution. Type IV subsumes the other types and is obtained when applying the logit transform to beta random variates. Following the same convention as for the log-normal distribution, type IV may be referred to as the logistic-beta distribution, with reference to the standard logistic function, which is the inverse of the logit transform.
For other families of distributions that have also been called generalized logistic distributions, see the shifted log-logistic distribution, which is a generalization of the log-logistic distribution; and the metalog ("meta-logistic") distribution, which is highly shape-and-bounds flexible and can be fit to data with linear least squares.
Definitions
The following definitions are for standardized versions of the families, which can be expanded to the full form as a location-scale family. Each is defined using either the cumulative distribution function (F) or the probability density function (f), and is supported on (−∞, ∞).
= Type I =

$$F(x;\alpha) = \frac{1}{(1+e^{-x})^{\alpha}} \equiv (1+e^{-x})^{-\alpha}, \quad \alpha > 0.$$
The corresponding probability density function is:
$$f(x;\alpha) = \frac{\alpha e^{-x}}{\left(1+e^{-x}\right)^{\alpha+1}}, \quad \alpha > 0.$$
This type has also been called the "skew-logistic" distribution.
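Because the Type I CDF inverts in closed form, sampling is straightforward. The following is a minimal sketch (ours, not from the source; the helper name sample_type1 is hypothetical): solving u = (1 + e^{−x})^{−α} for x gives x = −ln(u^{−1/α} − 1).

```python
# Sketch: inverse-CDF sampling for the Type I distribution.
# Solving u = (1 + e^{-x})^{-alpha} for x gives x = -ln(u^{-1/alpha} - 1).
import numpy as np

def sample_type1(alpha, size, seed=None):
    """Draw standard Type I variates by inverting the CDF."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return -np.log(u ** (-1.0 / alpha) - 1.0)

draws = sample_type1(alpha=2.5, size=100_000, seed=0)
# The empirical CDF at 0 should be close to F(0; 2.5) = 2^{-2.5} ≈ 0.177.
print(np.mean(draws <= 0.0), 2.0 ** -2.5)
```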
= Type II =

$$F(x;\alpha) = 1 - \frac{e^{-\alpha x}}{(1+e^{-x})^{\alpha}}, \quad \alpha > 0.$$
The corresponding probability density function is:
$$f(x;\alpha) = \frac{\alpha e^{-\alpha x}}{(1+e^{-x})^{\alpha+1}}, \quad \alpha > 0.$$
= Type III =

$$f(x;\alpha) = \frac{1}{B(\alpha,\alpha)}\,\frac{e^{-\alpha x}}{(1+e^{-x})^{2\alpha}}, \quad \alpha > 0.$$
Here B is the beta function. The moment generating function for this type is
$$M(t) = \frac{\Gamma(\alpha-t)\,\Gamma(\alpha+t)}{(\Gamma(\alpha))^{2}}, \quad -\alpha < t < \alpha.$$
The corresponding cumulative distribution function is:
$$F(x;\alpha) = \frac{\left(e^{x}+1\right)\Gamma(\alpha)\,e^{-\alpha x}\left(e^{-x}+1\right)^{-2\alpha}\;{}_{2}\tilde{F}_{1}\left(1,\,1-\alpha;\,\alpha+1;\,-e^{x}\right)}{B(\alpha,\alpha)}, \quad \alpha > 0,$$

where ${}_{2}\tilde{F}_{1}$ denotes the regularized Gauss hypergeometric function.
= Type IV =

$$\begin{aligned} f(x;\alpha,\beta) &= \frac{1}{B(\alpha,\beta)}\,\frac{e^{-\beta x}}{(1+e^{-x})^{\alpha+\beta}}, \quad \alpha,\beta > 0 \\ &= \frac{\sigma(x)^{\alpha}\,\sigma(-x)^{\beta}}{B(\alpha,\beta)}. \end{aligned}$$
where B is the beta function and σ(x) = 1/(1+e^{-x}) is the standard logistic function. The moment generating function for this type is
$$M(t) = \frac{\Gamma(\beta-t)\,\Gamma(\alpha+t)}{\Gamma(\alpha)\,\Gamma(\beta)}, \quad -\alpha < t < \beta.$$
This type is also called the "exponential generalized beta of the second type".
The corresponding cumulative distribution function is:
$$F(x;\alpha,\beta) = \frac{\left(e^{x}+1\right)\Gamma(\alpha)\,e^{-\beta x}\left(e^{-x}+1\right)^{-\alpha-\beta}\;{}_{2}\tilde{F}_{1}\left(1,\,1-\beta;\,\alpha+1;\,-e^{x}\right)}{B(\alpha,\beta)}, \quad \alpha,\beta > 0.$$
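The hypergeometric form above can be awkward numerically. Under the logit/beta relationship given later in this article, the Type IV CDF also equals the regularized incomplete beta function evaluated at σ(x). The sketch below is our own check, not from the source; it compares that form against direct quadrature of the pdf using SciPy.

```python
# Sketch: the Type IV CDF as the regularized incomplete beta I_{sigma(x)}(a, b),
# checked against numerical integration of the pdf.
import numpy as np
from scipy.integrate import quad
from scipy.special import betainc, betaln, expit, log_expit

def type4_pdf(x, a, b):
    # f(x) = sigma(x)^a * sigma(-x)^b / B(a, b), computed in log space.
    return np.exp(a * log_expit(x) + b * log_expit(-x) - betaln(a, b))

a, b, x = 1.7, 0.6, 0.8
cdf_beta = betainc(a, b, expit(x))                      # I_{sigma(x)}(a, b)
cdf_quad, _ = quad(type4_pdf, -np.inf, x, args=(a, b))  # direct integration
print(cdf_beta, cdf_quad)  # the two values should agree to quadrature accuracy
```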
= Relationship between types =

Type IV is the most general form of the distribution. The Type III distribution can be obtained from Type IV by fixing β = α. The Type II distribution can be obtained from Type IV by fixing α = 1 (and renaming β to α). The Type I distribution can be obtained from Type IV by fixing β = 1. Fixing α = β = 1 gives the standard logistic distribution.
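As a quick numerical confirmation of these reductions (a sketch of ours, not from the source), the Type IV density can be compared pointwise against the Type I, Type II and standard logistic densities:

```python
# Sketch: the Type IV pdf reduces to Types I and II and the standard logistic
# for the parameter settings listed above.
import numpy as np
from scipy.special import betaln, log_expit

def type4_pdf(x, a, b):
    return np.exp(a * log_expit(x) + b * log_expit(-x) - betaln(a, b))

x = np.linspace(-4.0, 4.0, 9)
a = 2.3
type1 = a * np.exp(-x) / (1 + np.exp(-x)) ** (a + 1)        # Type I pdf
type2 = a * np.exp(-a * x) / (1 + np.exp(-x)) ** (a + 1)    # Type II pdf
logistic = np.exp(-x) / (1 + np.exp(-x)) ** 2               # standard logistic pdf

print(np.allclose(type4_pdf(x, a, 1.0), type1))       # beta = 1  -> Type I
print(np.allclose(type4_pdf(x, 1.0, a), type2))       # alpha = 1 -> Type II
print(np.allclose(type4_pdf(x, 1.0, 1.0), logistic))  # alpha = beta = 1
```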
Type IV (logistic-beta) properties
The Type IV generalized logistic, or logistic-beta distribution, with support x ∈ ℝ and shape parameters α, β > 0, has (as shown above) the probability density function (pdf):
$$f(x;\alpha,\beta) = \frac{1}{B(\alpha,\beta)}\,\frac{e^{-\beta x}}{(1+e^{-x})^{\alpha+\beta}} = \frac{\sigma(x)^{\alpha}\,\sigma(-x)^{\beta}}{B(\alpha,\beta)},$$
where σ(x) = 1/(1+e^{-x}) is the standard logistic function. The probability density functions for three different sets of shape parameters are shown in the plot, where the distributions have been scaled and shifted to give zero mean and unit variance, to facilitate comparison of the shapes.
In what follows, the notation B_σ(α, β) is used to denote the Type IV distribution.
= Relationship with the gamma distribution =

This distribution can be obtained in terms of the gamma distribution as follows. Let y ∼ Gamma(α, γ) and, independently, z ∼ Gamma(β, γ), and let x = ln y − ln z. Then x ∼ B_σ(α, β).
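This construction gives an immediate sampler. The sketch below is our illustration (the helper name sample_type4 is hypothetical): it draws gamma variates with unit scale, since the common scale γ cancels in the log-ratio, and checks the first two moments against the digamma/trigamma formulas given in the sections below.

```python
# Sketch: sample B_sigma(alpha, beta) as the log-ratio of independent gammas.
import numpy as np
from scipy.special import digamma, polygamma

def sample_type4(alpha, beta, size, seed=None):
    rng = np.random.default_rng(seed)
    y = rng.gamma(alpha, size=size)  # unit scale; the common scale cancels
    z = rng.gamma(beta, size=size)
    return np.log(y) - np.log(z)

x = sample_type4(2.0, 3.5, 500_000, seed=1)
print(x.mean(), digamma(2.0) - digamma(3.5))            # mean check
print(x.var(), polygamma(1, 2.0) + polygamma(1, 3.5))   # variance check
```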
= Symmetry =

If x ∼ B_σ(α, β), then −x ∼ B_σ(β, α).
= Mean and variance =

By using the logarithmic expectations of the gamma distribution, the mean and variance can be derived as:

$$\begin{aligned} \text{E}[x] &= \psi(\alpha) - \psi(\beta) \\ \text{var}[x] &= \psi'(\alpha) + \psi'(\beta) \end{aligned}$$
where ψ is the digamma function, while ψ′ = ψ^(1) is its first derivative, also known as the trigamma function, or the first polygamma function. Since ψ is strictly increasing, the sign of the mean is the same as the sign of α − β. Since ψ′ is strictly decreasing, the shape parameters can also be interpreted as concentration parameters. Indeed, as shown below, the left and right tails respectively become thinner as α or β are increased. The two terms of the variance represent the contributions to the variance of the left and right parts of the distribution.
= Cumulants and skewness =

The cumulant generating function is K(t) = ln M(t), where the moment generating function M(t) is given above. The cumulants, κ_n, are the n-th derivatives of K(t), evaluated at t = 0:
$$\kappa_{n} = K^{(n)}(0) = \psi^{(n-1)}(\alpha) + (-1)^{n}\,\psi^{(n-1)}(\beta)$$
where ψ^(0) = ψ and ψ^(n−1) are the digamma and polygamma functions. In agreement with the derivation above, the first cumulant, κ_1, is the mean and the second, κ_2, is the variance.
The third cumulant, κ_3, is the third central moment E[(x − E[x])^3], which when scaled by the third power of the standard deviation gives the skewness:
$$\text{skew}[x] = \frac{\psi^{(2)}(\alpha) - \psi^{(2)}(\beta)}{\text{var}[x]^{3/2}}$$
The sign (and therefore the handedness) of the skewness is the same as the sign of α − β.
= Mode =

The mode (pdf maximum) can be derived by finding x where the log-pdf derivative is zero:

$$\frac{d}{dx}\ln f(x;\alpha,\beta) = \alpha\,\sigma(-x) - \beta\,\sigma(x) = 0$$
This simplifies to α/β = e^x, so that:

$$\text{mode}[x] = \ln\frac{\alpha}{\beta}$$
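The closed-form statistics derived in the last three subsections are easy to collect in code. A small sketch of ours, not from the source (the helper name type4_summary is hypothetical):

```python
# Sketch: closed-form mean, variance, skewness and mode of B_sigma(alpha, beta).
import numpy as np
from scipy.special import digamma, polygamma

def type4_summary(a, b):
    mean = digamma(a) - digamma(b)
    var = polygamma(1, a) + polygamma(1, b)       # trigamma terms
    skew = (polygamma(2, a) - polygamma(2, b)) / var ** 1.5
    mode = np.log(a / b)
    return mean, var, skew, mode

print(type4_summary(2.0, 2.0))  # symmetric case: mean, skewness, mode all zero
print(type4_summary(4.0, 1.0))  # alpha > beta: positive mean, skewness and mode
```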
= Tail behaviour =

In each of the left and right tails, one of the sigmoids in the pdf saturates to one, so that the tail is formed by the other sigmoid. For large negative x, the left tail of the pdf is proportional to σ(x)^α ≈ e^{αx}, while the right tail (large positive x) is proportional to σ(−x)^β ≈ e^{−βx}. This means the tails are independently controlled by α and β. Although the type IV tails are heavier than those of the normal distribution (e^{−x²/(2v)}, for variance v), the type IV means and variances remain finite for all α, β > 0. This is in contrast with the Cauchy distribution, for which the mean and variance do not exist. In the log-pdf plots shown here, the type IV tails are linear, the normal distribution tails are quadratic and the Cauchy tails are logarithmic.
= Exponential family properties =

B_σ(α, β) forms an exponential family with natural parameters α and β and sufficient statistics log σ(x) and log σ(−x). The expected values of the sufficient statistics can be found by differentiation of the log-normalizer:

$$\begin{aligned} E[\log\sigma(x)] &= \frac{\partial \log B(\alpha,\beta)}{\partial \alpha} = \psi(\alpha) - \psi(\alpha+\beta) \\ E[\log\sigma(-x)] &= \frac{\partial \log B(\alpha,\beta)}{\partial \beta} = \psi(\beta) - \psi(\alpha+\beta) \end{aligned}$$
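These identities are easy to check by Monte Carlo, reusing the gamma-ratio construction from above (a sketch of ours, not from the source):

```python
# Sketch: Monte Carlo check of the log-normalizer derivatives.
import numpy as np
from scipy.special import digamma, log_expit

rng = np.random.default_rng(2)
a, b, n = 1.5, 2.5, 500_000
x = np.log(rng.gamma(a, size=n)) - np.log(rng.gamma(b, size=n))

print(log_expit(x).mean(), digamma(a) - digamma(a + b))    # E[log sigma(x)]
print(log_expit(-x).mean(), digamma(b) - digamma(a + b))   # E[log sigma(-x)]
```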
Given a data set x_1, …, x_n assumed to have been generated IID from B_σ(α, β), the maximum-likelihood parameter estimate is:
$$\begin{aligned} \hat{\alpha}, \hat{\beta} = \arg\max_{\alpha,\beta} &\; \frac{1}{n}\sum_{i=1}^{n} \log f(x_{i};\alpha,\beta) \\ = \arg\max_{\alpha,\beta} &\; \alpha{\Bigl(}\frac{1}{n}\sum_{i}\log\sigma(x_{i}){\Bigr)} + \beta{\Bigl(}\frac{1}{n}\sum_{i}\log\sigma(-x_{i}){\Bigr)} - \log B(\alpha,\beta) \\ = \arg\max_{\alpha,\beta} &\; \alpha\,\overline{\log\sigma(x)} + \beta\,\overline{\log\sigma(-x)} - \log B(\alpha,\beta) \end{aligned}$$
where the overlines denote the averages of the sufficient statistics. The maximum-likelihood estimate depends on the data only via these average statistics. Indeed, at the maximum-likelihood estimate the expected values and averages agree:
$$\begin{aligned} \psi(\hat{\alpha}) - \psi(\hat{\alpha}+\hat{\beta}) &= \overline{\log\sigma(x)} \\ \psi(\hat{\beta}) - \psi(\hat{\alpha}+\hat{\beta}) &= \overline{\log\sigma(-x)} \end{aligned}$$
which is also where the partial derivatives of the above maximand vanish.
= Relationships with other distributions =

Relationships with other distributions include:

- The log-ratio of gamma variates is of type IV, as detailed above.
- If y ∼ BetaPrime(α, β), then x = ln y has a type IV distribution, with parameters α and β. See beta prime distribution.
- If z ∼ Gamma(β, 1) and y ∣ z ∼ Gamma(α, z), where z is used as the rate parameter of the second gamma distribution, then y has a compound gamma distribution, which is the same as BetaPrime(α, β), so that x = ln y has a type IV distribution.
- If p ∼ Beta(α, β), then x = logit(p) has a type IV distribution, with parameters α and β. See beta distribution. The logit function, logit(p) = log(p/(1−p)), is the inverse of the logistic function. This relationship explains the name logistic-beta for this distribution: if the logistic function is applied to logistic-beta variates, the transformed distribution is beta.
= Large shape parameters =

For large values of the shape parameters, α, β ≫ 1, the distribution becomes more Gaussian, with:

$$\begin{aligned} E[x] &\approx \ln\frac{\alpha}{\beta} \\ \text{var}[x] &\approx \frac{\alpha+\beta}{\alpha\beta} \end{aligned}$$

This is demonstrated in the pdf and log-pdf plots here.
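A quick numerical comparison (our sketch, not from the source) shows how the exact digamma/trigamma moments approach these approximations as the shape parameters grow:

```python
# Sketch: exact moments versus the large-shape approximations.
import numpy as np
from scipy.special import digamma, polygamma

for a, b in [(2.0, 3.0), (20.0, 30.0), (200.0, 300.0)]:
    exact_mean = digamma(a) - digamma(b)
    exact_var = polygamma(1, a) + polygamma(1, b)
    print(f"a={a}, b={b}: mean {exact_mean:.4f} ~ {np.log(a / b):.4f}, "
          f"var {exact_var:.4f} ~ {(a + b) / (a * b):.4f}")
```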
= Random variate generation =

Since random sampling from the gamma and beta distributions is readily available on many software platforms, the above relationships with those distributions can be used to generate variates from the type IV distribution.
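For example, the beta route applies the logit transform to beta draws. The sketch below (ours, not from the source) also runs a one-sample Kolmogorov–Smirnov test against the incomplete-beta form of the Type IV CDF used earlier:

```python
# Sketch: generate Type IV variates as logit-transformed beta draws and test
# the fit against the CDF F(t) = I_{sigma(t)}(a, b).
import numpy as np
from scipy.special import betainc, expit, logit
from scipy.stats import kstest

rng = np.random.default_rng(3)
a, b = 2.0, 0.9
x = logit(rng.beta(a, b, size=100_000))

print(kstest(x, lambda t: betainc(a, b, expit(t))))  # large p-value expected
```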
Generalization with location and scale parameters
A flexible, four-parameter family can be obtained by adding location and scale parameters. One way to do this is: if x ∼ B_σ(α, β), let y = kx + δ, where k > 0 is the scale parameter and δ ∈ ℝ is the location parameter. The four-parameter family obtained in this way has the desired additional flexibility, but the new parameters may be hard to interpret because δ ≠ E[y] and k² ≠ var[y]. Moreover, maximum-likelihood estimation with this parametrization is hard. These problems can be addressed as follows.
Recall that the mean and variance of x are:

$$\tilde{\mu} = \psi(\alpha) - \psi(\beta), \qquad \tilde{s}^{2} = \psi'(\alpha) + \psi'(\beta)$$
Now expand the family with location parameter μ ∈ ℝ and scale parameter s > 0, via the transformation:

$$y = \mu + \frac{s}{\tilde{s}}(x - \tilde{\mu}) \iff x = \tilde{\mu} + \frac{\tilde{s}}{s}(y - \mu)$$
so that μ = E[y] and s² = var[y] are now interpretable. It may be noted that allowing s to be either positive or negative does not generalize this family, because of the above-noted symmetry property. We adopt the notation y ∼ B̄_σ(α, β, μ, s²) for this family.
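A sketch of this standardization (ours, not from the source; the helper name sample_type4_ls is hypothetical), which samples from B̄_σ(α, β, μ, s²) by transforming gamma-ratio draws:

```python
# Sketch: sample the four-parameter family by standardizing gamma-ratio draws
# to the requested mean mu and variance s^2.
import numpy as np
from scipy.special import digamma, polygamma

def sample_type4_ls(a, b, mu, s2, size, seed=None):
    rng = np.random.default_rng(seed)
    x = np.log(rng.gamma(a, size=size)) - np.log(rng.gamma(b, size=size))
    mu_t = digamma(a) - digamma(b)                    # mean of standard family
    s_t = np.sqrt(polygamma(1, a) + polygamma(1, b))  # its standard deviation
    return mu + np.sqrt(s2) / s_t * (x - mu_t)

y = sample_type4_ls(2.0, 5.0, mu=1.0, s2=4.0, size=200_000, seed=4)
print(y.mean(), y.var())  # should be close to 1.0 and 4.0
```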
If the pdf for x ∼ B_σ(α, β) is f(x; α, β), then the pdf for y ∼ B̄_σ(α, β, μ, s²) is:
$$\bar{f}(y;\alpha,\beta,\mu,s^{2}) = \frac{\tilde{s}}{s}\,f(x;\alpha,\beta)$$
where it is understood that x is computed as detailed above, as a function of y, α, β, μ and s. The pdf and log-pdf plots above, where the captions contain (means=0, variances=1), are for B̄_σ(α, β, 0, 1).
Maximum likelihood parameter estimation
In this section, maximum-likelihood estimation of the distribution parameters, given a dataset x_1, …, x_n, is discussed in turn for the families B_σ(α, β) and B̄_σ(α, β, μ, s²).
= Maximum likelihood for standard Type IV =

As noted above, B_σ(α, β) is an exponential family with natural parameters α and β, the maximum-likelihood estimates of which depend only on averaged sufficient statistics:
$$\overline{\log\sigma(x)} = \frac{1}{n}\sum_{i}\log\sigma(x_{i}) \qquad\text{and}\qquad \overline{\log\sigma(-x)} = \frac{1}{n}\sum_{i}\log\sigma(-x_{i})$$
Once these statistics have been accumulated, the maximum-likelihood estimate is given by:
$$\hat{\alpha}, \hat{\beta} = \arg\max_{\alpha,\beta>0}\; \alpha\,\overline{\log\sigma(x)} + \beta\,\overline{\log\sigma(-x)} - \log B(\alpha,\beta)$$
By using the parametrization θ_1 = log α and θ_2 = log β, an unconstrained numerical optimization algorithm like BFGS can be used. Optimization iterations are fast, because they are independent of the size of the data set.
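A minimal sketch of this fit (ours, not from the source; fit_type4 is a hypothetical helper name), using SciPy's BFGS on the log-parameters:

```python
# Sketch: two-parameter maximum likelihood via BFGS over (log alpha, log beta).
# The objective depends on the data only through two averaged statistics.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, log_expit

def fit_type4(x):
    t1 = np.mean(log_expit(x))    # averaged sufficient statistics
    t2 = np.mean(log_expit(-x))

    def neg_avg_loglik(theta):
        a, b = np.exp(theta)      # theta = (log alpha, log beta)
        return -(a * t1 + b * t2 - betaln(a, b))

    res = minimize(neg_avg_loglik, x0=np.zeros(2), method="BFGS")
    return np.exp(res.x)

rng = np.random.default_rng(5)
x = np.log(rng.gamma(2.0, size=50_000)) - np.log(rng.gamma(3.0, size=50_000))
print(fit_type4(x))  # should be close to (2.0, 3.0)
```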
An alternative is to use an EM algorithm based on the composition: x − log(γδ) ∼ B_σ(α, β) if z ∼ Gamma(β, γ) and e^x ∣ z ∼ Gamma(α, z/δ).
Because of the self-conjugacy of the gamma distribution, the posterior expectations ⟨z⟩ and ⟨log z⟩ under P(z ∣ x), which are required for the E-step, can be computed in closed form. The M-step parameter update can be solved analogously to maximum likelihood for the gamma distribution.
= Maximum likelihood for the four-parameter family =

The maximum-likelihood problem for B̄_σ(α, β, μ, s²), having pdf f̄, is:
$$\hat{\alpha}, \hat{\beta}, \hat{\mu}, \hat{s} = \arg\max_{\alpha,\beta,\mu,s}\; \frac{1}{n}\sum_{i} \log \bar{f}(x_{i};\alpha,\beta,\mu,s^{2})$$
This is no longer an exponential family, so each optimization iteration has to traverse the whole data set. Moreover, the computation of the partial derivatives (as required, for example, by BFGS) is considerably more complex than in the above two-parameter case. However, all the component functions are readily available in software packages with automatic differentiation. Again, the positive parameters can be parametrized in terms of their logarithms to obtain an unconstrained numerical optimization problem.
For this problem, numerical optimization may fail unless the initial location and scale parameters are chosen appropriately. However, the above-mentioned interpretability of these parameters in the parametrization of B̄_σ can be used to do this. Specifically, the initial values for μ and s² can be set to the empirical mean and variance of the data.
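A sketch of the full four-parameter fit (ours, not from the source; fit_type4_ls is a hypothetical helper name), with the log-likelihood assembled from the change of variables above and the initialization just described; SciPy's finite-difference BFGS stands in for an autodiff gradient:

```python
# Sketch: four-parameter ML over (log alpha, log beta, mu, log s), initialized
# at the empirical mean and variance of the data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, digamma, log_expit, polygamma

def fit_type4_ls(y):
    def neg_avg_loglik(theta):
        a, b = np.exp(theta[:2])
        mu, s = theta[2], np.exp(theta[3])
        mu_t = digamma(a) - digamma(b)
        s_t = np.sqrt(polygamma(1, a) + polygamma(1, b))
        x = mu_t + s_t / s * (y - mu)          # map back to the standard family
        ll = np.mean(a * log_expit(x) + b * log_expit(-x))
        ll += np.log(s_t / s) - betaln(a, b)   # Jacobian term and normalizer
        return -ll

    theta0 = np.array([0.0, 0.0, y.mean(), 0.5 * np.log(y.var())])
    res = minimize(neg_avg_loglik, theta0, method="BFGS")
    a, b = np.exp(res.x[:2])
    return a, b, res.x[2], np.exp(res.x[3]) ** 2   # alpha, beta, mu, s^2

rng = np.random.default_rng(6)
x = np.log(rng.gamma(2.0, size=50_000)) - np.log(rng.gamma(1.0, size=50_000))
y = 3.0 + 2.0 * x  # true alpha=2, beta=1; mu, s^2 are y's mean and variance
print(fit_type4_ls(y))
```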
See also
Champernowne distribution, another generalization of the logistic distribution.