Logit-normal distribution
In probability theory, a logit-normal distribution is a probability distribution of a random variable whose logit has a normal distribution. If Y is a random variable with a normal distribution, and t is the standard logistic function, then X = t(Y) has a logit-normal distribution; likewise, if X is logit-normally distributed, then Y = logit(X) = log(X/(1 − X)) is normally distributed. It is also known as the logistic normal distribution, a name that often refers to its multinomial logit generalization.
A variable might be modeled as logit-normal if it is a proportion, which is bounded by zero and one, and where values of zero and one never occur.
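As a minimal sketch of the defining transformation (using NumPy and SciPy, whose expit implements the standard logistic function t):

    import numpy as np
    from scipy.special import expit, logit

    rng = np.random.default_rng(0)

    # If Y ~ Normal(0, 1), then X = expit(Y) is logit-normally distributed.
    y = rng.normal(loc=0.0, scale=1.0, size=5)
    x = expit(y)           # expit is the standard logistic function t
    print(x)               # all values lie strictly inside (0, 1)
    print(logit(x) - y)    # logit(X) recovers Y (zeros up to rounding)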
Characterization
Probability density function
The probability density function (PDF) of a logit-normal distribution, for 0 < x < 1, is:
{\displaystyle f_{X}(x;\mu ,\sigma )={\frac {1}{\sigma {\sqrt {2\pi }}}}\,{\frac {1}{x(1-x)}}\,e^{-{\frac {(\operatorname {logit} (x)-\mu )^{2}}{2\sigma ^{2}}}}}
where μ and σ are the mean and standard deviation of the variable’s logit (by definition, the variable’s logit is normally distributed).
Changing the sign of μ mirrors the density, in that f(x; μ, σ) = f(1 − x; −μ, σ), shifting the mode to the other side of 0.5 (the midpoint of the (0,1) interval).
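The density can be evaluated directly from the formula above; the following sketch (the function name logitnormal_pdf is illustrative) also checks the symmetry property numerically:

    import numpy as np
    from scipy.special import logit

    def logitnormal_pdf(x, mu, sigma):
        """Logit-normal density f_X(x; mu, sigma) for 0 < x < 1."""
        x = np.asarray(x, dtype=float)
        kernel = np.exp(-(logit(x) - mu) ** 2 / (2.0 * sigma ** 2))
        return kernel / (sigma * np.sqrt(2.0 * np.pi) * x * (1.0 - x))

    # Numerical check of the symmetry f(x; mu, sigma) = f(1 - x; -mu, sigma)
    xs = np.array([0.2, 0.4, 0.7])
    print(logitnormal_pdf(xs, 1.0, 0.8))
    print(logitnormal_pdf(1.0 - xs, -1.0, 0.8))   # identical values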
Moments
The moments of the logit-normal distribution have no analytic solution. They can be estimated by numerical integration; however, numerical integration can be prohibitive when the values of μ and σ² are such that the density function diverges to infinity at the endpoints zero and one. An alternative is to use the observation that the logit-normal is a transformation of a normal random variable, which allows the n-th moment to be approximated via the following quasi Monte Carlo estimate:
{\displaystyle E[X^{n}]\approx {\frac {1}{K-1}}\sum _{i=1}^{K-1}\left(P\left(\Phi _{\mu ,\sigma ^{2}}^{-1}(i/K)\right)\right)^{n},}
where P is the standard logistic function, and Φ_{μ,σ²}^{−1} is the inverse cumulative distribution function of a normal distribution with mean μ and variance σ². When n = 1, this corresponds to the mean.
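A minimal implementation of this estimator (SciPy's norm.ppf is the normal quantile function and expit is the standard logistic function; the function name is illustrative):

    import numpy as np
    from scipy.special import expit       # P, the standard logistic function
    from scipy.stats import norm

    def logitnormal_moment(n, mu, sigma, K=10_000):
        """Quasi Monte Carlo estimate of E[X^n] for X ~ logit-normal(mu, sigma^2)."""
        i = np.arange(1, K)                          # i = 1, ..., K - 1
        y = norm.ppf(i / K, loc=mu, scale=sigma)     # Phi^{-1}_{mu, sigma^2}(i / K)
        return np.mean(expit(y) ** n)

    # n = 1 gives the mean; for mu = 0 it is 0.5 by symmetry.
    print(logitnormal_moment(1, mu=0.0, sigma=1.0))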
Mode or modes
When the derivative of the density equals 0, the location of the mode x satisfies the following equation:
{\displaystyle \operatorname {logit} (x)=\sigma ^{2}(2x-1)+\mu .}
For some values of the parameters this equation has three solutions, two of which are local maxima of the density, i.e. the distribution is bimodal.
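The stationary points can be located numerically by bracketing sign changes of logit(x) − σ²(2x − 1) − μ on a fine grid; a sketch under these assumptions (the helper name is illustrative):

    import numpy as np
    from scipy.special import logit
    from scipy.optimize import brentq

    def logitnormal_stationary_points(mu, sigma, grid=10_000):
        """Roots of logit(x) - sigma^2 * (2x - 1) - mu = 0 on (0, 1)."""
        g = lambda t: logit(t) - sigma ** 2 * (2.0 * t - 1.0) - mu
        xs = np.linspace(1e-9, 1.0 - 1e-9, grid)
        vals = g(xs)
        # Bracket each sign change on the grid, then refine with Brent's method.
        return [brentq(g, a, b)
                for a, b, va, vb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:])
                if va * vb < 0.0]

    # mu = 0, sigma = 3: three stationary points; the outer two are the modes
    # of a bimodal density and the middle one (x = 0.5) is the antimode.
    print(logitnormal_stationary_points(0.0, 3.0))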
Multivariate generalization
The logistic normal distribution is a generalization of the logit-normal distribution to D-dimensional probability vectors by taking a logistic transformation of a multivariate normal distribution.
Probability density function
The probability density function is:
{\displaystyle f_{X}(\mathbf {x} ;{\boldsymbol {\mu }},{\boldsymbol {\Sigma }})={\frac {1}{|(2\pi )^{D-1}{\boldsymbol {\Sigma }}|^{\frac {1}{2}}}}\,{\frac {1}{\prod \limits _{i=1}^{D}x_{i}}}\,e^{-{\frac {1}{2}}\left\{\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)-{\boldsymbol {\mu }}\right\}^{\top }{\boldsymbol {\Sigma }}^{-1}\left\{\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)-{\boldsymbol {\mu }}\right\}}\quad ,\quad \mathbf {x} \in {\mathcal {S}}^{D}\;\;,}
where x_{−D} denotes the vector of the first (D − 1) components of x, and S^D denotes the simplex of D-dimensional probability vectors. This follows from applying the additive logistic transformation to map a multivariate normal random variable y ∼ N(μ, Σ), y ∈ R^{D−1}, to the simplex:
{\displaystyle \mathbf {x} =\left[{\frac {e^{y_{1}}}{1+\sum _{i=1}^{D-1}e^{y_{i}}}},\dots ,{\frac {e^{y_{D-1}}}{1+\sum _{i=1}^{D-1}e^{y_{i}}}},{\frac {1}{1+\sum _{i=1}^{D-1}e^{y_{i}}}}\right]^{\top }}
The unique inverse mapping is given by:
{\displaystyle \mathbf {y} =\left[\log \left({\frac {x_{1}}{x_{D}}}\right),\dots ,\log \left({\frac {x_{D-1}}{x_{D}}}\right)\right]^{\top }}.
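A small sketch of the forward map and its inverse (function names are illustrative), which also draws one logistic-normal sample on the simplex:

    import numpy as np

    def additive_logistic(y):
        """Map y in R^{D-1} to a probability vector x on the D-simplex."""
        e = np.exp(y)
        denom = 1.0 + e.sum()
        return np.append(e / denom, 1.0 / denom)

    def additive_logistic_inverse(x):
        """Log-ratios of the first D - 1 components to the last one."""
        return np.log(x[:-1] / x[-1])

    # One logistic-normal draw on the 3-simplex (D = 3, so y lies in R^2)
    rng = np.random.default_rng(0)
    y = rng.multivariate_normal(np.zeros(2), [[1.0, 0.5], [0.5, 1.0]])
    x = additive_logistic(y)
    print(x, x.sum())                        # components sum to one
    print(additive_logistic_inverse(x) - y)  # zeros: the inverse recovers y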
This is the case of a vector x whose components sum to one. In the case of x with sigmoidal elements, that is, when
{\displaystyle \mathbf {y} =\left[\log \left({\frac {x_{1}}{1-x_{1}}}\right),\dots ,\log \left({\frac {x_{D}}{1-x_{D}}}\right)\right]^{\top }}
we have
{\displaystyle f_{X}(\mathbf {x} ;{\boldsymbol {\mu }},{\boldsymbol {\Sigma }})={\frac {1}{|(2\pi )^{D-1}{\boldsymbol {\Sigma }}|^{\frac {1}{2}}}}\,{\frac {1}{\prod \limits _{i=1}^{D}\left(x_{i}(1-x_{i})\right)}}\,e^{-{\frac {1}{2}}\left\{\log \left({\frac {\mathbf {x} }{1-\mathbf {x} }}\right)-{\boldsymbol {\mu }}\right\}^{\top }{\boldsymbol {\Sigma }}^{-1}\left\{\log \left({\frac {\mathbf {x} }{1-\mathbf {x} }}\right)-{\boldsymbol {\mu }}\right\}}}
where the log and the division in the argument are taken element-wise. This is because the Jacobian matrix of the transformation is diagonal with elements 1/(x_i(1 − x_i)).
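By the change-of-variables formula, this density can equivalently be evaluated as the multivariate normal density of the elementwise logit divided by the Jacobian factor ∏ x_i(1 − x_i); a sketch (the function name is illustrative):

    import numpy as np
    from scipy.special import logit
    from scipy.stats import multivariate_normal

    def sigmoid_logitnormal_pdf(x, mu, Sigma):
        """Density of x in (0,1)^D whose elementwise logit is N(mu, Sigma)."""
        x = np.asarray(x, dtype=float)
        # Normal density of the elementwise logit, divided by the Jacobian
        # determinant prod_i x_i (1 - x_i) of the sigmoid transformation.
        return (multivariate_normal.pdf(logit(x), mean=mu, cov=Sigma)
                / np.prod(x * (1.0 - x)))

    print(sigmoid_logitnormal_pdf(np.array([0.2, 0.7, 0.5]),
                                  np.zeros(3), np.eye(3)))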
Use in statistical analysis
The logistic normal distribution is a more flexible alternative to the Dirichlet distribution in that it can capture correlations between components of probability vectors. It also has the potential to simplify statistical analyses of compositional data by allowing one to answer questions about log-ratios of the components of the data vectors. One is often interested in ratios rather than absolute component values.
The probability simplex is a bounded space, making standard techniques that are typically applied to vectors in R^n less meaningful. Statistician John Aitchison described the problem of spurious negative correlations when applying such methods directly to simplicial vectors. However, mapping compositional data in S^D through the inverse of the additive logistic transformation yields real-valued data in R^{D−1}. Standard techniques can be applied to this representation of the data. This approach justifies use of the logistic normal distribution, which can thus be regarded as the "Gaussian of the simplex".
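A toy illustration of this workflow (the data values are made up for the example):

    import numpy as np

    # Toy compositional data: each row is a probability vector in S^3.
    X = np.array([[0.20, 0.30, 0.50],
                  [0.10, 0.60, 0.30],
                  [0.25, 0.25, 0.50]])

    # Additive log-ratio transform to R^{D-1}, last component as reference
    Y = np.log(X[:, :-1] / X[:, -1:])

    # Standard multivariate tools now apply, e.g. mean vector and covariance.
    print(Y.mean(axis=0))
    print(np.cov(Y, rowvar=False))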
Relationship with the Dirichlet distribution
The Dirichlet and logistic normal distributions are never exactly equal for any choice of parameters. However, Aitchison described a method for approximating a Dirichlet with a logistic normal such that their Kullback–Leibler divergence (KL) is minimized:
{\displaystyle K(p,q)=\int _{{\mathcal {S}}^{D}}p\left(\mathbf {x} \mid {\boldsymbol {\alpha }}\right)\log \left({\frac {p\left(\mathbf {x} \mid {\boldsymbol {\alpha }}\right)}{q\left(\mathbf {x} \mid {\boldsymbol {\mu }},{\boldsymbol {\Sigma }}\right)}}\right)\,d\mathbf {x} }
This is minimized by:
{\displaystyle {\boldsymbol {\mu }}^{*}=\mathbf {E} _{p}\left[\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)\right]\quad ,\quad {\boldsymbol {\Sigma }}^{*}={\textbf {Var}}_{p}\left[\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)\right]}
Using moment properties of the Dirichlet distribution, the solution can be written in terms of the digamma function ψ and trigamma function ψ′:
{\displaystyle \mu _{i}^{*}=\psi \left(\alpha _{i}\right)-\psi \left(\alpha _{D}\right)\quad ,\quad i=1,\ldots ,D-1}
{\displaystyle \Sigma _{ii}^{*}=\psi '\left(\alpha _{i}\right)+\psi '\left(\alpha _{D}\right)\quad ,\quad i=1,\ldots ,D-1}
{\displaystyle \Sigma _{ij}^{*}=\psi '\left(\alpha _{D}\right)\quad ,\quad i\neq j}
This approximation is particularly accurate for large α. In fact, one can show that for α_i → ∞, i = 1, …, D, we have p(x ∣ α) → q(x ∣ μ*, Σ*).
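These expressions translate directly into code; a sketch using SciPy's digamma and polygamma (the function name is illustrative):

    import numpy as np
    from scipy.special import digamma, polygamma

    def dirichlet_to_logistic_normal(alpha):
        """KL-minimizing logistic-normal (mu*, Sigma*) for Dirichlet(alpha)."""
        alpha = np.asarray(alpha, dtype=float)
        a, aD = alpha[:-1], alpha[-1]
        mu = digamma(a) - digamma(aD)                # psi(alpha_i) - psi(alpha_D)
        Sigma = np.full((a.size, a.size), polygamma(1, aD))   # off-diagonal terms
        Sigma[np.diag_indices_from(Sigma)] = polygamma(1, a) + polygamma(1, aD)
        return mu, Sigma

    mu_star, Sigma_star = dirichlet_to_logistic_normal([2.0, 3.0, 4.0])
    print(mu_star)
    print(Sigma_star)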
See also
Beta distribution and Kumaraswamy distribution, other two-parameter distributions on a bounded interval with similar shapes
Further reading
Frederic, P. & Lad, F. (2008). "Two Moments of the Logitnormal Distribution". Communications in Statistics – Simulation and Computation. 37: 1263–1269.
Mead, R. (1965). "A Generalised Logit-Normal Distribution". Biometrics. 21 (3): 721–732. doi:10.2307/2528553. JSTOR 2528553. PMID 5858101.
External links
logitnorm package for R