- Source: Wishart distribution
In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928. Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles"), Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), or LOE, LUE, LSE (in analogy with GOE, GUE, GSE).
It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse of the covariance matrix of a multivariate normal random vector.
Definition
Suppose G is a p × n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:
{\displaystyle G=(g_{1},\dots ,g_{n})\sim {\mathcal {N}}_{p}(0,V).}
Then the Wishart distribution is the probability distribution of the p × p random matrix
{\displaystyle S=GG^{T}=\sum _{i=1}^{n}g_{i}g_{i}^{T}}
known as the scatter matrix. One indicates that S has that probability distribution by writing
{\displaystyle S\sim W_{p}(V,n).}
The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.
If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
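This construction translates directly into code. The sketch below (assuming NumPy is available; `wishart_sample` is a name chosen here for illustration) draws n columns from N_p(0, V), forms the scatter matrix, and checks the p = V = 1 special case against the chi-squared mean.

```python
import numpy as np

def wishart_sample(V, n, rng):
    """Draw S = G G^T ~ W_p(V, n): stack n independent
    N_p(0, V) columns into G and form the scatter matrix."""
    p = V.shape[0]
    L = np.linalg.cholesky(V)             # V = L L^T
    G = L @ rng.standard_normal((p, n))   # each column ~ N_p(0, V)
    return G @ G.T

rng = np.random.default_rng(0)
V = np.array([[2.0, 0.5], [0.5, 1.0]])
S = wishart_sample(V, n=10, rng=rng)
assert np.allclose(S, S.T)                     # symmetric
assert np.all(np.linalg.eigvalsh(S) >= -1e-8)  # positive semi-definite

# p = V = 1: W_1(1, n) is chi-squared with n degrees of freedom, mean n
samples = [wishart_sample(np.eye(1), 7, rng)[0, 0] for _ in range(20000)]
print(round(float(np.mean(samples)), 1))  # close to 7
```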
Occurrence
The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices and in multidimensional Bayesian analysis. It is also encountered in wireless communications, in analyzing the performance of Rayleigh fading MIMO wireless channels.
Probability density function
The Wishart distribution can be characterized by its probability density function as follows:
Let X be a p × p symmetric matrix of random variables that is positive semi-definite. Let V be a (fixed) symmetric positive definite matrix of size p × p.
Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has the probability density function
{\displaystyle f_{\mathbf {X} }(\mathbf {X} )={\frac {1}{2^{np/2}\left|{\mathbf {V} }\right|^{n/2}\Gamma _{p}\left({\frac {n}{2}}\right)}}{\left|\mathbf {X} \right|}^{(n-p-1)/2}e^{-{\frac {1}{2}}\operatorname {tr} ({\mathbf {V} }^{-1}\mathbf {X} )}}
where |X| is the determinant of X and Γp is the multivariate gamma function, defined as
{\displaystyle \Gamma _{p}\left({\frac {n}{2}}\right)=\pi ^{p(p-1)/4}\prod _{j=1}^{p}\Gamma \left({\frac {n}{2}}-{\frac {j-1}{2}}\right).}
The density above is not the joint density of all the p² elements of the random matrix X (such a p²-dimensional density does not exist because of the symmetry constraints X_ij = X_ji); it is rather the joint density of the p(p + 1)/2 elements X_ij for i ≤ j (, page 38). Also, the density formula above applies only to positive definite matrices X; for other matrices the density is equal to zero.
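As a sanity check on this density, the following sketch (assuming SciPy is available; `wishart_logpdf` is a helper name chosen here) implements the log of the formula above using `scipy.special.multigammaln` and compares it against `scipy.stats.wishart`.

```python
import numpy as np
from scipy.special import multigammaln
from scipy.stats import wishart

def wishart_logpdf(X, V, n):
    """Log of the density above: (n-p-1)/2 * log|X| - tr(V^{-1} X)/2
    minus the log normalizing constant."""
    p = V.shape[0]
    _, logdet_X = np.linalg.slogdet(X)
    _, logdet_V = np.linalg.slogdet(V)
    log_norm = (n * p / 2) * np.log(2) + (n / 2) * logdet_V + multigammaln(n / 2, p)
    return (n - p - 1) / 2 * logdet_X - 0.5 * np.trace(np.linalg.inv(V) @ X) - log_norm

V = np.array([[2.0, 0.3], [0.3, 1.0]])
X = np.array([[5.0, 1.0], [1.0, 4.0]])
n = 6
# agrees with SciPy's implementation of the same density
print(np.isclose(wishart_logpdf(X, V, n), wishart.logpdf(X, df=n, scale=V)))  # True
```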
Spectral density
The joint-eigenvalue density for the eigenvalues
{\displaystyle \lambda _{1},\dots ,\lambda _{p}\geq 0}
of a random matrix X ∼ Wp(I, n) is
{\displaystyle c_{n,p}e^{-{\frac {1}{2}}\sum _{i}\lambda _{i}}\prod _{i}\lambda _{i}^{(n-p-1)/2}\prod _{i<j}|\lambda _{i}-\lambda _{j}|}
where c_{n,p} is a constant.
In fact the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart no longer has a density—instead it represents a singular distribution that takes values in a lower-dimension subspace of the space of p × p matrices.
Use in Bayesian statistics
In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix Ω = Σ−1, where Σ is the covariance matrix (p. 135).
Choice of parameters
The least informative, proper Wishart prior is obtained by setting n = p.
A common choice for V leverages the fact that the mean of X ~ Wp(V, n) is nV. V is then chosen so that nV equals an initial guess for X. For instance, when estimating a precision matrix Σ−1 ~ Wp(V, n), a reasonable choice for V would be n−1Σ0−1, where Σ0 is some prior estimate of the covariance matrix Σ.
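The conjugate update that motivates these choices can be sketched as follows (assuming NumPy; `wishart_posterior` is a name invented here, and the zero-mean likelihood with its update rule is the standard conjugacy result, not a formula stated in this article): observing N zero-mean normal vectors turns a Wp(V, n) prior on the precision into a Wp((V⁻¹ + S)⁻¹, n + N) posterior, where S is the data scatter matrix.

```python
import numpy as np

def wishart_posterior(V, n, data):
    """Conjugate update of a W_p(V, n) prior on the precision matrix,
    given zero-mean multivariate normal observations (rows of `data`).
    Standard result: posterior is W_p((V^{-1} + S)^{-1}, n + N),
    where S = data^T data is the scatter matrix."""
    N, p = data.shape
    S = data.T @ data
    V_post = np.linalg.inv(np.linalg.inv(V) + S)
    return V_post, n + N

rng = np.random.default_rng(1)
p = 2
prior_n = p                  # least informative proper prior uses n = p
prior_V = np.eye(p) / p      # so that nV = I is the initial guess for X
data = rng.standard_normal((50, p))
V_post, n_post = wishart_posterior(prior_V, prior_n, data)
print(n_post)  # 52: degrees of freedom grow by one per observation
```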
Properties
Log-expectation
The following formula plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution. From equation (2.63),
{\displaystyle \operatorname {E} [\,\ln \left|\mathbf {X} \right|\,]=\psi _{p}\left({\frac {n}{2}}\right)+p\,\ln(2)+\ln |\mathbf {V} |}
where ψp is the multivariate digamma function (the derivative of the log of the multivariate gamma function).
Log-variance
The following variance computation could be of help in Bayesian statistics:
{\displaystyle \operatorname {Var} \left[\,\ln \left|\mathbf {X} \right|\,\right]=\sum _{i=1}^{p}\psi _{1}\left({\frac {n+1-i}{2}}\right)}
where ψ1 is the trigamma function. This comes up when computing the Fisher information of the Wishart random variable.
Entropy
The information entropy of the distribution has the following formula (p. 693):
{\displaystyle \operatorname {H} \left[\,\mathbf {X} \,\right]=-\ln \left(B(\mathbf {V} ,n)\right)-{\frac {n-p-1}{2}}\operatorname {E} \left[\,\ln \left|\mathbf {X} \right|\,\right]+{\frac {np}{2}}}
where B(V, n) is the normalizing constant of the distribution:
{\displaystyle B(\mathbf {V} ,n)={\frac {1}{\left|\mathbf {V} \right|^{n/2}2^{np/2}\Gamma _{p}\left({\frac {n}{2}}\right)}}.}
This can be expanded as follows:
{\displaystyle {\begin{aligned}\operatorname {H} \left[\,\mathbf {X} \,\right]&={\frac {n}{2}}\ln \left|\mathbf {V} \right|+{\frac {np}{2}}\ln 2+\ln \Gamma _{p}\left({\frac {n}{2}}\right)-{\frac {n-p-1}{2}}\operatorname {E} \left[\,\ln \left|\mathbf {X} \right|\,\right]+{\frac {np}{2}}\\[8pt]&={\frac {n}{2}}\ln \left|\mathbf {V} \right|+{\frac {np}{2}}\ln 2+\ln \Gamma _{p}\left({\frac {n}{2}}\right)-{\frac {n-p-1}{2}}\left(\psi _{p}\left({\frac {n}{2}}\right)+p\ln 2+\ln \left|\mathbf {V} \right|\right)+{\frac {np}{2}}\\[8pt]&={\frac {n}{2}}\ln \left|\mathbf {V} \right|+{\frac {np}{2}}\ln 2+\ln \Gamma _{p}\left({\frac {n}{2}}\right)-{\frac {n-p-1}{2}}\psi _{p}\left({\frac {n}{2}}\right)-{\frac {n-p-1}{2}}\left(p\ln 2+\ln \left|\mathbf {V} \right|\right)+{\frac {np}{2}}\\[8pt]&={\frac {p+1}{2}}\ln \left|\mathbf {V} \right|+{\frac {1}{2}}p(p+1)\ln 2+\ln \Gamma _{p}\left({\frac {n}{2}}\right)-{\frac {n-p-1}{2}}\psi _{p}\left({\frac {n}{2}}\right)+{\frac {np}{2}}\end{aligned}}}
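The first and last lines of this derivation can be checked against each other numerically (assuming SciPy; the function names here are illustrative):

```python
import numpy as np
from scipy.special import multigammaln, psi

def multivariate_digamma(a, p):
    return sum(psi(a - (j - 1) / 2) for j in range(1, p + 1))

def entropy_from_B(V, n):
    # H = -ln B(V, n) - (n - p - 1)/2 * E[ln|X|] + np/2
    p = V.shape[0]
    _, logdet_V = np.linalg.slogdet(V)
    neg_log_B = n / 2 * logdet_V + n * p / 2 * np.log(2) + multigammaln(n / 2, p)
    E_logdet = multivariate_digamma(n / 2, p) + p * np.log(2) + logdet_V
    return neg_log_B - (n - p - 1) / 2 * E_logdet + n * p / 2

def entropy_closed_form(V, n):
    # fully expanded final line of the derivation
    p = V.shape[0]
    _, logdet_V = np.linalg.slogdet(V)
    return ((p + 1) / 2 * logdet_V + p * (p + 1) / 2 * np.log(2)
            + multigammaln(n / 2, p)
            - (n - p - 1) / 2 * multivariate_digamma(n / 2, p)
            + n * p / 2)

V = np.array([[3.0, 0.4], [0.4, 2.0]])
print(np.isclose(entropy_from_B(V, 5), entropy_closed_form(V, 5)))  # True
```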
Cross-entropy
The cross-entropy of two Wishart distributions p0 with parameters n0, V0 and p1 with parameters n1, V1 is
{\displaystyle {\begin{aligned}H(p_{0},p_{1})&=\operatorname {E} _{p_{0}}[\,-\log p_{1}\,]\\[8pt]&=\operatorname {E} _{p_{0}}\left[\,-\log {\frac {\left|\mathbf {X} \right|^{(n_{1}-p_{1}-1)/2}e^{-\operatorname {tr} (\mathbf {V} _{1}^{-1}\mathbf {X} )/2}}{2^{n_{1}p_{1}/2}\left|\mathbf {V} _{1}\right|^{n_{1}/2}\Gamma _{p_{1}}\left({\tfrac {n_{1}}{2}}\right)}}\right]\\[8pt]&={\tfrac {n_{1}p_{1}}{2}}\log 2+{\tfrac {n_{1}}{2}}\log \left|\mathbf {V} _{1}\right|+\log \Gamma _{p_{1}}({\tfrac {n_{1}}{2}})-{\tfrac {n_{1}-p_{1}-1}{2}}\operatorname {E} _{p_{0}}\left[\,\log \left|\mathbf {X} \right|\,\right]+{\tfrac {1}{2}}\operatorname {E} _{p_{0}}\left[\,\operatorname {tr} \left(\,\mathbf {V} _{1}^{-1}\mathbf {X} \,\right)\,\right]\\[8pt]&={\tfrac {n_{1}p_{1}}{2}}\log 2+{\tfrac {n_{1}}{2}}\log \left|\mathbf {V} _{1}\right|+\log \Gamma _{p_{1}}({\tfrac {n_{1}}{2}})-{\tfrac {n_{1}-p_{1}-1}{2}}\left(\psi _{p_{0}}({\tfrac {n_{0}}{2}})+p_{0}\log 2+\log \left|\mathbf {V} _{0}\right|\right)+{\tfrac {1}{2}}\operatorname {tr} \left(\,\mathbf {V} _{1}^{-1}n_{0}\mathbf {V} _{0}\,\right)\\[8pt]&=-{\tfrac {n_{1}}{2}}\log \left|\,\mathbf {V} _{1}^{-1}\mathbf {V} _{0}\,\right|+{\tfrac {p_{1}+1}{2}}\log \left|\mathbf {V} _{0}\right|+{\tfrac {n_{0}}{2}}\operatorname {tr} \left(\,\mathbf {V} _{1}^{-1}\mathbf {V} _{0}\right)+\log \Gamma _{p_{1}}\left({\tfrac {n_{1}}{2}}\right)-{\tfrac {n_{1}-p_{1}-1}{2}}\psi _{p_{0}}({\tfrac {n_{0}}{2}})+{\tfrac {n_{1}(p_{1}-p_{0})+p_{0}(p_{1}+1)}{2}}\log 2\end{aligned}}}
Note that when p0 = p1 and n0 = n1 we recover the entropy.
KL-divergence
The Kullback–Leibler divergence of p1 from p0 is
{\displaystyle {\begin{aligned}D_{KL}(p_{0}\|p_{1})&=H(p_{0},p_{1})-H(p_{0})\\[6pt]&=-{\frac {n_{1}}{2}}\log |\mathbf {V} _{1}^{-1}\mathbf {V} _{0}|+{\frac {n_{0}}{2}}(\operatorname {tr} (\mathbf {V} _{1}^{-1}\mathbf {V} _{0})-p)+\log {\frac {\Gamma _{p}\left({\frac {n_{1}}{2}}\right)}{\Gamma _{p}\left({\frac {n_{0}}{2}}\right)}}+{\tfrac {n_{0}-n_{1}}{2}}\psi _{p}\left({\frac {n_{0}}{2}}\right)\end{aligned}}}
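A direct implementation of this formula (assuming SciPy; `wishart_kl` is a name chosen here) lets one verify two basic properties of a KL divergence: it vanishes at equal parameters and is nonnegative.

```python
import numpy as np
from scipy.special import multigammaln, psi

def multivariate_digamma(a, p):
    return sum(psi(a - (j - 1) / 2) for j in range(1, p + 1))

def wishart_kl(V0, n0, V1, n1):
    """KL divergence D(W_p(V0, n0) || W_p(V1, n1)), per the formula above."""
    p = V0.shape[0]
    M = np.linalg.inv(V1) @ V0
    _, logdet_M = np.linalg.slogdet(M)
    return (-n1 / 2 * logdet_M
            + n0 / 2 * (np.trace(M) - p)
            + multigammaln(n1 / 2, p) - multigammaln(n0 / 2, p)
            + (n0 - n1) / 2 * multivariate_digamma(n0 / 2, p))

V = np.array([[2.0, 0.5], [0.5, 1.5]])
print(abs(wishart_kl(V, 6, V, 6)) < 1e-9)   # True: zero at equal parameters
print(wishart_kl(V, 6, np.eye(2), 9) >= 0)  # True: KL is always nonnegative
```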
Characteristic function
The characteristic function of the Wishart distribution is
{\displaystyle \Theta \mapsto \operatorname {E} \left[\,\exp \left(\,i\operatorname {tr} \left(\,\mathbf {X} {\mathbf {\Theta } }\,\right)\,\right)\,\right]=\left|\,1-2i\,{\mathbf {\Theta } }\,{\mathbf {V} }\,\right|^{-n/2}}
where E[⋅] denotes expectation. (Here Θ is any matrix with the same dimensions as V, 1 indicates the identity matrix, and i is a square root of −1). Properly interpreting this formula requires a little care, because noninteger complex powers are multivalued; when n is noninteger, the correct branch must be determined via analytic continuation.
Theorem
If a p × p random matrix X has a Wishart distribution with m degrees of freedom and variance matrix V — write
{\displaystyle \mathbf {X} \sim {\mathcal {W}}_{p}({\mathbf {V} },m)}
— and C is a q × p matrix of rank q, then
{\displaystyle \mathbf {C} \mathbf {X} {\mathbf {C} }^{T}\sim {\mathcal {W}}_{q}\left({\mathbf {C} }{\mathbf {V} }{\mathbf {C} }^{T},m\right).}
Corollary 1
If z is a nonzero p × 1 constant vector, then:
{\displaystyle \sigma _{z}^{-2}\,{\mathbf {z} }^{T}\mathbf {X} {\mathbf {z} }\sim \chi _{m}^{2}.}
In this case, χ²_m is the chi-squared distribution with m degrees of freedom and σ_z² = z^T V z (note that σ_z² is a constant; it is positive because V is positive definite).
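This corollary is easy to probe by simulation (assuming SciPy): the quadratic form z^T X z, divided by the constant σ_z² = z^T V z, should behave like a χ²_m variable, so in particular its sample mean should be close to m.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(7)
p, m = 3, 8
V = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.5]])
z = np.array([1.0, -2.0, 0.5])
sigma2 = z @ V @ z  # z^T V z, a positive constant

draws = wishart.rvs(df=m, scale=V, size=30000, random_state=rng)
q = np.array([z @ X @ z for X in draws]) / sigma2  # should be ~ chi^2_m

print(round(float(q.mean()), 2))  # sample mean should be near m = 8
```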
Corollary 2
Consider the case where zT = (0, ..., 0, 1, 0, ..., 0) (that is, the j-th element is one and all others zero). Then corollary 1 above shows that
{\displaystyle \sigma _{jj}^{-1}\,w_{jj}\sim \chi _{m}^{2}}
gives the marginal distribution of each of the elements on the matrix's diagonal.
George Seber points out that the Wishart distribution is not called the “multivariate chi-squared distribution” because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family.
Estimator of the multivariate normal distribution
The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution. A derivation of the MLE uses the spectral theorem.
Bartlett decomposition
The Bartlett decomposition of a matrix X from a p-variate Wishart distribution with scale matrix V and n degrees of freedom is the factorization:
{\displaystyle \mathbf {X} ={\textbf {L}}{\textbf {A}}{\textbf {A}}^{T}{\textbf {L}}^{T},}
where L is the Cholesky factor of V, and:
{\displaystyle \mathbf {A} ={\begin{pmatrix}c_{1}&0&0&\cdots &0\\n_{21}&c_{2}&0&\cdots &0\\n_{31}&n_{32}&c_{3}&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\n_{p1}&n_{p2}&n_{p3}&\cdots &c_{p}\end{pmatrix}}}
where
{\displaystyle c_{i}^{2}\sim \chi _{n-i+1}^{2}}
and nij ~ N(0, 1) independently. This provides a useful method for obtaining random samples from a Wishart distribution.
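A sketch of this sampling method (assuming NumPy; `wishart_rvs_bartlett` is an illustrative name). Note the 0-based loop index below: the squared i-th diagonal entry is χ² with n − i degrees of freedom, which matches c_i² ∼ χ²_{n−i+1} in the 1-based notation above. The sample mean should approach E[X] = nV.

```python
import numpy as np

def wishart_rvs_bartlett(V, n, rng):
    """Sample X ~ W_p(V, n) via the Bartlett decomposition:
    X = L A A^T L^T, with L the Cholesky factor of V, diagonal
    entries A_ii = c_i where c_i^2 is chi-squared, and independent
    N(0, 1) entries below the diagonal."""
    p = V.shape[0]
    L = np.linalg.cholesky(V)
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(n - i))  # chi^2_{n-i+1} in 1-based terms
        A[i, :i] = rng.standard_normal(i)        # sub-diagonal N(0, 1) entries
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(3)
V = np.array([[1.0, 0.4], [0.4, 2.0]])
n = 5
mean_est = np.mean([wishart_rvs_bartlett(V, n, rng) for _ in range(20000)], axis=0)
print(np.allclose(mean_est, n * V, atol=0.25))  # E[X] = nV
```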
Marginal distribution of matrix elements
Let V be a 2 × 2 variance matrix characterized by correlation coefficient −1 < ρ < 1 and L its lower Cholesky factor:
{\displaystyle \mathbf {V} ={\begin{pmatrix}\sigma _{1}^{2}&\rho \sigma _{1}\sigma _{2}\\\rho \sigma _{1}\sigma _{2}&\sigma _{2}^{2}\end{pmatrix}},\qquad \mathbf {L} ={\begin{pmatrix}\sigma _{1}&0\\\rho \sigma _{2}&{\sqrt {1-\rho ^{2}}}\sigma _{2}\end{pmatrix}}}
Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 × 2 Wishart distribution is
{\displaystyle \mathbf {X} ={\begin{pmatrix}\sigma _{1}^{2}c_{1}^{2}&\sigma _{1}\sigma _{2}\left(\rho c_{1}^{2}+{\sqrt {1-\rho ^{2}}}c_{1}n_{21}\right)\\\sigma _{1}\sigma _{2}\left(\rho c_{1}^{2}+{\sqrt {1-\rho ^{2}}}c_{1}n_{21}\right)&\sigma _{2}^{2}\left(\left(1-\rho ^{2}\right)c_{2}^{2}+\left({\sqrt {1-\rho ^{2}}}n_{21}+\rho c_{1}\right)^{2}\right)\end{pmatrix}}}
The diagonal elements, most evidently in the first element, follow the χ2 distribution with n degrees of freedom (scaled by σ2) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a χ2 distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution
{\displaystyle f(x_{12})={\frac {\left|x_{12}\right|^{\frac {n-1}{2}}}{\Gamma \left({\frac {n}{2}}\right){\sqrt {2^{n-1}\pi \left(1-\rho ^{2}\right)\left(\sigma _{1}\sigma _{2}\right)^{n+1}}}}}\cdot K_{\frac {n-1}{2}}\left({\frac {\left|x_{12}\right|}{\sigma _{1}\sigma _{2}\left(1-\rho ^{2}\right)}}\right)\exp {\left({\frac {\rho x_{12}}{\sigma _{1}\sigma _{2}(1-\rho ^{2})}}\right)}}
where Kν(z) is the modified Bessel function of the second kind. Similar results may be found for higher dimensions. In general, if X follows a Wishart distribution with parameters Σ, n, then for i ≠ j the off-diagonal elements satisfy
{\displaystyle X_{ij}\sim {\text{VG}}(n,\Sigma _{ij},(\Sigma _{ii}\Sigma _{jj}-\Sigma _{ij}^{2})^{1/2},0).}
It is also possible to write down the moment-generating function even in the noncentral case (essentially the nth power of Craig (1936) equation 10) although the probability density becomes an infinite sum of Bessel functions.
The range of the shape parameter
It can be shown that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set
{\displaystyle \Lambda _{p}:=\{0,\ldots ,p-1\}\cup \left(p-1,\infty \right).}
This set is named after Gindikin, who introduced it in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely,
{\displaystyle \Lambda _{p}^{*}:=\{0,\ldots ,p-1\},}
the corresponding Wishart distribution has no Lebesgue density.
Relationships to other distributions
The Wishart distribution is related to the inverse-Wishart distribution, denoted by Wp−1, as follows: If X ~ Wp(V, n) and we make the change of variables C = X−1, then
{\displaystyle \mathbf {C} \sim W_{p}^{-1}(\mathbf {V} ^{-1},n).}
This relationship may be derived by noting that the absolute value of the Jacobian determinant of this change of variables is |C|p+1; see for example equation (15.15) in.
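The Jacobian relation can be verified numerically (assuming SciPy, whose `invwishart` parameterizes the scale as Ψ = V−1): the inverse-Wishart log-density at C must equal the Wishart log-density at X = C−1 minus (p + 1) log det C.

```python
import numpy as np
from scipy.stats import wishart, invwishart

p, n = 2, 7
V = np.array([[2.0, 0.6], [0.6, 1.5]])
X = np.array([[3.0, 0.5], [0.5, 2.0]])
C = np.linalg.inv(X)

# change of variables: f_C(C) = f_X(C^{-1}) * |C|^{-(p+1)}
lhs = invwishart.logpdf(C, df=n, scale=np.linalg.inv(V))
rhs = wishart.logpdf(X, df=n, scale=V) - (p + 1) * np.log(np.linalg.det(C))
print(np.isclose(lhs, rhs))  # True
```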
In Bayesian statistics, the Wishart distribution is a conjugate prior for the precision parameter of the multivariate normal distribution, when the mean parameter is known.
A generalization is the multivariate gamma distribution.
A different type of generalization is the normal-Wishart distribution, essentially the product of a multivariate normal distribution with a Wishart distribution.