Marginal likelihood
A marginal likelihood is a likelihood function that has been integrated over the parameter space. In Bayesian statistics, it represents the probability of generating the observed sample for all possible values of the parameters; it can be understood as the probability of the model itself and is therefore often referred to as model evidence or simply evidence.
Due to the integration over the parameter space, the marginal likelihood does not directly depend upon the parameters. If the focus is not on model comparison, the marginal likelihood is simply the normalizing constant that ensures that the posterior is a proper probability. It is related to the partition function in statistical mechanics.
Concept
Given a set of independent identically distributed data points $\mathbf{X} = (x_1, \ldots, x_n)$, where $x_i \sim p(x \mid \theta)$ according to some probability distribution parameterized by $\theta$, where $\theta$ itself is a random variable described by a distribution, i.e. $\theta \sim p(\theta \mid \alpha)$, the marginal likelihood in general asks what the probability $p(\mathbf{X} \mid \alpha)$ is, where $\theta$ has been marginalized out (integrated out):

$$p(\mathbf{X} \mid \alpha) = \int_\theta p(\mathbf{X} \mid \theta)\, p(\theta \mid \alpha)\, \mathrm{d}\theta$$
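To make the definition concrete, here is a minimal numerical sketch (not part of the original article): a Bernoulli likelihood with a Beta(a, b) prior playing the role of $p(\theta \mid \alpha)$, where the marginal likelihood is evaluated both by quadrature and via the known Beta-Binomial closed form. The data and hyperparameter values are arbitrary illustrations.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln

# Hypothetical example: coin-flip data, Beta(a, b) prior on the success probability theta.
x = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # observed Bernoulli data
a, b = 2.0, 2.0                           # prior hyperparameters (the "alpha" above)
k, n = x.sum(), len(x)

def likelihood(theta):
    """p(X | theta): product of Bernoulli terms."""
    return theta**k * (1.0 - theta)**(n - k)

def prior(theta):
    """p(theta | alpha): Beta(a, b) density."""
    return theta**(a - 1) * (1.0 - theta)**(b - 1) / np.exp(betaln(a, b))

# Marginal likelihood p(X | alpha) = integral of p(X | theta) p(theta | alpha) over theta
marginal, _ = quad(lambda t: likelihood(t) * prior(t), 0.0, 1.0)

# Closed form for the Beta-Bernoulli model: B(a + k, b + n - k) / B(a, b)
closed_form = np.exp(betaln(a + k, b + n - k) - betaln(a, b))

print(marginal, closed_form)  # the two values agree up to quadrature error
```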
The above definition is phrased in the context of Bayesian statistics, in which case $p(\theta \mid \alpha)$ is called the prior density and $p(\mathbf{X} \mid \theta)$ is the likelihood. The marginal likelihood quantifies the agreement between data and prior in a geometric sense made precise in de Carvalho et al. (2019). In classical (frequentist) statistics, the concept of marginal likelihood occurs instead in the context of a joint parameter $\theta = (\psi, \lambda)$, where $\psi$ is the actual parameter of interest and $\lambda$ is a non-interesting nuisance parameter. If there exists a probability distribution for $\lambda$, it is often desirable to consider the likelihood function only in terms of $\psi$, by marginalizing out $\lambda$:

$$\mathcal{L}(\psi; \mathbf{X}) = p(\mathbf{X} \mid \psi) = \int_\lambda p(\mathbf{X} \mid \lambda, \psi)\, p(\lambda \mid \psi)\, \mathrm{d}\lambda$$
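As an illustration of this nuisance-parameter version, the sketch below marginalizes an unknown variance $\lambda$ out of a normal likelihood, leaving a function of the mean $\psi$ alone. The inverse-gamma distribution assumed for $\lambda$, its independence from $\psi$, and all numeric values are purely illustrative choices, not something prescribed by the article.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, invgamma

# Hypothetical example: normal data with mean psi (parameter of interest) and
# variance lam (nuisance), where lam is given an inverse-gamma distribution.
x = np.array([0.3, -0.1, 0.8, 0.4, 0.2])
a0, b0 = 3.0, 2.0                      # illustrative hyperparameters of p(lam)

def marginal_likelihood(psi):
    """L(psi; X) = integral of p(X | lam, psi) p(lam) over lam, by quadrature."""
    def integrand(lam):
        lik = np.prod(norm.pdf(x, loc=psi, scale=np.sqrt(lam)))
        return lik * invgamma.pdf(lam, a0, scale=b0)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

# The marginalized likelihood can now be examined as a function of psi alone.
for psi in (0.0, 0.3, 0.6):
    print(psi, marginal_likelihood(psi))
```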
Unfortunately, marginal likelihoods are generally difficult to compute. Exact solutions are known for a small class of distributions, particularly when the prior of the marginalized-out parameter is conjugate to the distribution of the data. In other cases, some kind of numerical integration method is needed, either a general method such as Gaussian integration or a Monte Carlo method, or a method specialized to statistical problems such as the Laplace approximation, Gibbs/Metropolis sampling, or the EM algorithm.
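The simplest of the Monte Carlo schemes mentioned above is to draw parameters from the prior and average the likelihood. A minimal sketch, reusing the illustrative Beta-Bernoulli setup from earlier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Naive Monte Carlo estimate of p(X | alpha): draw theta from the prior and
# average the likelihood over the draws.
x = np.array([1, 0, 1, 1, 0, 1, 1, 1])
a, b = 2.0, 2.0
k, n = x.sum(), len(x)

theta_draws = rng.beta(a, b, size=200_000)             # theta ~ p(theta | alpha)
likelihoods = theta_draws**k * (1.0 - theta_draws)**(n - k)
estimate = likelihoods.mean()                          # approximates p(X | alpha)

print(estimate)  # close to the quadrature / closed-form value in the earlier sketch
```

This naive estimator is unbiased but can have very high variance when the posterior is much more concentrated than the prior, which is one reason specialized schemes such as the Laplace approximation or importance and bridge sampling are often preferred in practice.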
It is also possible to apply the above considerations to a single random variable (data point) $x$, rather than a set of observations. In a Bayesian context, this is equivalent to the prior predictive distribution of a data point.
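For example, in the illustrative Beta-Bernoulli setup used above, the prior predictive probability that a single observation equals 1 is simply the prior mean of $\theta$:

$$p(x = 1 \mid \alpha) = \int_0^1 \theta \,\mathrm{Beta}(\theta; a, b)\, \mathrm{d}\theta = \frac{a}{a + b}.$$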
Applications
= Bayesian model comparison =
In Bayesian model comparison, the marginalized variables $\theta$ are parameters for a particular type of model, and the remaining variable $M$ is the identity of the model itself. In this case, the marginalized likelihood is the probability of the data given the model type, not assuming any particular model parameters. Writing $\theta$ for the model parameters, the marginal likelihood for the model $M$ is

$$p(\mathbf{X} \mid M) = \int p(\mathbf{X} \mid \theta, M)\, p(\theta \mid M)\, \mathrm{d}\theta$$
It is in this context that the term model evidence is normally used. This quantity is important because the posterior odds ratio for a model $M_1$ against another model $M_2$ involves a ratio of marginal likelihoods, called the Bayes factor:

$$\frac{p(M_1 \mid \mathbf{X})}{p(M_2 \mid \mathbf{X})} = \frac{p(M_1)}{p(M_2)}\, \frac{p(\mathbf{X} \mid M_1)}{p(\mathbf{X} \mid M_2)}$$
which can be stated schematically as
posterior odds = prior odds × Bayes factor
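A minimal sketch of this calculation, under assumed conditions not taken from the article: equal prior model probabilities and two illustrative models for the same coin-flip data, one with $\theta$ fixed at 1/2 and one with a uniform prior on $\theta$ marginalized out.

```python
import numpy as np
from scipy.special import betaln

# Hypothetical comparison of two models for the same coin-flip data:
#   M1: theta fixed at 0.5 (no free parameters)
#   M2: theta ~ Beta(1, 1), i.e. uniform prior, marginalized out
x = np.array([1, 0, 1, 1, 0, 1, 1, 1])
k, n = x.sum(), len(x)

evidence_m1 = 0.5**n                                           # p(X | M1)
evidence_m2 = np.exp(betaln(1 + k, 1 + n - k) - betaln(1, 1))  # p(X | M2)

bayes_factor = evidence_m2 / evidence_m1        # in favour of M2
prior_odds = 1.0                                # equal prior model probabilities
posterior_odds = prior_odds * bayes_factor      # posterior odds = prior odds × Bayes factor

print(bayes_factor, posterior_odds)
```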
See also
Empirical Bayes methods
Lindley's paradox
Marginal probability
Bayesian information criterion
References
Further reading
Charles S. Bos. "A comparison of marginal likelihood computation methods". In W. Härdle and B. Ronz, editors, COMPSTAT 2002: Proceedings in Computational Statistics, pp. 111–117. 2002. (Available as a preprint on SSRN 332860)
de Carvalho, Miguel; Page, Garritt; Barney, Bradley (2019). "On the geometry of Bayesian inference". Bayesian Analysis. 14 (4): 1013–1036. (Also available as a preprint.)
Lambert, Ben (2018). "The devil is in the denominator". A Student's Guide to Bayesian Statistics. Sage. pp. 109–120. ISBN 978-1-4739-1636-4.
The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay.