Bayesian hierarchical modeling
Bayesian hierarchical modeling is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method. The sub-models combine to form the hierarchical model, and Bayes' theorem is used to integrate them with the observed data and account for all the uncertainty that is present. The result of this integration is the posterior distribution, also known as the updated probability estimate, obtained as additional evidence about the prior distribution is acquired.
Frequentist statistics may yield conclusions seemingly incompatible with those offered by Bayesian statistics, because the Bayesian approach treats the parameters as random variables and uses subjective information to establish assumptions about these parameters. Since the approaches answer different questions, the formal results are not technically contradictory, but the two approaches disagree over which answer is relevant to a particular application. Bayesians argue that relevant information regarding decision-making and updating beliefs cannot be ignored and that hierarchical modeling has the potential to overrule classical methods in applications where respondents provide multiple observations. Moreover, the model has proven to be robust, with the posterior distribution being less sensitive to the more flexible hierarchical priors.
Hierarchical modeling is used when information is available at several different levels of observational units. For example, in epidemiological modeling to describe infection trajectories for multiple countries, the observational units are countries, and each country has its own temporal profile of daily infected cases. In decline curve analysis to describe the oil or gas production decline curves of multiple wells, the observational units are oil or gas wells in a reservoir region, and each well has its own temporal profile of oil or gas production rates (usually barrels per month). The data structure for hierarchical modeling retains the nested structure of the data. The hierarchical form of analysis and organization helps in understanding multiparameter problems and also plays an important role in developing computational strategies.
Philosophy
Statistical methods and models commonly involve multiple parameters that can be regarded as related or connected in such a way that the problem implies a dependence of the joint probability model for these parameters.
Individual degrees of belief, expressed in the form of probabilities, come with uncertainty. Moreover, these degrees of belief may change over time. As was stated by Professor José M. Bernardo and Professor Adrian F. Smith, “The actuality of the learning process consists in the evolution of individual and subjective beliefs about the reality.” These subjective probabilities are involved more directly in the mind than are physical probabilities. Hence, it is out of this need to update beliefs that Bayesians have formulated an alternative statistical model which takes into account the prior occurrence of a particular event.
Bayes' theorem
The assumed occurrence of a real-world event will typically modify preferences between certain options. This is done by modifying the degrees of belief attached, by an individual, to the events defining the options.
Suppose in a study of the effectiveness of cardiac treatments, with the patients in hospital $j$ having survival probability $\theta_j$, the survival probability will be updated with the occurrence of $y$, the event in which a controversial serum is created which, as believed by some, increases survival in cardiac patients.
In order to make updated probability statements about $\theta_j$, given the occurrence of event $y$, we must begin with a model providing a joint probability distribution for $\theta_j$ and $y$. This can be written as a product of the two distributions that are often referred to as the prior distribution $P(\theta)$ and the sampling distribution $P(y \mid \theta)$, respectively:
$$P(\theta, y) = P(\theta)\,P(y \mid \theta)$$
Using the basic property of conditional probability, the posterior distribution becomes:
$$P(\theta \mid y) = \frac{P(\theta, y)}{P(y)} = \frac{P(y \mid \theta)\,P(\theta)}{P(y)}$$
This equation, showing the relationship between the conditional probability and the individual events, is known as Bayes' theorem. This simple expression encapsulates the technical core of Bayesian inference, which aims to incorporate the updated belief, $P(\theta \mid y)$, in appropriate and solvable ways.
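As a minimal numerical illustration (an addition to the text, not from the original source), the following Python sketch applies Bayes' theorem on a grid for a single hospital's survival probability $\theta_j$; the Beta(2, 2) prior, the binomial sampling model, and the patient counts are all hypothetical choices.

```python
import numpy as np
from scipy import stats

# Hypothetical survival data for one hospital: 15 survivors out of 20 patients.
n_patients, n_survived = 20, 15

# Grid of candidate survival probabilities theta_j in (0, 1).
theta = np.linspace(0.001, 0.999, 999)
dtheta = theta[1] - theta[0]

# Prior P(theta): Beta(2, 2), a mild assumption favouring moderate probabilities.
prior = stats.beta.pdf(theta, 2, 2)

# Sampling distribution (likelihood) P(y | theta): Binomial(n_patients, theta).
likelihood = stats.binom.pmf(n_survived, n_patients, theta)

# Bayes' theorem: posterior is proportional to likelihood * prior; normalise on the grid.
unnormalised = likelihood * prior
posterior = unnormalised / (unnormalised.sum() * dtheta)

print("posterior mean of theta:", (theta * posterior).sum() * dtheta)
```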
Exchangeability
The usual starting point of a statistical analysis is the assumption that the $n$ values $y_1, y_2, \ldots, y_n$ are exchangeable. If no information – other than the data $y$ – is available to distinguish any of the $\theta_j$'s from any others, and no ordering or grouping of the parameters can be made, one must assume symmetry among the parameters in their prior distribution. This symmetry is represented probabilistically by exchangeability. Generally, it is useful and appropriate to model data from an exchangeable distribution as independently and identically distributed, given some unknown parameter vector $\theta$ with distribution $P(\theta)$.
Finite exchangeability
For a fixed number $n$, the set $y_1, y_2, \ldots, y_n$ is exchangeable if the joint probability $P(y_1, y_2, \ldots, y_n)$ is invariant under permutations of the indices. That is, for every permutation $\pi = (\pi_1, \pi_2, \ldots, \pi_n)$ of $(1, 2, \ldots, n)$,
$$P(y_1, y_2, \ldots, y_n) = P(y_{\pi_1}, y_{\pi_2}, \ldots, y_{\pi_n}).$$
The following is an example that is exchangeable but not independent and identically distributed (i.i.d.):
Consider an urn containing a red ball and a blue ball, with probability $\tfrac{1}{2}$ of drawing either. Balls are drawn without replacement; that is, after one ball is drawn from the $n$ balls, there are $n - 1$ remaining balls for the next draw.
Let
$$Y_i = \begin{cases} 1, & \text{if the } i\text{th ball is red}, \\ 0, & \text{otherwise}. \end{cases}$$
Since the probability of selecting a red ball on the first draw and a blue ball on the second draw is equal to the probability of selecting a blue ball on the first draw and a red ball on the second draw, both being $\tfrac{1}{2}$ (i.e. $P(y_1 = 1, y_2 = 0) = P(y_1 = 0, y_2 = 1) = \tfrac{1}{2}$), $y_1$ and $y_2$ are exchangeable.
But the probability of selecting a red ball on the second draw, given that the red ball has already been selected on the first draw, is 0, which is not equal to the unconditional probability of selecting a red ball on the second draw, $\tfrac{1}{2}$ (i.e. $P(y_2 = 1 \mid y_1 = 1) = 0 \neq P(y_2 = 1) = \tfrac{1}{2}$). Thus, $y_1$ and $y_2$ are not independent.
If $x_1, \ldots, x_n$ are independent and identically distributed, then they are exchangeable, but the converse is not necessarily true.
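The urn example above can be checked by simulation. The following Python sketch (an illustrative addition; the function name and trial count are arbitrary) estimates the joint and conditional probabilities and shows that the draws are exchangeable but not independent.

```python
import random

def draw_two_without_replacement(n_trials=100_000, seed=0):
    """Simulate drawing both balls from an urn holding one red (1) and one blue (0) ball."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_trials):
        urn = [1, 0]          # one red ball (1), one blue ball (0)
        rng.shuffle(urn)      # drawing without replacement = a random ordering
        draws.append((urn[0], urn[1]))
    return draws

draws = draw_two_without_replacement()
n = len(draws)

# Exchangeability: P(y1=1, y2=0) should match P(y1=0, y2=1), both about 1/2.
p_rb = sum(d == (1, 0) for d in draws) / n
p_br = sum(d == (0, 1) for d in draws) / n
print("P(y1=1, y2=0) =", p_rb, " P(y1=0, y2=1) =", p_br)

# Non-independence: P(y2=1 | y1=1) is 0, while P(y2=1) is about 1/2.
first_red = [d for d in draws if d[0] == 1]
p_cond = sum(d[1] == 1 for d in first_red) / len(first_red)
p_marg = sum(d[1] == 1 for d in draws) / n
print("P(y2=1 | y1=1) =", p_cond, " P(y2=1) =", p_marg)
```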
Infinite exchangeability
Infinite exchangeability is the property that every finite subset of an infinite sequence $y_1, y_2, \ldots$ is exchangeable. That is, for any $n$, the sequence $y_1, y_2, \ldots, y_n$ is exchangeable.
Hierarchical models
Components
Bayesian hierarchical modeling makes use of two important concepts in deriving the posterior distribution, namely:
Hyperparameters: parameters of the prior distribution
Hyperpriors: distributions of hyperparameters
Suppose a random variable $Y$ follows a normal distribution with parameter $\theta$ as the mean and 1 as the variance, that is $Y \mid \theta \sim N(\theta, 1)$. The tilde relation $\sim$ can be read as "has the distribution of" or "is distributed as". Suppose also that the parameter $\theta$ has a distribution given by a normal distribution with mean $\mu$ and variance 1, i.e. $\theta \mid \mu \sim N(\mu, 1)$. Furthermore, $\mu$ follows another distribution given, for example, by the standard normal distribution, $N(0, 1)$. The parameter $\mu$ is called the hyperparameter, while its distribution, $N(0, 1)$, is an example of a hyperprior distribution. The notation of the distribution of $Y$ changes as another parameter is added, i.e. $Y \mid \theta, \mu \sim N(\theta, 1)$. If there is another stage, say, $\mu$ follows another normal distribution with mean $\beta$ and variance $\epsilon$, meaning $\mu \sim N(\beta, \epsilon)$, then $\beta$ and $\epsilon$ can also be called hyperparameters while their distributions are hyperprior distributions as well.
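The three-level example above can be written as a generative simulation. The following NumPy sketch (an added illustration, not part of the original text) draws from the hyperprior, the prior, and the likelihood in turn; the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Hyperprior: mu ~ N(0, 1), the standard normal hyperprior on the hyperparameter mu.
mu = rng.normal(loc=0.0, scale=1.0, size=n_samples)

# Prior: theta | mu ~ N(mu, 1).
theta = rng.normal(loc=mu, scale=1.0)

# Likelihood: Y | theta ~ N(theta, 1).
y = rng.normal(loc=theta, scale=1.0)

# Marginally, Y ~ N(0, 3): the unit variances of the three levels add up.
print("sample mean of Y:", y.mean())      # close to 0
print("sample variance of Y:", y.var())   # close to 3
```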
Framework
Let $y_j$ be an observation and $\theta_j$ a parameter governing the data generating process for $y_j$. Assume further that the parameters $\theta_1, \theta_2, \ldots, \theta_j$ are generated exchangeably from a common population, with distribution governed by a hyperparameter $\phi$.
The Bayesian hierarchical model contains the following stages:
$$\text{Stage I: } y_j \mid \theta_j, \phi \sim P(y_j \mid \theta_j, \phi)$$
$$\text{Stage II: } \theta_j \mid \phi \sim P(\theta_j \mid \phi)$$
$$\text{Stage III: } \phi \sim P(\phi)$$
The likelihood, as seen in Stage I, is $P(y_j \mid \theta_j, \phi)$, with $P(\theta_j, \phi)$ as its prior distribution. Note that the likelihood depends on $\phi$ only through $\theta_j$.
The prior distribution from Stage I can be broken down into:
$$P(\theta_j, \phi) = P(\theta_j \mid \phi)\,P(\phi)$$
[from the definition of conditional probability]
Here $\phi$ is the hyperparameter, with hyperprior distribution $P(\phi)$.
Thus, the posterior distribution is proportional to:
$$P(\phi, \theta_j \mid y) \propto P(y_j \mid \theta_j, \phi)\,P(\theta_j, \phi)$$
[using Bayes' theorem]
$$P(\phi, \theta_j \mid y) \propto P(y_j \mid \theta_j)\,P(\theta_j \mid \phi)\,P(\phi)$$
Example
To further illustrate this, consider the following example:
A teacher wants to estimate how well a student did on the SAT. The teacher uses information on the student's high school grades and current grade point average (GPA) to come up with an estimate. The student's current GPA, denoted by $Y$, has a likelihood given by some probability function with parameter $\theta$, i.e. $Y \mid \theta \sim P(Y \mid \theta)$. This parameter $\theta$ is the SAT score of the student. The SAT score is viewed as a sample coming from a common population distribution indexed by another parameter $\phi$, which is the high school grade of the student (freshman, sophomore, junior or senior). That is, $\theta \mid \phi \sim P(\theta \mid \phi)$. Moreover, the hyperparameter $\phi$ follows its own distribution given by $P(\phi)$, a hyperprior.
To solve for the SAT score given information on the GPA,
$$P(\theta, \phi \mid Y) \propto P(Y \mid \theta, \phi)\,P(\theta, \phi)$$
$$P(\theta, \phi \mid Y) \propto P(Y \mid \theta)\,P(\theta \mid \phi)\,P(\phi)$$
All information in the problem is used to solve for the posterior distribution. Instead of relying only on the prior distribution and the likelihood function, the use of hyperpriors supplies additional information, leading to more accurate beliefs about the behavior of the parameter.
2-stage hierarchical model
In general, the joint posterior distribution of interest in 2-stage hierarchical models is:
$$P(\theta, \phi \mid Y) = \frac{P(Y \mid \theta, \phi)\,P(\theta, \phi)}{P(Y)} = \frac{P(Y \mid \theta)\,P(\theta \mid \phi)\,P(\phi)}{P(Y)}$$
$$P(\theta, \phi \mid Y) \propto P(Y \mid \theta)\,P(\theta \mid \phi)\,P(\phi)$$
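As a concrete illustration of this joint posterior (an added sketch, not part of the original article), the following NumPy code evaluates the unnormalised density $P(Y \mid \theta)\,P(\theta \mid \phi)\,P(\phi)$ on a grid for the normal hierarchy used earlier ($Y \mid \theta \sim N(\theta, 1)$, $\theta \mid \phi \sim N(\phi, 1)$, $\phi \sim N(0, 1)$); the observed value is hypothetical.

```python
import numpy as np
from scipy import stats

y_obs = 1.2                      # a single hypothetical observation Y

# Grids for the parameter theta and the hyperparameter phi.
theta = np.linspace(-5, 5, 401)
phi = np.linspace(-5, 5, 401)
T, F = np.meshgrid(theta, phi, indexing="ij")

# Unnormalised joint posterior: P(Y | theta) * P(theta | phi) * P(phi).
joint = (
    stats.norm.pdf(y_obs, loc=T, scale=1.0)   # likelihood  P(Y | theta)
    * stats.norm.pdf(T, loc=F, scale=1.0)     # prior       P(theta | phi)
    * stats.norm.pdf(F, loc=0.0, scale=1.0)   # hyperprior  P(phi)
)

# Normalise over the grid and report posterior means.
joint /= joint.sum()
print("E[theta | Y] =", (T * joint).sum())    # shrunk from y_obs toward 0
print("E[phi   | Y] =", (F * joint).sum())
```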
3-stage hierarchical model
For 3-stage hierarchical models, the posterior distribution is given by:
$$P(\theta, \phi, X \mid Y) = \frac{P(Y \mid \theta)\,P(\theta \mid \phi)\,P(\phi \mid X)\,P(X)}{P(Y)}$$
$$P(\theta, \phi, X \mid Y) \propto P(Y \mid \theta)\,P(\theta \mid \phi)\,P(\phi \mid X)\,P(X)$$
Bayesian nonlinear mixed-effects model
The framework of Bayesian hierarchical modeling is frequently used in diverse applications. In particular, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model can be represented as the following three-stage hierarchy:
Stage 1: Individual-Level Model
$$y_{ij} = f(t_{ij}; \theta_{1i}, \theta_{2i}, \ldots, \theta_{li}, \ldots, \theta_{Ki}) + \epsilon_{ij}, \quad \epsilon_{ij} \sim N(0, \sigma^2), \quad i = 1, \ldots, N, \; j = 1, \ldots, M_i.$$
Stage 2: Population Model
$$\theta_{li} = \alpha_l + \sum_{b=1}^{P} \beta_{lb} x_{ib} + \eta_{li}, \quad \eta_{li} \sim N(0, \omega_l^2), \quad i = 1, \ldots, N, \; l = 1, \ldots, K.$$
Stage 3: Prior
$$\sigma^2 \sim \pi(\sigma^2), \quad \alpha_l \sim \pi(\alpha_l), \quad (\beta_{l1}, \ldots, \beta_{lb}, \ldots, \beta_{lP}) \sim \pi(\beta_{l1}, \ldots, \beta_{lb}, \ldots, \beta_{lP}), \quad \omega_l^2 \sim \pi(\omega_l^2), \quad l = 1, \ldots, K.$$
Here, $y_{ij}$ denotes the continuous response of the $i$-th subject at the time point $t_{ij}$, and $x_{ib}$ is the $b$-th covariate of the $i$-th subject. Parameters involved in the model are written in Greek letters. $f(t; \theta_1, \ldots, \theta_K)$ is a known function parameterized by the $K$-dimensional vector $(\theta_1, \ldots, \theta_K)$. Typically, $f$ is a nonlinear function that describes the temporal trajectory of individuals. In the model, $\epsilon_{ij}$ and $\eta_{li}$ describe within-individual variability and between-individual variability, respectively. If Stage 3: Prior is not considered, then the model reduces to a frequentist nonlinear mixed-effects model.
A central task in the application of Bayesian nonlinear mixed-effects models is to evaluate the posterior density:
$$\pi\left(\{\theta_{li}\}_{i=1,l=1}^{N,K}, \sigma^2, \{\alpha_l\}_{l=1}^{K}, \{\beta_{lb}\}_{l=1,b=1}^{K,P}, \{\omega_l\}_{l=1}^{K} \,\middle|\, \{y_{ij}\}_{i=1,j=1}^{N,M_i}\right)$$
$$\propto \pi\left(\{y_{ij}\}_{i=1,j=1}^{N,M_i}, \{\theta_{li}\}_{i=1,l=1}^{N,K}, \sigma^2, \{\alpha_l\}_{l=1}^{K}, \{\beta_{lb}\}_{l=1,b=1}^{K,P}, \{\omega_l\}_{l=1}^{K}\right)$$
$$= \underbrace{\pi\left(\{y_{ij}\}_{i=1,j=1}^{N,M_i} \,\middle|\, \{\theta_{li}\}_{i=1,l=1}^{N,K}, \sigma^2\right)}_{\text{Stage 1: Individual-Level Model}} \times \underbrace{\pi\left(\{\theta_{li}\}_{i=1,l=1}^{N,K} \,\middle|\, \{\alpha_l\}_{l=1}^{K}, \{\beta_{lb}\}_{l=1,b=1}^{K,P}, \{\omega_l\}_{l=1}^{K}\right)}_{\text{Stage 2: Population Model}} \times \underbrace{p\left(\sigma^2, \{\alpha_l\}_{l=1}^{K}, \{\beta_{lb}\}_{l=1,b=1}^{K,P}, \{\omega_l\}_{l=1}^{K}\right)}_{\text{Stage 3: Prior}}$$
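As an added sketch of how this factorisation can be evaluated in practice (the function names, the exponential-decline trajectory, the dimensions, and the prior choices are all assumptions made for illustration, not taken from the original text), the following Python function computes the log of the unnormalised posterior for a two-parameter individual trajectory $f(t; \theta_1, \theta_2) = \theta_1 e^{-\theta_2 t}$, with weakly informative normal priors placed on the fixed effects and on the log-variances.

```python
import numpy as np
from scipy import stats

def f(t, theta1, theta2):
    """Hypothetical nonlinear individual trajectory: exponential decline."""
    return theta1 * np.exp(-theta2 * t)

def log_unnormalised_posterior(theta, alpha, beta, log_sigma2, log_omega2, y, t, x):
    """Log of: Stage 1 likelihood x Stage 2 population model x Stage 3 priors.

    Assumed shapes: theta (N, K), alpha (K,), beta (K, P), y and t (N, M),
    x (N, P), with K = 2 trajectory parameters.
    """
    sigma2, omega2 = np.exp(log_sigma2), np.exp(log_omega2)   # omega2 has shape (K,)

    # Stage 1: y_ij | theta_i, sigma^2 ~ N(f(t_ij; theta_i), sigma^2)
    mean_y = f(t, theta[:, [0]], theta[:, [1]])
    stage1 = stats.norm.logpdf(y, loc=mean_y, scale=np.sqrt(sigma2)).sum()

    # Stage 2: theta_li | alpha, beta, omega^2 ~ N(alpha_l + sum_b beta_lb x_ib, omega_l^2)
    mean_theta = alpha + x @ beta.T                            # shape (N, K)
    stage2 = stats.norm.logpdf(theta, loc=mean_theta, scale=np.sqrt(omega2)).sum()

    # Stage 3: weakly informative N(0, 10^2) priors on the fixed effects and log-variances
    # (an assumption made for this sketch).
    stage3 = (stats.norm.logpdf(alpha, scale=10.0).sum()
              + stats.norm.logpdf(beta, scale=10.0).sum()
              + stats.norm.logpdf(log_sigma2, scale=10.0)
              + stats.norm.logpdf(log_omega2, scale=10.0).sum())

    return stage1 + stage2 + stage3

# Hypothetical dimensions: N=3 subjects, M=4 time points, P=1 covariate, K=2.
rng = np.random.default_rng(0)
t = np.tile(np.arange(4.0), (3, 1))
y = rng.normal(size=(3, 4))
x = rng.normal(size=(3, 1))
print(log_unnormalised_posterior(theta=np.ones((3, 2)), alpha=np.zeros(2),
                                 beta=np.zeros((2, 1)), log_sigma2=0.0,
                                 log_omega2=np.zeros(2), y=y, t=t, x=x))
```

In a full analysis this log density would be passed to an MCMC or optimisation routine rather than evaluated at a single point.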
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves literature review, defining a problem, and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function $f$; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.