The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, both quantities only coincide under simple hypotheses (e.g., two specific parameter values). Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.
Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses. Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested. A widely used approach is the method proposed by Chib (1995). Chib and Jeliazkov (2001) later extended this method to handle cases where Metropolis-Hastings samplers are used. For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative. Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is known as the Bayesian information criterion (BIC); in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite.
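For instance, the BIC route amounts to approximating each marginal likelihood by exp(-BIC/2), so that the Bayes factor is roughly exp((BIC2 - BIC1)/2). The following Python sketch illustrates this large-sample approximation on hypothetical Gaussian data with two nested models for the mean; the data, the fixed scale, and the parameter counts are illustrative assumptions, not part of the methods cited above.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 100 observations; both models assume a known scale of 1,
# so only the mean can be a free parameter.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=100)
n = len(x)
sigma = 1.0

# M1: mean fixed at 0 (0 free parameters); M2: mean estimated by MLE (1 free parameter).
loglik_m1 = stats.norm.logpdf(x, loc=0.0, scale=sigma).sum()
loglik_m2 = stats.norm.logpdf(x, loc=x.mean(), scale=sigma).sum()

# BIC = k * ln(n) - 2 * ln(L_hat)
bic_m1 = 0 * np.log(n) - 2 * loglik_m1
bic_m2 = 1 * np.log(n) - 2 * loglik_m2

# Large-sample approximation: Pr(D|M_i) is roughly exp(-BIC_i / 2), hence
# K = Pr(D|M1) / Pr(D|M2) is roughly exp((BIC_2 - BIC_1) / 2).
approx_K = np.exp((bic_m2 - bic_m1) / 2)
print(f"approximate Bayes factor K (M1 over M2): {approx_K:.3g}")
```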
Definition
The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters.
The posterior probability $\Pr(M \mid D)$ of a model M given data D is given by Bayes' theorem:
$$\Pr(M \mid D) = \frac{\Pr(D \mid M)\,\Pr(M)}{\Pr(D)}.$$
The key data-dependent term $\Pr(D \mid M)$ represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.
Given a model selection problem in which one wishes to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors $\theta_1$ and $\theta_2$, is assessed by the Bayes factor K given by
$$K = \frac{\Pr(D \mid M_1)}{\Pr(D \mid M_2)} = \frac{\int \Pr(\theta_1 \mid M_1)\,\Pr(D \mid \theta_1, M_1)\,d\theta_1}{\int \Pr(\theta_2 \mid M_2)\,\Pr(D \mid \theta_2, M_2)\,d\theta_2} = \frac{\Pr(M_1 \mid D)\,\Pr(D)/\Pr(M_1)}{\Pr(M_2 \mid D)\,\Pr(D)/\Pr(M_2)} = \frac{\Pr(M_1 \mid D)}{\Pr(M_2 \mid D)}\,\frac{\Pr(M_2)}{\Pr(M_1)}.$$
When the two models have equal prior probability, so that $\Pr(M_1) = \Pr(M_2)$
, the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If, instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of using Bayes factors is that they automatically, and quite naturally, include a penalty for including too much model structure, and so guard against overfitting. For models where an explicit version of the likelihood is not available or is too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,
with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.
Other approaches are:
to treat model comparison as a decision problem, computing the expected value or cost of each model choice;
to use minimum message length (MML);
to use minimum description length (MDL).
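Returning to the integral definition of K above, the following Python sketch computes the two marginal likelihoods by numerical integration for the binomial data of the worked example below (115 successes in 200 trials); the two Beta priors on q are purely illustrative assumptions, chosen to show that K depends on the priors, not just on the data.

```python
from math import comb

from scipy.integrate import quad
from scipy.stats import beta

# Data of the worked example below: 115 successes in 200 Bernoulli trials.
n, k = 200, 115

def likelihood(q):
    """Binomial likelihood of the observed data as a function of q."""
    return comb(n, k) * q**k * (1 - q)**(n - k)

def marginal_likelihood(prior_pdf):
    """Pr(D | M) = integral over [0, 1] of prior(q) * likelihood(q) dq."""
    value, _ = quad(lambda q: prior_pdf(q) * likelihood(q), 0, 1)
    return value

# Two models that differ only in their (illustrative) prior on q.
pr_d_m1 = marginal_likelihood(beta(20, 20).pdf)  # prior concentrated near q = 1/2
pr_d_m2 = marginal_likelihood(beta(1, 1).pdf)    # uniform prior on [0, 1]

K = pr_d_m1 / pr_d_m2
print(f"Pr(D|M1) = {pr_d_m1:.5f}, Pr(D|M2) = {pr_d_m2:.5f}, K = {K:.2f}")
```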
Interpretation
A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. The fact that a Bayes factor can produce evidence for and not just against a null hypothesis is one of the key advantages of this analysis method.
Harold Jeffreys gave a scale (Jeffreys' scale) for the interpretation of $K$:

K | dHart (decihartleys) | bits | Strength of evidence
< 10^0 (< 1) | < 0 | < 0 | Negative (supports M2)
10^0 to 10^(1/2) (1 to about 3.2) | 0 to 5 | 0 to 1.7 | Barely worth mentioning
10^(1/2) to 10^1 (about 3.2 to 10) | 5 to 10 | 1.7 to 3.3 | Substantial
10^1 to 10^(3/2) (10 to about 31.6) | 10 to 15 | 3.3 to 5.0 | Strong
10^(3/2) to 10^2 (about 31.6 to 100) | 15 to 20 | 5.0 to 6.6 | Very strong
> 10^2 (> 100) | > 20 | > 6.6 | Decisive

The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. The table continues in the other direction, so that, for example, $K \leq 10^{-2}$ is decisive evidence for $M_2$.
An alternative table, widely cited, is provided by Kass and Raftery (1995):

2 ln K | K | Strength of evidence
0 to 2 | 1 to 3 | Not worth more than a bare mention
2 to 6 | 3 to 20 | Positive
6 to 10 | 20 to 150 | Strong
> 10 | > 150 | Very strong
According to I. J. Good, the just-noticeable difference for humans in everyday life, when it comes to a change in degree of belief in a hypothesis, is a factor of about 1.3, or 1 deciban, or 1/3 of a bit, or a shift from 1:1 to 5:4 in odds.
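As a small illustration, a helper function along the following lines (the function name is hypothetical; the thresholds simply encode the Jeffreys scale above) can attach a verbal label to a computed Bayes factor:

```python
def jeffreys_label(K: float) -> str:
    """Attach Jeffreys' verbal category to a Bayes factor K (support for M1 over M2)."""
    if K < 1:
        return "negative (supports M2)"
    if K < 10 ** 0.5:   # up to about 3.2
        return "barely worth mentioning"
    if K < 10:
        return "substantial"
    if K < 10 ** 1.5:   # up to about 31.6
        return "strong"
    if K < 100:
        return "very strong"
    return "decisive"

print(jeffreys_label(1.2))   # barely worth mentioning
print(jeffreys_label(150))   # decisive
```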
Example
Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = 1⁄2, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:
$${200 \choose 115}\,q^{115}(1-q)^{85}.$$
Thus we have for M1
$$P(X=115 \mid M_1) = {200 \choose 115}\left(\frac{1}{2}\right)^{200} \approx 0.006,$$
whereas for M2 we have
$$P(X=115 \mid M_2) = \int_0^1 {200 \choose 115}\,q^{115}(1-q)^{85}\,dq = \frac{1}{201} \approx 0.005.$$
The ratio is then about 1.2, which is "barely worth mentioning" even though it points very slightly towards M1.
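These numbers can be reproduced directly; a minimal Python check, using the closed-form value 1/(n+1) for the uniform-prior integral, is:

```python
from math import comb

n, successes = 200, 115

# Pr(X = 115 | M1): binomial probability with q fixed at 1/2.
p_m1 = comb(n, successes) * 0.5 ** n

# Pr(X = 115 | M2): with a uniform prior on q the integral has the closed form
# 1 / (n + 1), since the integral of C(n,k) q^k (1-q)^(n-k) over [0, 1] is 1 / (n + 1).
p_m2 = 1 / (n + 1)

print(f"Pr(X=115|M1) = {p_m1:.4f}")            # about 0.0060
print(f"Pr(X=115|M2) = {p_m2:.4f}")            # about 0.0050
print(f"Bayes factor K = {p_m1 / p_m2:.2f}")   # about 1.2
```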
A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = 1⁄2 is 0.02, and the corresponding two-tailed probability of a result as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this an extreme result. Note, however, that a non-uniform prior (for example, one that reflects the fact that you expect the numbers of successes and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
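The quoted tail probabilities can be verified with a short sketch using SciPy's binomial distribution, assuming the two-tailed value is obtained by symmetry of Binomial(200, 1/2):

```python
from scipy.stats import binom

n, q0 = 200, 0.5

# One-tailed: probability of 115 or more successes under q = 1/2.
p_one_tail = binom.sf(114, n, q0)                          # about 0.02

# Two-tailed: results at least as extreme in either direction; by symmetry this
# adds the matching lower tail (85 or fewer successes).
p_two_tail = binom.sf(114, n, q0) + binom.cdf(85, n, q0)   # about 0.04

print(f"one-tailed p = {p_one_tail:.3f}, two-tailed p = {p_two_tail:.3f}")
```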
A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely
$\hat{q} = \tfrac{115}{200} = 0.575$, whence
$$P(X=115 \mid M_2) = {200 \choose 115}\,\hat{q}^{115}(1-\hat{q})^{85} \approx 0.06$$
(rather than averaging over all possible q). That gives a likelihood ratio of 0.1 and points towards M2.
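A quick check of this likelihood ratio, reproducing the numbers above (purely illustrative code):

```python
from math import comb

n, k = 200, 115
q_hat = k / n   # maximum likelihood estimate, 0.575

lik_m1 = comb(n, k) * 0.5 ** n                                  # about 0.0060
lik_m2_max = comb(n, k) * q_hat ** k * (1 - q_hat) ** (n - k)   # about 0.0570

print(f"likelihood ratio = {lik_m1 / lik_m2_max:.2f}")          # about 0.10
```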
M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.
On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its Akaike information criterion (AIC) value is
$2 \cdot 0 - 2\ln(0.005956) \approx 10.2467$
. Model M2 has 1 parameter, and so its AIC value is
$2 \cdot 1 - 2\ln(0.056991) \approx 7.7297$
. Hence M1 is about
$\exp\left(\frac{7.7297 - 10.2467}{2}\right) \approx 0.284$
times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
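The AIC arithmetic can be reproduced as follows, plugging in the two maximized likelihood values from above (a simple check, not a general model-selection routine):

```python
from math import exp, log

# Maximized likelihoods of the observed data under each model (computed above).
lik_m1 = 0.005956   # q fixed at 1/2, zero free parameters
lik_m2 = 0.056991   # q at its MLE 0.575, one free parameter

aic_m1 = 2 * 0 - 2 * log(lik_m1)   # about 10.2467
aic_m2 = 2 * 1 - 2 * log(lik_m2)   # about 7.7297

# AIC-based relative likelihood of M1 with respect to M2.
rel_lik = exp((aic_m2 - aic_m1) / 2)
print(f"AIC(M1) = {aic_m1:.4f}, AIC(M2) = {aic_m2:.4f}, relative likelihood of M1 = {rel_lik:.3f}")
```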
See also
Akaike information criterion
Approximate Bayesian computation
Bayesian information criterion
Deviance information criterion
Lindley's paradox
Minimum message length
Model selection
E-Value
Statistical ratios
Odds ratio
Relative risk
References
Further reading
Bernardo, J.; Smith, A. F. M. (1994). Bayesian Theory. John Wiley. ISBN 0-471-92416-4.
Denison, D. G. T.; Holmes, C. C.; Mallick, B. K.; Smith, A. F. M. (2002). Bayesian Methods for Nonlinear Classification and Regression. John Wiley. ISBN 0-471-49036-9.
Dienes, Z. (2019). "How do I know what my theory predicts?" Advances in Methods and Practices in Psychological Science. doi:10.1177/2515245919876960.
Duda, Richard O.; Hart, Peter E.; Stork, David G. (2000). "Section 9.6.5". Pattern classification (2nd ed.). Wiley. pp. 487–489. ISBN 0-471-05669-3.
Gelman, A.; Carlin, J.; Stern, H.; Rubin, D. (1995). Bayesian Data Analysis. London: Chapman & Hall. ISBN 0-412-03991-5.
Jaynes, E. T. (1994). Probability Theory: The Logic of Science, chapter 24.
Kadane, Joseph B.; Dickey, James M. (1980). "Bayesian Decision Theory and the Simplification of Models". In Kmenta, Jan; Ramsey, James B. (eds.). Evaluation of Econometric Models. New York: Academic Press. pp. 245–268. ISBN 0-12-416550-8.
Lee, P. M. (2012). Bayesian Statistics: an introduction. Wiley. ISBN 9781118332573.
Richard, Mark; Vecer, Jan (2021). "Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis". Risks. 9 (2): 31. doi:10.3390/risks9020031. hdl:10419/258120.
Winkler, Robert (2003). Introduction to Bayesian Inference and Decision (2nd ed.). Probabilistic. ISBN 0-9647938-4-9.
External links
BayesFactor —an R package for computing Bayes factors in common research designs
Bayes factor calculator — Online calculator for informed Bayes factors
Bayes Factor Calculators Archived 2015-05-07 at the Wayback Machine —web-based version of much of the BayesFactor package