Bernoulli distribution
In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability $p$ and failure/no/false/zero with probability $q$. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and $p$ would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and $p$ would be the probability of tails). In particular, unfair coins would have $p \neq 1/2$.
The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so n would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.
Properties
If $X$ is a random variable with a Bernoulli distribution, then:

$$\Pr(X=1) = p = 1 - \Pr(X=0) = 1 - q.$$
The probability mass function $f$ of this distribution, over possible outcomes $k$, is

$$f(k;p) = \begin{cases} p & \text{if } k = 1, \\ q = 1-p & \text{if } k = 0. \end{cases}$$
This can also be expressed as

$$f(k;p) = p^{k}(1-p)^{1-k} \quad \text{for } k \in \{0,1\}$$
or as

$$f(k;p) = pk + (1-p)(1-k) \quad \text{for } k \in \{0,1\}.$$
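As a quick illustration (a minimal sketch, not part of the original article; the function names and the value $p = 0.3$ are arbitrary), the case-split definition and the two closed forms above can be checked against each other in Python:

```python
# Minimal sketch (not from the source): the Bernoulli PMF written as a
# case split and in the two closed forms above; all three agree on k in {0, 1}.

def bernoulli_pmf_cases(k: int, p: float) -> float:
    """Case-split definition: p if k == 1, q = 1 - p if k == 0."""
    if k == 1:
        return p
    if k == 0:
        return 1 - p
    raise ValueError("k must be 0 or 1")

def bernoulli_pmf_power(k: int, p: float) -> float:
    """Closed form p^k * (1 - p)^(1 - k)."""
    return p ** k * (1 - p) ** (1 - k)

def bernoulli_pmf_linear(k: int, p: float) -> float:
    """Closed form p*k + (1 - p)*(1 - k)."""
    return p * k + (1 - p) * (1 - k)

p = 0.3  # illustrative probability
for k in (0, 1):
    assert (bernoulli_pmf_cases(k, p)
            == bernoulli_pmf_power(k, p)
            == bernoulli_pmf_linear(k, p))
```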
The Bernoulli distribution is a special case of the binomial distribution with $n = 1$.
The kurtosis goes to infinity for high and low values of $p$, but for $p = 1/2$ the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution.
The Bernoulli distributions for $0 \leq p \leq 1$ form an exponential family.
The maximum likelihood estimator of $p$ based on a random sample is the sample mean.
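For instance, a minimal sketch of this estimator (the true value $p = 0.7$, the seed, and the sample size are illustrative choices, not from the source):

```python
# Sketch: the ML estimate of p from an i.i.d. Bernoulli sample is the
# sample mean; with many draws it lands near the true parameter.
import random

random.seed(0)                      # fixed seed for reproducibility
p_true = 0.7                        # illustrative choice of parameter
sample = [1 if random.random() < p_true else 0 for _ in range(100_000)]
p_hat = sum(sample) / len(sample)   # sample mean = MLE of p
print(f"MLE estimate: {p_hat:.4f} (true p = {p_true})")
```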
Mean
The expected value of a Bernoulli random variable $X$ is

$$\operatorname{E}[X] = p.$$
This is due to the fact that for a Bernoulli distributed random variable $X$ with $\Pr(X=1) = p$ and $\Pr(X=0) = q$ we find

$$\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q \cdot 0 = p.$$
Variance
The variance of a Bernoulli distributed $X$ is

$$\operatorname{Var}[X] = pq = p(1-p).$$
We first find

$$\operatorname{E}[X^{2}] = \Pr(X=1)\cdot 1^{2} + \Pr(X=0)\cdot 0^{2} = p \cdot 1^{2} + q \cdot 0^{2} = p = \operatorname{E}[X].$$
From this follows

$$\operatorname{Var}[X] = \operatorname{E}[X^{2}] - \operatorname{E}[X]^{2} = \operatorname{E}[X] - \operatorname{E}[X]^{2} = p - p^{2} = p(1-p) = pq.$$
With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside $[0, 1/4]$: the function $p(1-p)$ vanishes at $p = 0$ and $p = 1$ and attains its maximum value of $1/4$ at $p = 1/2$.
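As a hedged numerical check (the probabilities, seed, and sample size are illustrative, not from the source), simulated draws reproduce $\operatorname{E}[X] = p$ and $\operatorname{Var}[X] = p(1-p) \leq 1/4$:

```python
# Sketch: empirical mean and variance of simulated Bernoulli draws,
# compared with E[X] = p and Var[X] = p(1 - p), which never exceeds 1/4.
import random

random.seed(1)
for p in (0.1, 0.5, 0.9):           # illustrative probabilities
    draws = [1 if random.random() < p else 0 for _ in range(200_000)]
    mean = sum(draws) / len(draws)
    var = sum((x - mean) ** 2 for x in draws) / len(draws)
    assert var <= 0.25 + 1e-9       # variance stays inside [0, 1/4]
    print(f"p={p}: mean~{mean:.3f} (expect {p}), var~{var:.3f} (expect {p * (1 - p):.3f})")
```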
Skewness
The skewness is

$$\frac{q-p}{\sqrt{pq}} = \frac{1-2p}{\sqrt{pq}}.$$

When we take the standardized Bernoulli distributed random variable

$$\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}$$

we find that this random variable attains $\frac{q}{\sqrt{pq}}$ with probability $p$ and attains $-\frac{p}{\sqrt{pq}}$ with probability $q$. Thus we get

$$\begin{aligned}
\gamma_{1} &= \operatorname{E}\left[\left(\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^{3}\right] \\
&= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^{3} + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^{3} \\
&= \frac{1}{\sqrt{pq}^{3}}\left(pq^{3} - qp^{3}\right) \\
&= \frac{pq}{\sqrt{pq}^{3}}(q^{2} - p^{2}) \\
&= \frac{(1-p)^{2} - p^{2}}{\sqrt{pq}} \\
&= \frac{1-2p}{\sqrt{pq}} = \frac{q-p}{\sqrt{pq}}.
\end{aligned}$$
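A short sketch confirming this closed form (the chosen probabilities are illustrative): the third standardized moment computed directly over the two outcomes matches $(q-p)/\sqrt{pq}$:

```python
# Sketch: the third standardized moment computed directly over the two
# outcomes agrees with the closed-form skewness (q - p) / sqrt(pq).
from math import sqrt

for p in (0.2, 0.5, 0.8):           # illustrative probabilities
    q = 1 - p
    closed_form = (q - p) / sqrt(p * q)
    # E[((X - p) / sqrt(pq))^3]: X = 1 with probability p, X = 0 with q
    direct = p * ((1 - p) / sqrt(p * q)) ** 3 + q * ((0 - p) / sqrt(p * q)) ** 3
    assert abs(closed_form - direct) < 1e-9
    print(f"p={p}: skewness = {closed_form:+.4f}")
```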
Higher moments and cumulants
The raw moments are all equal due to the fact that $1^{k} = 1$ and $0^{k} = 0$:

$$\operatorname{E}[X^{k}] = \Pr(X=1)\cdot 1^{k} + \Pr(X=0)\cdot 0^{k} = p \cdot 1 + q \cdot 0 = p = \operatorname{E}[X].$$
The central moment of order $k$ is given by

$$\mu_{k} = (1-p)(-p)^{k} + p(1-p)^{k}.$$
The first six central moments are

$$\begin{aligned}
\mu_{1} &= 0, \\
\mu_{2} &= p(1-p), \\
\mu_{3} &= p(1-p)(1-2p), \\
\mu_{4} &= p(1-p)(1-3p(1-p)), \\
\mu_{5} &= p(1-p)(1-2p)(1-2p(1-p)), \\
\mu_{6} &= p(1-p)(1-5p(1-p)(1-p(1-p))).
\end{aligned}$$
The higher central moments can be expressed more compactly in terms of $\mu_{2}$ and $\mu_{3}$:

$$\begin{aligned}
\mu_{4} &= \mu_{2}(1-3\mu_{2}), \\
\mu_{5} &= \mu_{3}(1-2\mu_{2}), \\
\mu_{6} &= \mu_{2}(1-5\mu_{2}(1-\mu_{2})).
\end{aligned}$$
The first six cumulants are

$$\begin{aligned}
\kappa_{1} &= p, \\
\kappa_{2} &= \mu_{2}, \\
\kappa_{3} &= \mu_{3}, \\
\kappa_{4} &= \mu_{2}(1-6\mu_{2}), \\
\kappa_{5} &= \mu_{3}(1-12\mu_{2}), \\
\kappa_{6} &= \mu_{2}(1-30\mu_{2}(1-4\mu_{2})).
\end{aligned}$$
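As a sketch (illustrative $p$; the helper `mu` is not from the source), the general central-moment formula can be checked against the factored forms above for $\mu_{2}$, $\mu_{3}$, and $\mu_{4}$:

```python
# Sketch: the general formula mu_k = (1 - p)(-p)^k + p(1 - p)^k checked
# against the factored forms listed above for k = 2, 3, 4 (illustrative p).
p = 0.3
q = 1 - p

def mu(k: int) -> float:
    """Central moment of order k from the general two-term formula."""
    return q * (-p) ** k + p * q ** k

assert abs(mu(2) - p * q) < 1e-12
assert abs(mu(3) - p * q * (1 - 2 * p)) < 1e-12
assert abs(mu(4) - p * q * (1 - 3 * p * q)) < 1e-12
print(mu(2), mu(3), mu(4))
```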
Entropy and Fisher's Information
Entropy
Entropy is a measure of uncertainty or randomness in a probability distribution. For a Bernoulli random variable $X$ with success probability $p$ and failure probability $q = 1 - p$, the entropy $H(X)$ is defined as:
$$\begin{aligned}
H(X) &= \operatorname{E}_{p}\ln\left(\frac{1}{P(X)}\right) = -[P(X=0)\ln P(X=0) + P(X=1)\ln P(X=1)] \\
H(X) &= -(q\ln q + p\ln p), \quad q = P(X=0),\ p = P(X=1)
\end{aligned}$$
The entropy is maximized when $p = 0.5$, indicating the highest level of uncertainty when both outcomes are equally likely. The entropy is zero when $p = 0$ or $p = 1$, where one outcome is certain.
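A minimal sketch of this entropy in Python (in nats, with the usual convention $0 \ln 0 = 0$; the sample values of $p$ are illustrative):

```python
# Sketch: Bernoulli entropy H = -(q ln q + p ln p) in nats, using the
# convention 0 * ln 0 = 0; maximal at p = 0.5, zero at p = 0 or p = 1.
from math import log

def bernoulli_entropy(p: float) -> float:
    terms = [x * log(x) for x in (p, 1 - p) if x > 0]  # skip 0 * log(0)
    return -sum(terms)

for p in (0.0, 0.1, 0.5, 0.9, 1.0):  # illustrative values
    print(f"p={p}: H = {bernoulli_entropy(p):.4f} nats")
# Largest at p = 0.5 (ln 2 ~ 0.6931); zero when one outcome is certain.
```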
Fisher's Information
Fisher information measures the amount of information that an observable random variable $X$ carries about an unknown parameter $p$ upon which the probability of $X$ depends. For the Bernoulli distribution, the Fisher information with respect to the parameter $p$ is given by:
$$I(p) = \frac{1}{pq}$$
Proof:
The likelihood function for a Bernoulli random variable $X$ is:

$$L(p;X) = p^{X}(1-p)^{1-X}$$
This represents the probability of observing $X$ given the parameter $p$.
The log-likelihood function is:

$$\ln L(p;X) = X\ln p + (1-X)\ln(1-p)$$
The score function (the first derivative of the log-likelihood with respect to $p$) is:

$$\frac{\partial}{\partial p}\ln L(p;X) = \frac{X}{p} - \frac{1-X}{1-p}$$
The second derivative of the log-likelihood function is:

$$\frac{\partial^{2}}{\partial p^{2}}\ln L(p;X) = -\frac{X}{p^{2}} - \frac{1-X}{(1-p)^{2}}$$
Fisher information is calculated as the negative expected value of the second derivative of the log-likelihood:

$$I(p) = -\operatorname{E}\left[\frac{\partial^{2}}{\partial p^{2}}\ln L(p;X)\right] = -\left(-\frac{p}{p^{2}} - \frac{1-p}{(1-p)^{2}}\right) = \frac{1}{p(1-p)} = \frac{1}{pq}$$
It is minimized when $p = 0.5$, where the variance $pq$ is largest: when both outcomes are equally likely, a single observation is least informative about the parameter $p$. As $p$ approaches $0$ or $1$, the Fisher information $1/(pq)$ grows without bound.
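As a hedged check (illustrative probabilities), the defining expectation can be evaluated exactly over the two outcomes and compared with $1/(pq)$:

```python
# Sketch: I(p) = 1/(pq) recovered from the defining expectation
# -E[d^2/dp^2 ln L(p; X)], evaluated exactly over the two outcomes.
for p in (0.1, 0.5, 0.9):           # illustrative probabilities
    q = 1 - p
    d2_at_1 = -1 / p ** 2           # second derivative when X = 1
    d2_at_0 = -1 / q ** 2           # second derivative when X = 0
    info = -(p * d2_at_1 + q * d2_at_0)
    assert abs(info - 1 / (p * q)) < 1e-9
    print(f"p={p}: I(p) = {info:.3f}")
# Smallest at p = 0.5 (I = 4); grows without bound as p -> 0 or p -> 1.
```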
Related distributions
If $X_{1}, \dots, X_{n}$ are independent, identically distributed (i.i.d.) random variables, all Bernoulli trials with success probability $p$, then their sum is distributed according to a binomial distribution with parameters $n$ and $p$:
$$\sum_{k=1}^{n} X_{k} \sim \operatorname{B}(n,p)$$

(binomial distribution).
The Bernoulli distribution is simply $\operatorname{B}(1,p)$, also written as $\mathrm{Bernoulli}(p)$.
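A small simulation sketch of this relationship (illustrative $n$, $p$, seed, and sample count; assumes NumPy is available):

```python
# Sketch: sums of n i.i.d. Bernoulli(p) draws behave like direct
# Binomial(n, p) samples (illustrative n, p, seed; requires NumPy).
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 10, 0.4, 100_000
bern_sums = rng.binomial(1, p, size=(reps, n)).sum(axis=1)  # row sums of Bernoulli trials
binom = rng.binomial(n, p, size=reps)                       # Binomial(n, p) directly
print("mean of Bernoulli sums:", bern_sums.mean(), "~ n*p =", n * p)
print("mean of Binomial draws:", binom.mean())
```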
The categorical distribution is the generalization of the Bernoulli distribution for variables with any constant number of discrete values.
The Beta distribution is the conjugate prior of the Bernoulli distribution.
The geometric distribution models the number of independent and identical Bernoulli trials needed to get one success.
If $Y \sim \mathrm{Bernoulli}\left(\frac{1}{2}\right)$, then $2Y - 1$ has a Rademacher distribution.
See also
Bernoulli process, a random process consisting of a sequence of independent Bernoulli trials
Bernoulli sampling
Binary entropy function
Binary decision diagram