- Source: Bernoulli sampling
In the theory of finite population sampling, Bernoulli sampling is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample. An essential property of Bernoulli sampling is that all elements of the population have equal probability of being included in the sample.
Bernoulli sampling is therefore a special case of Poisson sampling. In Poisson sampling each element of the population may have a different probability of being included in the sample. In Bernoulli sampling, the probability is equal for all the elements.
Because each element of the population is considered separately for the sample, the sample size is not fixed but rather follows a binomial distribution.
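For illustration, the minimal Python sketch below (the function names poisson_sample and bernoulli_sample are chosen here for clarity; they are not standard library routines) draws a Poisson sample from per-element inclusion probabilities and obtains a Bernoulli sample as the special case in which every probability equals the same constant p. Repeated runs show the varying, binomially distributed sample size.

```python
import random

def poisson_sample(population, probabilities):
    """Poisson sampling: element i enters the sample independently,
    with its own inclusion probability probabilities[i]."""
    return [item for item, prob in zip(population, probabilities)
            if random.random() < prob]

def bernoulli_sample(population, p):
    """Bernoulli sampling: the special case with one common probability p."""
    return poisson_sample(population, [p] * len(population))

# The sample size is not fixed: over repeated runs it follows Binomial(n, p),
# so for n = 1000 and p = 0.2 it fluctuates around n * p = 200.
print([len(bernoulli_sample(range(1000), 0.2)) for _ in range(5)])
```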
Example
The most basic Bernoulli method generates n random variates to extract a sample from a population of n items. Suppose one wants to extract, on average, a given percentage pct of the population. The algorithm can be described as follows:
for each item in the set:
    generate a random non-negative integer R
    if (R mod 100) < pct then
        select item
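A direct translation of this pseudocode into runnable Python might look like the sketch below. The helper name bernoulli_percent and the choice of random.randrange as the source of non-negative integers are assumptions made here, not part of the original description.

```python
import random

def bernoulli_percent(items, pct):
    """Select each item independently: draw a random non-negative
    integer R and keep the item when (R mod 100) < pct."""
    sample = []
    for item in items:
        r = random.randrange(1_000_000_000)  # range is a multiple of 100,
        if r % 100 < pct:                    # so R mod 100 is uniform on 0..99
            sample.append(item)
    return sample

print(len(bernoulli_percent(range(10_000), 20)))  # roughly 2000 items
```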
A percentage of 20%, say, is usually expressed as a probability p = 0.2. In that case, random variates are generated on the unit interval and an item is selected when its variate falls below p. After running the algorithm, a sample of size k will have been selected. One would expect $k \approx n \cdot p$, which becomes more and more likely as n grows. In fact, the probability of obtaining a sample of exactly size k is given by the binomial distribution:
$$f(k,n,p) = \binom{n}{k} p^{k} (1-p)^{n-k}$$
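For example, this pmf can be evaluated directly with the Python standard library. The short sketch below (using math.comb, available in Python 3.8 and later) shows how the probability of obtaining exactly $k = n \cdot p$ selections behaves for a few values of $n$ at $p = 0.2$:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of obtaining a sample of exactly size k."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = 0.2
for n in (10, 100, 1000):
    k = round(n * p)
    print(n, binom_pmf(k, n, p))
# Hitting k = n*p exactly becomes less likely as n grows,
# even though k/n concentrates ever more tightly around p.
```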
On the left, this probability mass function is shown for four values of $n$ and $p = 0.2$. In order to compare the values for different $n$, the $k$'s on the abscissa are scaled from $[0, n]$ to the unit interval, while the value of the function on the ordinate is multiplied by the inverse scaling factor $n$, so that the area under the graph keeps the same value; that area is related to the corresponding cumulative distribution function. The values are shown on a logarithmic scale.
On the right are the minimum values of $n$ that satisfy given error bounds with 95% probability. Given an error bound, the set of $k$'s within the bound can be described as

$$K_{n,p} = \left\{ k \in \mathbb{N} : \left| \frac{k}{n} - p \right| < \mathrm{error} \right\}.$$
The probability of ending up within $K_{n,p}$ is again given by the binomial distribution:

$$\sum_{k \in K_{n,p}} f(k, n, p).$$
The picture shows the lowest values of $n$ such that the sum is at least 0.95. For $p = 0.00$ and $p = 1.00$ the algorithm delivers exact results for all $n$'s; the $p$'s in between are obtained by bisection. Note that, if $100 \cdot p$ is an integer percentage, $\mathrm{error} = 0.005$ guarantees that $100 \cdot k / n$, rounded to the nearest integer, equals $100 \cdot p$. Values as high as $n = 38400$ can be required for such an exact match.
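The lowest $n$ for a given $p$ and error bound can be found numerically: sum the pmf over $K_{n,p}$ and increase $n$ until the coverage reaches 0.95. The sketch below is a minimal illustration of that search under these assumptions; it uses a plain linear search rather than the bisection mentioned above, and it computes the pmf in log space (via lgamma) so that larger $n$ does not overflow the binomial coefficient.

```python
from math import exp, lgamma, log

def binom_pmf(k, n, p):
    """Binomial pmf f(k, n, p), computed in log space for stability."""
    return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + k * log(p) + (n - k) * log(1 - p))

def coverage(n, p, error):
    """P(|k/n - p| < error) for k ~ Binomial(n, p)."""
    return sum(binom_pmf(k, n, p)
               for k in range(n + 1)
               if abs(k / n - p) < error)

def min_n(p, error, target=0.95, n_max=5000):
    """Lowest n (found by linear search) whose coverage reaches the target."""
    for n in range(1, n_max + 1):
        if coverage(n, p, error) >= target:
            return n
    return None

# Requires 0 < p < 1; tighter error bounds push the required n up quickly.
print(min_n(0.2, 0.05))
print(min_n(0.2, 0.03))
```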
See also
Poisson sampling
Bernoulli trial
Bernoulli process
Sampling design
External links
Faster Random Samples With Gap Sampling