Variance reduction
In mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates obtained for a given simulation or computational effort. Every output random variable from the simulation is associated with a variance which limits the precision of the simulation results. In order to make a simulation statistically efficient, i.e., to obtain greater precision and smaller confidence intervals for the output random variable of interest, variance reduction techniques can be used.
The main variance reduction methods are
common random numbers
antithetic variates
control variates
importance sampling
stratified sampling
moment matching
conditional Monte Carlo
and quasi-random sequences (used in the quasi-Monte Carlo method).
For simulations with black-box models, subset simulation and line sampling can also be used. Under these headings fall a variety of specialized techniques; for example, particle transport simulations make extensive use of "weight windows" and "splitting/Russian roulette" techniques, which are a form of importance sampling.
Crude Monte Carlo simulation
Suppose one wants to compute z := E(Z), where the random variable Z is defined on the probability space (Ω, F, P). Monte Carlo does this by sampling n i.i.d. copies Z_1, ..., Z_n of Z and then estimating z via the sample-mean estimator

{\displaystyle {\overline {z}}={\frac {1}{n}}\sum _{i=1}^{n}Z_{i}}

Under further mild conditions such as Var(Z) < ∞, a central limit theorem applies: as n → ∞, the distribution of the sample mean converges to a normal distribution with mean z and standard error σ/√n, where σ² = Var(Z). Because the standard error shrinks only at the rate 1/√n, the number of simulations n must be increased by a factor of 4 to halve the standard deviation of the estimate. Variance reduction methods are therefore often useful for obtaining more precise estimates of z without requiring very large numbers of simulations.
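To make the crude estimator concrete, the short Python sketch below estimates z = E(Z) for the illustrative choice Z = exp(U) with U uniform on (0, 1), whose exact value e − 1 is known; the integrand, sample sizes and seed are assumptions made for the example, not part of the method.

import numpy as np

def crude_monte_carlo(n, seed=None):
    # Crude Monte Carlo estimate of z = E(exp(U)), U ~ Uniform(0, 1).
    # The exact value is e - 1, handy for checking the estimate.
    rng = np.random.default_rng(seed)
    z_samples = np.exp(rng.uniform(0.0, 1.0, size=n))   # i.i.d. copies Z_1, ..., Z_n
    z_bar = z_samples.mean()                             # sample-mean estimator
    std_err = z_samples.std(ddof=1) / np.sqrt(n)         # estimated sigma / sqrt(n)
    return z_bar, std_err

if __name__ == "__main__":
    for n in (1_000, 4_000):   # quadrupling n should roughly halve the standard error
        z_bar, se = crude_monte_carlo(n, seed=42)
        print(f"n={n:5d}  estimate={z_bar:.5f}  std. error={se:.5f}  (exact: {np.e - 1:.5f})")

Quadrupling n should roughly halve the reported standard error, which is exactly the 1/√n behaviour described above.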
Common Random Numbers (CRN)
Common random numbers (CRN) is a popular and useful variance reduction technique that applies when we are comparing two or more alternative configurations of a system instead of investigating a single configuration. CRN has also been called correlated sampling, matched streams or matched pairs.
CRN requires synchronization of the random number streams, which ensures that in addition to using the same random numbers to simulate all configurations, a specific random number used for a specific purpose in one configuration is used for exactly the same purpose in all other configurations. For example, in queueing theory, if we are comparing two different configurations of tellers in a bank, we would want the (random) time of arrival of the N-th customer to be generated using the same draw from a random number stream for both configurations.
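As a rough sketch of such synchronization, the Python fragment below compares two single-server queue configurations via Lindley's recursion for waiting times, using dedicated and identically seeded random streams for interarrival and service times so that the j-th customer is driven by the same underlying draws in both runs; the queue model, rates and seeds are illustrative assumptions, not part of the original example.

import numpy as np

def avg_wait(service_rate, n_customers, arrival_seed, service_seed, arrival_rate=1.0):
    # Dedicated streams, one for interarrival times and one for service times.
    # Reusing the same seeds across configurations synchronizes the draws, so
    # customer j sees the "same" randomness in every configuration.
    arrivals = np.random.default_rng(arrival_seed)
    services = np.random.default_rng(service_seed)
    inter = arrivals.exponential(1.0 / arrival_rate, size=n_customers)
    serv = services.exponential(1.0 / service_rate, size=n_customers)
    wait, total = 0.0, 0.0
    for j in range(n_customers):   # Lindley's recursion: W_{j+1} = max(0, W_j + S_j - A_{j+1})
        total += wait
        if j + 1 < n_customers:
            wait = max(0.0, wait + serv[j] - inter[j + 1])
    return total / n_customers

# Same seeds for both configurations -> common random numbers.
fast = avg_wait(service_rate=1.5, n_customers=10_000, arrival_seed=1, service_seed=2)
slow = avg_wait(service_rate=1.2, n_customers=10_000, arrival_seed=1, service_seed=2)
print(f"estimated difference in mean waiting time: {slow - fast:.4f}")

Keeping a separate stream for each source of randomness is what preserves synchronization even when one configuration would otherwise consume random numbers in a different order than another.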
Underlying principle of the CRN technique
Suppose X_{1j} and X_{2j} are the observations from the first and second configurations on the j-th independent replication. We want to estimate

{\displaystyle \xi =E(X_{1j})-E(X_{2j})=\mu _{1}-\mu _{2}.\,}
If we perform n replications of each configuration and let

{\displaystyle Z_{j}=X_{1j}-X_{2j}\quad {\mbox{for }}j=1,2,\ldots ,n,}

then E(Z_j) = ξ and

{\displaystyle Z(n)={\frac {\sum _{j=1,\ldots ,n}Z_{j}}{n}}}

is an unbiased estimator of ξ.
And since the Z_j's are independent and identically distributed random variables,

{\displaystyle \operatorname {Var} [Z(n)]={\frac {\operatorname {Var} (Z_{j})}{n}}={\frac {\operatorname {Var} [X_{1j}]+\operatorname {Var} [X_{2j}]-2\operatorname {Cov} [X_{1j},X_{2j}]}{n}}.}
With independent sampling, i.e., when no common random numbers are used, Cov(X_{1j}, X_{2j}) = 0. But if we succeed in inducing positive correlation between X_{1j} and X_{2j} so that Cov(X_{1j}, X_{2j}) > 0, the equation above shows that the variance is reduced.
It can also be observed that if CRN induces negative correlation, i.e., Cov(X_{1j}, X_{2j}) < 0, the technique can backfire: the variance is increased rather than decreased (as intended).
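As a small numerical check of this effect, the sketch below estimates ξ for two artificial configurations, X_{1j} = exp(U) and X_{2j} = sqrt(1 + U), driven either by independent uniform draws or by a common draw; because both response functions are increasing in U, sharing the draw induces positive correlation. The response functions, sample sizes and seed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 2_000   # n replications per estimate; repeat to measure the estimator's variance

def estimate_xi(common):
    # One estimate of xi = E[X1] - E[X2] from n paired replications.
    u1 = rng.uniform(size=n)
    u2 = u1 if common else rng.uniform(size=n)   # CRN: reuse the same random numbers
    x1 = np.exp(u1)          # configuration 1 (illustrative response function)
    x2 = np.sqrt(1.0 + u2)   # configuration 2 (illustrative response function)
    return (x1 - x2).mean()

for common in (False, True):
    estimates = [estimate_xi(common) for _ in range(reps)]
    label = "common random numbers" if common else "independent sampling "
    print(f"{label}: Var[Z(n)] ≈ {np.var(estimates):.2e}")

Running the sketch, the variance of the estimated difference under common random numbers should be noticeably smaller than under independent sampling, in line with the covariance term above.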
See also
Explained variance
Regularization (mathematics)