• Source: Multilevel Monte Carlo method
  • Multilevel Monte Carlo (MLMC) methods in numerical analysis are algorithms for computing expectations that arise in stochastic simulations. Like standard Monte Carlo methods, they rely on repeated random sampling, but the samples are taken at different levels of accuracy. MLMC methods can greatly reduce the computational cost of standard Monte Carlo methods by taking most samples with low accuracy and correspondingly low cost, and only very few samples with high accuracy and correspondingly high cost.


    Goal


    The goal of a multilevel Monte Carlo method is to approximate the expected value E[G] of a random variable G that is the output of a stochastic simulation. Suppose this random variable cannot be simulated exactly, but there is a sequence of approximations G_0, G_1, ..., G_L with increasing accuracy, but also increasing cost, that converges to G as L → ∞. The basis of the multilevel method is the telescoping sum identity

        E[G_L] = E[G_0] + Σ_{ℓ=1}^{L} E[G_ℓ − G_{ℓ−1}],

    which is trivially satisfied because of the linearity of the expectation operator. Each of the expectations E[G_ℓ − G_{ℓ−1}] is then approximated by a Monte Carlo method, resulting in the multilevel Monte Carlo method. Note that taking a sample of the difference G_ℓ − G_{ℓ−1} at level ℓ requires a simulation of both G_ℓ and G_{ℓ−1}.

    The MLMC method works if the variances V[G_ℓ − G_{ℓ−1}] → 0 as ℓ → ∞, which will be the case if both G_ℓ and G_{ℓ−1} approximate the same random variable G. By the Central Limit Theorem, this implies that fewer and fewer samples are needed to accurately approximate the expectation of the difference G_ℓ − G_{ℓ−1} as ℓ → ∞. Hence, most samples will be taken on level 0, where samples are cheap, and only very few samples will be required at the finest level L. In this sense, MLMC can be considered as a recursive control variate strategy.
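
    As a concrete illustration of the telescoping estimator, the following sketch uses a toy problem (an assumption chosen for illustration, not taken from the article): G = ∫₀¹ e^(−Ut) dt with U ~ Uniform(0,1), approximated on level ℓ by the midpoint rule with 2^ℓ subintervals. The same draw of U is shared by G_ℓ and G_{ℓ−1} in each correction term, which is what makes the variance of the differences small.

```python
import numpy as np

def g_level(u, level):
    # Midpoint-rule approximation of G = \int_0^1 exp(-u t) dt
    # with 2**level subintervals (finer level = more accurate, more costly).
    n = 2 ** level
    t = (np.arange(n) + 0.5) / n
    return np.exp(-np.outer(u, t)).mean(axis=1)

def mlmc_estimate(n_samples, rng):
    # Telescoping-sum estimator: E[G_0] + sum_l E[G_l - G_{l-1}],
    # with n_samples[l] Monte Carlo samples on level l.
    est = 0.0
    for level, n in enumerate(n_samples):
        u = rng.random(n)                 # shared randomness couples the levels
        fine = g_level(u, level)
        if level == 0:
            est += fine.mean()
        else:
            est += (fine - g_level(u, level - 1)).mean()
    return est

rng = np.random.default_rng(0)
# Many cheap samples on level 0, few expensive samples on finer levels.
print(mlmc_estimate([100_000, 10_000, 1_000, 100], rng))  # close to E[G] ≈ 0.7966
```

    With this allocation almost all of the cost is spent on the cheap level-0 samples, yet the bias of the estimator is only that of the finest level.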


    Applications



    The first application of MLMC is attributed to Mike Giles, in the context of stochastic differential equations (SDEs) for option pricing; however, earlier traces are found in the work of Heinrich in the context of parametric integration. Here, the random variable G = f(X(T)) is known as the payoff function, and the sequence of approximations G_ℓ, ℓ = 0, ..., L, uses an approximation to the sample path X(t) with time step h_ℓ = 2^{−ℓ} T.
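
    A sketch of a coupled level sampler in this SDE setting is given below. It is illustrative only (the payoff, the model, and all parameter values are assumptions, not Giles' original setup): a European call payoff max(X(T) − K, 0) on geometric Brownian motion, discretized by Euler–Maruyama with time step h_ℓ = 2^{−ℓ} T. The coarse path reuses the fine path's Brownian increments, summed in pairs, which is the standard coupling that drives V[G_ℓ − G_{ℓ−1}] to zero.

```python
import numpy as np

def coupled_payoff_samples(level, n, T=1.0, x0=1.0, mu=0.05, sigma=0.2,
                           strike=1.0, rng=None):
    # Returns n coupled samples (G_l, G_{l-1}) of the payoff max(X(T) - K, 0),
    # where X follows GBM dX = mu X dt + sigma X dW, simulated by
    # Euler-Maruyama with time step h_l = 2**(-l) * T. On level 0 the
    # "coarse" value is defined as 0 (convention G_{-1} = 0).
    if rng is None:
        rng = np.random.default_rng()
    nf = 2 ** level
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n, nf))   # fine Brownian increments
    xf = np.full(n, x0)
    for k in range(nf):
        xf = xf + mu * xf * hf + sigma * xf * dW[:, k]
    gf = np.maximum(xf - strike, 0.0)
    if level == 0:
        return gf, np.zeros(n)
    # Coarse path: same Brownian motion, increments summed in pairs.
    hc = 2.0 * hf
    dWc = dW[:, 0::2] + dW[:, 1::2]
    xc = np.full(n, x0)
    for k in range(nf // 2):
        xc = xc + mu * xc * hc + sigma * xc * dWc[:, k]
    gc = np.maximum(xc - strike, 0.0)
    return gf, gc
```

    Because the fine and coarse paths see the same Brownian motion, the sample variance of gf − gc shrinks as the level increases, so ever fewer samples are needed on the finer, more expensive levels.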
    The application of MLMC to problems in uncertainty quantification (UQ) is an active area of research. An important prototypical example of these problems is partial differential equations (PDEs) with random coefficients. In this context, the random variable G is known as the quantity of interest, and the sequence of approximations corresponds to a discretization of the PDE with different mesh sizes.
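
    In the PDE setting the levels correspond to nested meshes. The following is a minimal 1-D illustration (the model problem and coefficient are assumptions chosen for brevity, not a specific problem from the literature): solve −(a(x)u′)′ = 1 on (0,1) with u(0) = u(1) = 0 and random coefficient a(x) = 1 + 0.5·ξ·x, ξ ~ Uniform(0,1), by finite differences on a mesh of width 2^{−ℓ−2}. The quantity of interest is G_ℓ = u(1/2), and the levels are coupled by using the same draw of ξ on the fine and the coarse mesh.

```python
import numpy as np

def qoi_sample(level, xi):
    # Finite-difference solve of -(a(x) u')' = 1, u(0) = u(1) = 0,
    # with a(x) = 1 + 0.5 * xi * x, on a uniform mesh with 2**(level + 2)
    # cells. Returns the quantity of interest G_l = u(1/2).
    n = 2 ** (level + 2)
    h = 1.0 / n
    a = 1.0 + 0.5 * xi * (np.arange(n) + 0.5) * h   # coefficient at cell interfaces
    # Tridiagonal system for the interior nodes u_1, ..., u_{n-1}.
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.linalg.solve(A, np.ones(n - 1))
    return u[n // 2 - 1]                            # u at the midpoint x = 1/2
```

    Sampling G_ℓ(ξ) − G_{ℓ−1}(ξ) with a common ξ gives correction terms whose variance decays at the rate of the discretization error, exactly as in the SDE case.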


    An algorithm for MLMC simulation


    A simple level-adaptive algorithm for MLMC simulation is given below in pseudo-code.




    L ← 0
    repeat
        Take warm-up samples at level L
        Compute the sample variance on all levels ℓ = 0, ..., L
        Define the optimal number of samples N_ℓ on all levels ℓ = 0, ..., L
        Take additional samples on each level ℓ according to N_ℓ
        if L ≥ 2 then
            Test for convergence
        end
        if not converged then
            L ← L + 1
        end
    until converged
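
    A possible concrete version of this pseudo-code is sketched below. The per-sample cost model C_ℓ = 2^ℓ, the allocation N_ℓ ∝ √(V_ℓ/C_ℓ) (the standard choice from the MLMC complexity analysis), and the simplified convergence test based on the size of the last correction term are all assumptions of this sketch; `sampler(level, n, rng)` is a user-supplied function returning n coupled samples of G_ℓ − G_{ℓ−1} (of G_0 on level 0).

```python
import numpy as np

def mlmc_adaptive(sampler, eps, n_warmup=1000, max_level=10, rng=None):
    # Level-adaptive MLMC driver following the pseudo-code above.
    if rng is None:
        rng = np.random.default_rng()
    sums, sums2, counts = [], [], []   # running statistics of Y_l = G_l - G_{l-1}
    L = 0
    while True:
        # Take warm-up samples at the newly added level L.
        y = sampler(L, n_warmup, rng)
        sums.append(y.sum()); sums2.append((y ** 2).sum()); counts.append(len(y))
        # Sample variances and (modelled) per-sample costs on all levels.
        V = np.array([s2 / n - (s / n) ** 2 for s, s2, n in zip(sums, sums2, counts)])
        V = np.maximum(V, 0.0)             # guard against floating-point round-off
        C = 2.0 ** np.arange(L + 1)
        # Optimal number of samples on each level for a variance budget eps^2 / 2.
        N_opt = np.ceil(2.0 / eps**2 * np.sqrt(V / C) * np.sum(np.sqrt(V * C))).astype(int)
        # Take additional samples on each level where needed.
        for l in range(L + 1):
            extra = int(N_opt[l]) - counts[l]
            if extra > 0:
                y = sampler(l, extra, rng)
                sums[l] += y.sum(); sums2[l] += (y ** 2).sum(); counts[l] += extra
        means = np.array([s / n for s, n in zip(sums, counts)])
        # Convergence test: stop once the last correction term is already small.
        if (L >= 2 and abs(means[L]) < eps / np.sqrt(2.0)) or L == max_level:
            return means.sum()
        L += 1
```

    Any coupled level sampler with decaying variances V_ℓ can be plugged in as `sampler`; the driver then spends most of its budget on the cheap coarse levels automatically.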


    Extensions of MLMC


    Recent extensions of the multilevel Monte Carlo method include multi-index Monte Carlo, where more than one direction of refinement is considered, and the combination of MLMC with the Quasi-Monte Carlo method.


    See also


    Monte Carlo method
    Monte Carlo methods in finance
    Quasi-Monte Carlo methods in finance
    Uncertainty quantification
    Partial differential equations with random coefficients


