- Source: Linear multistep method
Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge–Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used.
Definitions
Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form
$$y' = f(t, y), \qquad y(t_0) = y_0.$$
The result is approximations for the value of $y(t)$ at discrete times $t_i$:
$$y_i \approx y(t_i) \quad \text{where} \quad t_i = t_0 + ih,$$
where $h$ is the time step (sometimes referred to as $\Delta t$) and $i$ is an integer.
Multistep methods use information from the previous $s$ steps to calculate the next value. In particular, a linear multistep method uses a linear combination of $y_i$ and $f(t_i, y_i)$ to calculate the value of $y$ for the desired current step. Thus, a linear multistep method is a method of the form
$$\begin{aligned}
&y_{n+s} + a_{s-1} \cdot y_{n+s-1} + a_{s-2} \cdot y_{n+s-2} + \cdots + a_0 \cdot y_n \\
&\qquad = h \cdot \left( b_s \cdot f(t_{n+s}, y_{n+s}) + b_{s-1} \cdot f(t_{n+s-1}, y_{n+s-1}) + \cdots + b_0 \cdot f(t_n, y_n) \right) \\
&\Leftrightarrow \sum_{j=0}^{s} a_j y_{n+j} = h \sum_{j=0}^{s} b_j f(t_{n+j}, y_{n+j}),
\end{aligned}$$
with $a_s = 1$. The coefficients $a_0, \dotsc, a_{s-1}$ and $b_0, \dotsc, b_s$ determine the method. The designer of the method chooses the coefficients, balancing the need to get a good approximation to the true solution against the desire to get a method that is easy to apply. Often, many coefficients are zero to simplify the method.
One can distinguish between explicit and implicit methods. If $b_s = 0$, then the method is called "explicit", since the formula can directly compute $y_{n+s}$. If $b_s \neq 0$, then the method is called "implicit", since the value of $y_{n+s}$ depends on the value of $f(t_{n+s}, y_{n+s})$, and the equation must be solved for $y_{n+s}$. Iterative methods such as Newton's method are often used to solve the implicit formula.
Sometimes an explicit multistep method is used to "predict" the value of $y_{n+s}$. That value is then used in an implicit formula to "correct" the value. The result is a predictor–corrector method.
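For concreteness, here is a minimal sketch of how an implicit step can be solved in practice, using the backward Euler method (the simplest implicit linear multistep method) and a scalar Newton iteration. The function name and the finite-difference derivative are illustrative choices, not a prescribed implementation.

```python
# Minimal sketch: one backward Euler step y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}),
# with the implicit equation solved by Newton's method for a scalar ODE.
# The helper name and the finite-difference derivative are illustrative.

def backward_euler_step(f, t_next, y_n, h, tol=1e-12, max_iter=50):
    """Solve y = y_n + h * f(t_next, y) for y by Newton iteration."""
    y = y_n  # initial guess: the previous value
    for _ in range(max_iter):
        g = y - y_n - h * f(t_next, y)  # residual of the implicit equation
        eps = 1e-8
        dg = 1.0 - h * (f(t_next, y + eps) - f(t_next, y)) / eps  # g'(y) by finite differences
        y_new = y - g / dg  # Newton update
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# For y' = y with h = 0.5 and y(0) = 1, the implicit equation gives
# y_1 = y_0 / (1 - h) = 2 exactly:
print(backward_euler_step(lambda t, y: y, 0.5, 1.0, 0.5))  # ~2.0
```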
Examples
As an example, consider the problem
$$y' = f(t, y) = y, \qquad y(0) = 1.$$
The exact solution is $y(t) = e^t$.
One-step Euler
A simple numerical method is Euler's method:
$$y_{n+1} = y_n + h f(t_n, y_n).$$
Euler's method can be viewed as an explicit multistep method for the degenerate case of one step.
This method, applied with step size $h = \tfrac{1}{2}$ to the problem $y' = y$, gives the following results:
$$\begin{aligned}
y_1 &= y_0 + h f(t_0, y_0) = 1 + \tfrac{1}{2} \cdot 1 = 1.5, \\
y_2 &= y_1 + h f(t_1, y_1) = 1.5 + \tfrac{1}{2} \cdot 1.5 = 2.25, \\
y_3 &= y_2 + h f(t_2, y_2) = 2.25 + \tfrac{1}{2} \cdot 2.25 = 3.375, \\
y_4 &= y_3 + h f(t_3, y_3) = 3.375 + \tfrac{1}{2} \cdot 3.375 = 5.0625.
\end{aligned}$$
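As a sketch (not part of the original article), the computation above can be reproduced with a few lines of Python; the function name `euler` is an illustrative choice.

```python
# Sketch: Euler's method y_{n+1} = y_n + h f(t_n, y_n), reproducing the
# table above for f(t, y) = y, h = 1/2, y(0) = 1.

def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    values = [y0]
    for _ in range(steps):
        y = y + h * f(t, y)  # one explicit Euler step
        t = t + h
        values.append(y)
    return values

print(euler(lambda t, y: y, 0.0, 1.0, 0.5, 4))
# [1.0, 1.5, 2.25, 3.375, 5.0625]
```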
Two-step Adams–Bashforth
Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method
$$y_{n+2} = y_{n+1} + \tfrac{3}{2} h f(t_{n+1}, y_{n+1}) - \tfrac{1}{2} h f(t_n, y_n).$$
This method needs two values, $y_{n+1}$ and $y_n$, to compute the next value, $y_{n+2}$. However, the initial value problem provides only one value, $y_0 = 1$. One possibility to resolve this issue is to use the $y_1$ computed by Euler's method as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):
$$\begin{aligned}
y_2 &= y_1 + \tfrac{3}{2} h f(t_1, y_1) - \tfrac{1}{2} h f(t_0, y_0) = 1.5 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 1.5 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 1 = 2.375, \\
y_3 &= y_2 + \tfrac{3}{2} h f(t_2, y_2) - \tfrac{1}{2} h f(t_1, y_1) = 2.375 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 2.375 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 1.5 = 3.7812, \\
y_4 &= y_3 + \tfrac{3}{2} h f(t_3, y_3) - \tfrac{1}{2} h f(t_2, y_2) = 3.7812 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 3.7812 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 2.375 = 6.0234.
\end{aligned}$$
The exact solution at $t = t_4 = 2$ is $e^2 = 7.3891\ldots$, so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough.
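The same computation can be sketched in code; the Euler bootstrap for $y_1$ mirrors the choice described above (function name illustrative).

```python
# Sketch: two-step Adams–Bashforth with y_1 supplied by one Euler step,
# as in the worked example above.

def adams_bashforth2(f, t0, y0, h, steps):
    ts = [t0, t0 + h]
    ys = [y0, y0 + h * f(t0, y0)]  # bootstrap y_1 with Euler's method
    for _ in range(steps - 1):
        y_next = ys[-1] + h * (1.5 * f(ts[-1], ys[-1]) - 0.5 * f(ts[-2], ys[-2]))
        ys.append(y_next)
        ts.append(ts[-1] + h)
    return ys

print(adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.5, 4))
# [1.0, 1.5, 2.375, 3.78125, 6.0234375]  -- vs. the exact e^2 = 7.3891...
```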
Families of multistep methods
Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs).
Adams–Bashforth methods
The Adams–Bashforth methods are explicit methods. The coefficients are $a_{s-1} = -1$ and $a_{s-2} = \cdots = a_0 = 0$, while the $b_j$ are chosen such that the methods have order $s$ (this determines the methods uniquely).
The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (Hairer, Nørsett & Wanner 1993, §III.1; Butcher 2003, p. 103):
$$\begin{aligned}
y_{n+1} &= y_n + h f(t_n, y_n), \qquad \text{(This is the Euler method)} \\
y_{n+2} &= y_{n+1} + h \left( \frac{3}{2} f(t_{n+1}, y_{n+1}) - \frac{1}{2} f(t_n, y_n) \right), \\
y_{n+3} &= y_{n+2} + h \left( \frac{23}{12} f(t_{n+2}, y_{n+2}) - \frac{16}{12} f(t_{n+1}, y_{n+1}) + \frac{5}{12} f(t_n, y_n) \right), \\
y_{n+4} &= y_{n+3} + h \left( \frac{55}{24} f(t_{n+3}, y_{n+3}) - \frac{59}{24} f(t_{n+2}, y_{n+2}) + \frac{37}{24} f(t_{n+1}, y_{n+1}) - \frac{9}{24} f(t_n, y_n) \right), \\
y_{n+5} &= y_{n+4} + h \left( \frac{1901}{720} f(t_{n+4}, y_{n+4}) - \frac{2774}{720} f(t_{n+3}, y_{n+3}) + \frac{2616}{720} f(t_{n+2}, y_{n+2}) - \frac{1274}{720} f(t_{n+1}, y_{n+1}) + \frac{251}{720} f(t_n, y_n) \right).
\end{aligned}$$
The coefficients $b_j$ can be determined as follows. Use polynomial interpolation to find the polynomial $p$ of degree $s - 1$ such that
$$p(t_{n+i}) = f(t_{n+i}, y_{n+i}), \qquad \text{for } i = 0, \ldots, s-1.$$
The Lagrange formula for polynomial interpolation yields
$$p(t) = \sum_{j=0}^{s-1} \frac{(-1)^{s-j-1} f(t_{n+j}, y_{n+j})}{j! \, (s-j-1)! \, h^{s-1}} \prod_{\substack{i=0 \\ i \neq j}}^{s-1} (t - t_{n+i}).$$
The polynomial $p$ is locally a good approximation of the right-hand side of the differential equation $y' = f(t, y)$ that is to be solved, so consider the equation $y' = p(t)$ instead. This equation can be solved exactly; the solution is simply the integral of $p$. This suggests taking
$$y_{n+s} = y_{n+s-1} + \int_{t_{n+s-1}}^{t_{n+s}} p(t) \, \mathrm{d}t.$$
The Adams–Bashforth method arises when the formula for $p$ is substituted. The coefficients $b_j$ turn out to be given by
$$b_{s-j-1} = \frac{(-1)^j}{j! \, (s-j-1)!} \int_0^1 \prod_{\substack{i=0 \\ i \neq j}}^{s-1} (u + i) \, \mathrm{d}u, \qquad \text{for } j = 0, \ldots, s-1.$$
Replacing $f(t, y)$ by its interpolant $p$ incurs an error of order $h^s$, and it follows that the $s$-step Adams–Bashforth method indeed has order $s$ (Iserles 1996, §2.1).
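The integral formula lends itself to symbolic evaluation. Here is a sketch using sympy (an assumed dependency) that recovers the tabulated coefficients:

```python
# Sketch: evaluating b_{s-j-1} = (-1)^j / (j! (s-j-1)!) * integral_0^1 of
# prod_{i != j} (u + i) du symbolically to recover the Adams-Bashforth
# coefficients. Function name is illustrative.

import sympy as sp
from math import factorial

def adams_bashforth_coeffs(s):
    u = sp.Symbol('u')
    b = [sp.Integer(0)] * s
    for j in range(s):
        poly = sp.Mul(*[u + i for i in range(s) if i != j])
        integral = sp.integrate(poly, (u, 0, 1))
        b[s - j - 1] = sp.Rational((-1) ** j, factorial(j) * factorial(s - j - 1)) * integral
    return b  # [b_0, ..., b_{s-1}]

print(adams_bashforth_coeffs(2))  # [-1/2, 3/2]
print(adams_bashforth_coeffs(3))  # [5/12, -4/3, 23/12]
```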
The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation modelling capillary action due to Francis Bashforth. Bashforth (1883) published his theory and Adams' numerical method (Goldstine 1977).
Adams–Moulton methods
The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have $a_{s-1} = -1$ and $a_{s-2} = \cdots = a_0 = 0$. Again the $b$ coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that $b_s = 0$, an $s$-step Adams–Moulton method can reach order $s + 1$, while an $s$-step Adams–Bashforth method has only order $s$.
The Adams–Moulton methods with $s = 0, 1, 2, 3, 4$ are listed below (Hairer, Nørsett & Wanner 1993, §III.1; Quarteroni, Sacco & Saleri 2000); the first two methods are the backward Euler method and the trapezoidal rule, respectively:
$$\begin{aligned}
y_n &= y_{n-1} + h f(t_n, y_n), \\
y_{n+1} &= y_n + \frac{1}{2} h \left( f(t_{n+1}, y_{n+1}) + f(t_n, y_n) \right), \\
y_{n+2} &= y_{n+1} + h \left( \frac{5}{12} f(t_{n+2}, y_{n+2}) + \frac{8}{12} f(t_{n+1}, y_{n+1}) - \frac{1}{12} f(t_n, y_n) \right), \\
y_{n+3} &= y_{n+2} + h \left( \frac{9}{24} f(t_{n+3}, y_{n+3}) + \frac{19}{24} f(t_{n+2}, y_{n+2}) - \frac{5}{24} f(t_{n+1}, y_{n+1}) + \frac{1}{24} f(t_n, y_n) \right), \\
y_{n+4} &= y_{n+3} + h \left( \frac{251}{720} f(t_{n+4}, y_{n+4}) + \frac{646}{720} f(t_{n+3}, y_{n+3}) - \frac{264}{720} f(t_{n+2}, y_{n+2}) + \frac{106}{720} f(t_{n+1}, y_{n+1}) - \frac{19}{720} f(t_n, y_n) \right).
\end{aligned}$$
The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points $t_{n-1}, \dots, t_{n-s}$, as above, but also $t_n$. The coefficients are given by
$$b_{s-j} = \frac{(-1)^j}{j! \, (s-j)!} \int_0^1 \prod_{\substack{i=0 \\ i \neq j}}^{s} (u + i - 1) \, \mathrm{d}u, \qquad \text{for } j = 0, \ldots, s.$$
The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor-corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1).
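As an illustration of the predictor-corrector pairing just described (a sketch, not the historical Adams code): the two-step Adams–Bashforth method predicts, and one application of the trapezoidal rule, the one-step Adams–Moulton method, corrects.

```python
# Sketch of a predict-evaluate-correct (PEC) step: two-step Adams-Bashforth
# predicts y_{n+2}, the trapezoidal rule corrects it once. Names and the
# single-correction choice are illustrative.

def pece_step(f, ts, ys, h):
    """Advance one step given the last two mesh points and values."""
    (t0, t1), (y0, y1) = ts, ys
    t2 = t1 + h
    y_pred = y1 + h * (1.5 * f(t1, y1) - 0.5 * f(t0, y0))  # predict (AB2)
    y_corr = y1 + 0.5 * h * (f(t2, y_pred) + f(t1, y1))    # correct (trapezoidal rule)
    return t2, y_corr

# Example on y' = y with h = 0.5, starting values from Euler's method:
ts, ys = [0.0, 0.5], [1.0, 1.5]
for _ in range(3):
    t_new, y_new = pece_step(lambda t, y: y, ts, ys, 0.5)
    ts, ys = [ts[1], t_new], [ys[1], y_new]
    print(t_new, y_new)
```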
Backward differentiation formulas (BDF)
The BDF methods are implicit methods with $b_{s-1} = \cdots = b_0 = 0$ and the other coefficients chosen such that the method attains order $s$ (the maximum possible). These methods are especially used for the solution of stiff differential equations.
Analysis
The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are convergence, order, and stability.
Consistency and order
The first question is whether the method is consistent: is the difference equation
$$\begin{aligned}
&a_s y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
&\qquad = h \bigl( b_s f(t_{n+s}, y_{n+s}) + b_{s-1} f(t_{n+s-1}, y_{n+s-1}) + \cdots + b_0 f(t_n, y_n) \bigr),
\end{aligned}$$
a good approximation of the differential equation $y' = f(t, y)$? More precisely, a multistep method is consistent if the local truncation error goes to zero faster than the step size $h$ as $h$ goes to zero, where the local truncation error is defined to be the difference between the result $y_{n+s}$ of the method, assuming that all the previous values $y_{n+s-1}, \ldots, y_n$ are exact, and the exact solution of the equation at time $t_{n+s}$. A computation using Taylor series shows that a linear multistep method is consistent if and only if
$$\sum_{k=0}^{s-1} a_k = -1 \quad \text{and} \quad \sum_{k=0}^{s} b_k = s + \sum_{k=0}^{s-1} k a_k.$$
All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2).
If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have order $p$ if the local error is of order $O(h^{p+1})$ as $h$ goes to zero. This is equivalent to the following condition on the coefficients of the method:
$$\sum_{k=0}^{s-1} a_k = -1 \quad \text{and} \quad q \sum_{k=0}^{s} k^{q-1} b_k = s^q + \sum_{k=0}^{s-1} k^q a_k \quad \text{for } q = 1, \ldots, p.$$
The $s$-step Adams–Bashforth method has order $s$, while the $s$-step Adams–Moulton method has order $s + 1$ (Hairer, Nørsett & Wanner 1993, §III.2).
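These order conditions are easy to check mechanically. A sketch in exact rational arithmetic, for the two-step Adams–Bashforth method (with the convention $0^0 = 1$, which Python's `Fraction(0)**0 == 1` matches):

```python
# Sketch: checking q * sum k^(q-1) b_k = s^q + sum k^q a_k for the two-step
# Adams-Bashforth method (a_0 = 0, a_1 = -1; b_0 = -1/2, b_1 = 3/2, b_2 = 0).
# It satisfies q = 1, 2 but not q = 3, confirming order 2.

from fractions import Fraction

def order_condition_holds(a, b, s, q):
    lhs = q * sum(Fraction(k) ** (q - 1) * b[k] for k in range(s + 1))
    rhs = Fraction(s) ** q + sum(Fraction(k) ** q * a[k] for k in range(s))
    return lhs == rhs

a = [Fraction(0), Fraction(-1)]                     # a_0, a_1 (a_2 = 1 is implied)
b = [Fraction(-1, 2), Fraction(3, 2), Fraction(0)]  # b_0, b_1, b_2
assert sum(a) == -1                                 # the consistency condition
for q in (1, 2, 3):
    print(q, order_condition_holds(a, b, s=2, q=q))
# 1 True, 2 True, 3 False
```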
These conditions are often formulated using the characteristic polynomials
$$\rho(z) = z^s + \sum_{k=0}^{s-1} a_k z^k \quad \text{and} \quad \sigma(z) = \sum_{k=0}^{s} b_k z^k.$$
In terms of these polynomials, the above condition for the method to have order $p$ becomes
$$\rho(e^h) - h \sigma(e^h) = O(h^{p+1}) \quad \text{as } h \to 0.$$
In particular, the method is consistent if it has order at least one, which is the case if $\rho(1) = 0$ and $\rho'(1) = \sigma(1)$.
Stability and convergence
The numerical solution of a one-step method depends on the initial condition $y_0$, but the numerical solution of an $s$-step method depends on all $s$ starting values, $y_0, y_1, \ldots, y_{s-1}$. It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is zero-stable for a certain differential equation on a given time interval if a perturbation in the starting values of size $\varepsilon$ causes the numerical solution over that time interval to change by no more than $K\varepsilon$ for some value of $K$ which does not depend on the step size $h$. This is called "zero-stability" because it is enough to check the condition for the differential equation $y' = 0$ (Süli & Mayers 2003, p. 332).
If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the root condition is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied (Süli & Mayers 2003, p. 335).
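A numerical check of the root condition can be sketched with numpy; the tolerance is illustrative, and repeated roots on the unit circle are only detected approximately.

```python
# Sketch: testing the root condition for rho(z) numerically.
# rho_coeffs lists the polynomial coefficients, highest degree first.

import numpy as np

def satisfies_root_condition(rho_coeffs, tol=1e-6):
    roots = np.roots(rho_coeffs)
    for r in roots:
        if abs(r) > 1 + tol:
            return False  # a root outside the closed unit disc
        if abs(abs(r) - 1) <= tol:
            # a root of modulus 1 must be simple
            if np.sum(np.abs(roots - r) <= tol) > 1:
                return False
    return True

print(satisfies_root_condition([1, -1, 0]))  # True: rho = z^2 - z (two-step Adams-Bashforth)
print(satisfies_root_condition([1, -2, 1]))  # False: rho = (z - 1)^2, repeated root at 1
```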
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values $y_1, \ldots, y_{s-1}$ all converge to the initial value $y_0$ as $h \to 0$. Then, the numerical solution converges to the exact solution as $h \to 0$ if and only if the method is zero-stable. This result is known as the Dahlquist equivalence theorem, named after Germund Dahlquist; this theorem is similar in spirit to the Lax equivalence theorem for finite difference methods. Furthermore, if the method has order $p$, then the global error (the difference between the numerical solution and the exact solution at a fixed time) is $O(h^p)$ (Süli & Mayers 2003, p. 340).
Furthermore, if the method is convergent, the method is said to be strongly stable if $z = 1$ is the only root of modulus 1. If it is convergent and all roots of modulus 1 are simple, but there is more than one such root, it is said to be relatively stable. Note that 1 must be a root of $\rho$ for the method to be convergent; thus convergent methods are always one of these two.
To assess the performance of linear multistep methods on stiff equations, consider the linear test equation y' = λy. A multistep method applied to this differential equation with step size h yields a linear recurrence relation with characteristic polynomial
$$\pi(z; h\lambda) = (1 - h\lambda b_s) z^s + \sum_{k=0}^{s-1} (a_k - h\lambda b_k) z^k = \rho(z) - h\lambda \sigma(z).$$
This polynomial is called the stability polynomial of the multistep method. If all of its roots have modulus less than one then the numerical solution of the multistep method will converge to zero and the multistep method is said to be absolutely stable for that value of hλ. The method is said to be A-stable if it is absolutely stable for all hλ with negative real part. The region of absolute stability is the set of all hλ for which the multistep method is absolutely stable (Süli & Mayers 2003, pp. 347 & 348). For more details, see the section on stiff equations and multistep methods.
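The boundary of the region of absolute stability can be traced with the boundary locus method: on the boundary the stability polynomial has a root with $|z| = 1$, so setting $z = e^{i\theta}$ and solving $\rho(z) - h\lambda \sigma(z) = 0$ gives $h\lambda = \rho(z)/\sigma(z)$. A sketch for the two-step Adams–Bashforth method:

```python
# Sketch: boundary locus h*lambda = rho(z)/sigma(z) with z = exp(i*theta),
# traced for the two-step Adams-Bashforth method. Its interval of absolute
# stability on the real axis is (-1, 0).

import numpy as np

theta = np.linspace(0, 2 * np.pi, 401)
z = np.exp(1j * theta)
rho = z**2 - z           # rho(z) for AB2
sigma = 1.5 * z - 0.5    # sigma(z) for AB2
hlam = rho / sigma       # points on the stability boundary

print(hlam.real.min())   # ~ -1.0: the left end of the real stability interval
```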
Example
Consider the Adams–Bashforth three-step method
$$y_{n+3} = y_{n+2} + h \left( \frac{23}{12} f(t_{n+2}, y_{n+2}) - \frac{4}{3} f(t_{n+1}, y_{n+1}) + \frac{5}{12} f(t_n, y_n) \right).$$
One characteristic polynomial is thus
$$\rho(z) = z^3 - z^2 = z^2(z - 1),$$
which has roots $z = 0, 1$, and the conditions above are satisfied. As $z = 1$ is the only root of modulus 1, the method is strongly stable.
The other characteristic polynomial is
$$\sigma(z) = \frac{23}{12} z^2 - \frac{4}{3} z + \frac{5}{12}.$$
First and second Dahlquist barriers
These two results were proved by Germund Dahlquist and represent an important bound for the order of convergence and for the A-stability of a linear multistep method. The first Dahlquist barrier was proved in Dahlquist (1956) and the second in Dahlquist (1963).
First Dahlquist barrier
The first Dahlquist barrier states that a zero-stable linear q-step multistep method cannot attain an order of convergence greater than q + 1 if q is odd and greater than q + 2 if q is even. If the method is also explicit, then it cannot attain an order greater than q (Hairer, Nørsett & Wanner 1993, Thm III.3.5).
Second Dahlquist barrier
The second Dahlquist barrier states that no explicit linear multistep methods are A-stable. Further, the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant (Dahlquist 1963, Thm 2.1 and 2.2).
See also
Digital energy gain
References
Bashforth, Francis (1883), An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams, Cambridge.
Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, John Wiley, ISBN 978-0-471-96758-3.
Dahlquist, Germund (1956), "Convergence and stability in the numerical integration of ordinary differential equations", Mathematica Scandinavica, 4: 33–53, doi:10.7146/math.scand.a-10454.
Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3: 27–43, doi:10.1007/BF01963532, ISSN 0006-3835, S2CID 120241743.
Goldstine, Herman H. (1977), A History of Numerical Analysis from the 16th through the 19th Century, New York: Springer-Verlag, ISBN 978-0-387-90277-7.
Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems (2nd ed.), Berlin: Springer Verlag, ISBN 978-3-540-56670-0.
Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5.
Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.
Milne, W. E. (1926), "Numerical integration of ordinary differential equations", American Mathematical Monthly, 33 (9), Mathematical Association of America: 455–460, doi:10.2307/2299609, JSTOR 2299609.
Moulton, Forest R. (1926), New methods in exterior ballistics, University of Chicago Press.
Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000), Matematica Numerica, Springer Verlag, ISBN 978-88-470-0077-3.
Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1.
External links
Weisstein, Eric W. "Adams Method". MathWorld.