- Source: Method of averaging
In mathematics, more specifically in dynamical systems, the method of averaging (also called averaging theory) exploits systems in which a separation of time scales is present: a fast oscillation versus a slow drift. It suggests averaging over a given amount of time in order to iron out the fast oscillations and observe the qualitative behavior of the resulting dynamics. The approximation is valid on a finite time interval whose length is inversely proportional to the parameter denoting the slow time scale. This presents the customary trade-off: the accuracy of the approximate solution is balanced against the length of time over which it stays close to the original solution.
More precisely, the system has the following form
$$\dot{x} = \varepsilon f(x, t, \varepsilon), \quad 0 \leq \varepsilon \ll 1$$
of a phase space variable $x$.
The fast oscillation is given by $f$ versus a slow drift of $\dot{x}$. The averaging method yields an autonomous dynamical system
$$\dot{y} = \varepsilon \frac{1}{T}\int_0^T f(y, s, 0)\, ds =: \varepsilon \bar{f}(y)$$
which approximates the solution curves of $\dot{x}$ inside a connected and compact region of the phase space and over a time of order $1/\varepsilon$.
Under the validity of this averaging technique, the asymptotic behavior of the original system is captured by the dynamical equation for $y$. In this way, qualitative methods for autonomous dynamical systems may be employed to analyze the equilibria and more complex structures, such as slow manifolds and invariant manifolds, as well as their stability in the phase space of the averaged system.
In addition, in a physical application it might be reasonable or natural to replace a mathematical model, which is given in the form of the differential equation for $\dot{x}$, with the corresponding averaged system $\dot{y}$, in order to use the averaged system to make a prediction and then test the prediction against the results of a physical experiment.
The averaging method has a long history, which is deeply rooted in perturbation problems that arose in celestial mechanics.
First example
Consider a perturbed logistic growth
$$\dot{x} = \varepsilon \left(x(1 - x) + \sin t\right), \qquad x \in \mathbb{R}, \quad 0 \leq \varepsilon \ll 1,$$
and the averaged equation
$$\dot{y} = \varepsilon y(1 - y), \qquad y \in \mathbb{R}.$$
The purpose of the method of averaging is to tell us the qualitative behavior of the vector field when we average it over a period of time. It guarantees that the solution $y(t)$ approximates $x(t)$ for times $t = \mathcal{O}(1/\varepsilon)$.
Exceptionally, in this example the approximation is even better: it is valid for all time. We present this in a section below.
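A minimal numerical sketch of this comparison can be made with standard ODE tooling; here SciPy is assumed to be available, and the values $\varepsilon = 0.05$, $x_0 = 0.5$, and the tolerances are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05  # illustrative choice; any 0 < eps << 1 works

# Perturbed logistic growth: x' = eps*(x*(1 - x) + sin(t))
def perturbed(t, x):
    return eps * (x * (1.0 - x) + np.sin(t))

# Averaged equation: y' = eps*y*(1 - y)
def averaged(t, y):
    return eps * y * (1.0 - y)

t_end = 1.0 / eps          # the O(1/eps) time horizon of the theory
t = np.linspace(0.0, t_end, 1000)
x = solve_ivp(perturbed, (0.0, t_end), [0.5], t_eval=t, rtol=1e-9).y[0]
y = solve_ivp(averaged, (0.0, t_end), [0.5], t_eval=t, rtol=1e-9).y[0]

print("max |x - y| on [0, 1/eps]:", np.max(np.abs(x - y)))  # expected O(eps)
```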
Definitions
We assume the vector field $f: \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}^n$ to be of differentiability class $C^r$ with $r \geq 2$ (or even we will only say smooth), which we will denote $f \in C^r(\mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^+; \mathbb{R}^n)$. We expand this time-dependent vector field in a Taylor series (in powers of $\varepsilon$) with remainder $f^{[k+1]}(x, t, \varepsilon)$. We introduce the following notation:
$$f(x, t, \varepsilon) = f^0(x, t) + \varepsilon f^1(x, t) + \dots + \varepsilon^k f^k(x, t) + \varepsilon^{k+1} f^{[k+1]}(x, t, \varepsilon),$$
where $f^j = \frac{f^{(j)}(x, t, 0)}{j!}$ is the $j$-th derivative with $0 \leq j \leq k$. As we are concerned with averaging problems, in general $f^0(x, t)$ is zero, so it turns out that we will be interested in vector fields given by
$$f(x, t, \varepsilon) = \varepsilon f^{[1]}(x, t, \varepsilon) = \varepsilon f^1(x, t) + \varepsilon^2 f^{[2]}(x, t, \varepsilon).$$
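This notation can be made concrete symbolically. A small SymPy sketch follows; the particular vector field is an arbitrary illustration (not one from the text), chosen so that $f^0$ vanishes as assumed above.

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon')

# An illustrative scalar vector field f(x, t, eps) with f(x, t, 0) = 0,
# so that f = eps*f^[1] as in the text.
f = eps * (x * (1 - x) + sp.sin(t)) + eps**2 * x * sp.cos(t)

# Taylor coefficients f^j = (d^j f / d eps^j)|_{eps=0} / j!
for j in range(3):
    fj = sp.diff(f, eps, j).subs(eps, 0) / sp.factorial(j)
    print(f"f^{j} =", sp.simplify(fj))
```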
Besides, we define the following initial value problem to be in the standard form:
$$\dot{x} = \varepsilon f^1(x, t) + \varepsilon^2 f^{[2]}(x, t, \varepsilon), \qquad x(0, \varepsilon) =: x_0 \in D \subseteq \mathbb{R}^n, \quad 0 \leq \varepsilon \ll 1.$$
Theorem: averaging in the periodic case
For every connected and bounded $D \subset \mathbb{R}^n$ and every $\varepsilon_0 > 0$, there exist $L > 0$ and $c > 0$ such that the following holds for all $0 \leq \varepsilon \leq \varepsilon_0$. Suppose the original system (a non-autonomous dynamical system), given by
$$\dot{x} = \varepsilon f^1(x, t) + \varepsilon^2 f^{[2]}(x, t, \varepsilon), \qquad x_0 \in D \subseteq \mathbb{R}^n, \quad 0 \leq \varepsilon \ll 1,$$
has solution $x(t, \varepsilon)$, where $f^1 \in C^r(D \times \mathbb{R}; \mathbb{R}^n)$ is periodic with period $T$ and $f^{[2]} \in C^r(D \times \mathbb{R} \times \mathbb{R}^+; \mathbb{R}^n)$, both with $r \geq 2$ and bounded on bounded sets. Then the solution $y(t, \varepsilon)$ of the averaged system (an autonomous dynamical system)
$$\dot{y} = \varepsilon \frac{1}{T}\int_0^T f^1(y, s)\, ds =: \varepsilon \bar{f}^1(y), \qquad y(0, \varepsilon) = x_0,$$
satisfies
$$\|x(t, \varepsilon) - y(t, \varepsilon)\| < c\varepsilon$$
for $0 \leq t \leq L/\varepsilon$.
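The $c\varepsilon$ bound can be probed numerically on the first example above. A rough sketch (SciPy assumed; the tolerances and the list of $\varepsilon$ values are arbitrary choices): if the bound holds, the ratio of the sup-norm error to $\varepsilon$ should stay roughly constant as $\varepsilon$ shrinks.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Empirical check of the O(eps) error bound on the logistic example.
def sup_error(eps, x0=0.5):
    f = lambda t, x: eps * (x * (1 - x) + np.sin(t))   # original system
    fbar = lambda t, y: eps * y * (1 - y)              # averaged system
    t = np.linspace(0.0, 1.0 / eps, 2000)
    x = solve_ivp(f, (0, t[-1]), [x0], t_eval=t, rtol=1e-10, atol=1e-12).y[0]
    y = solve_ivp(fbar, (0, t[-1]), [x0], t_eval=t, rtol=1e-10, atol=1e-12).y[0]
    return np.max(np.abs(x - y))

for eps in (0.1, 0.05, 0.025):
    print(eps, sup_error(eps) / eps)   # ratio should stay roughly constant
```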
Remarks
There are two approximations in this so-called first approximation estimate: reduction to the average of the vector field and neglect of the $\mathcal{O}(\varepsilon^2)$ terms.
Uniformity with respect to the initial condition $x_0$: if we vary $x_0$, this affects the estimates of $L$ and $c$. The proof and discussion of this can be found in J. Murdock's book.
Reduction of regularity: there is a more general form of this theorem which requires only $f^1$ to be Lipschitz and $f^{[2]}$ continuous. It is a more recent proof and can be found in Sanders et al. The theorem statement presented here follows the proof framework proposed by Krylov and Bogoliubov, which is based on the introduction of a near-identity transformation. The advantage of this method is its extension to more general settings, such as infinite-dimensional systems (partial differential equations or delay differential equations).
J. Hale presents generalizations to almost periodic vector fields.
Strategy of the proof
Krylov and Bogoliubov realized that the slow dynamics of the system determines the leading order of the asymptotic solution. In order to prove this, they proposed a near-identity transformation, which turned out to be a change of coordinates with its own time scale that carries the original system to the averaged one.
Sketch of the proof
Determination of a near-identity transformation: the smooth mapping $y \mapsto U(y, t, \varepsilon) = y + \varepsilon u^{[1]}(y, t, \varepsilon)$, where $u^{[1]}$ is assumed to be regular enough and $T$-periodic. The proposed change of coordinates is given by $x = U(y, t, \varepsilon)$.
Choose an appropriate $u^{[1]}$ solving the homological equation of averaging theory:
$$\frac{\partial u^{[1]}}{\partial t} = f^1(y, t) - \bar{f}^1(y).$$
Change of coordinates carries the original system to
$$\dot{y} = \varepsilon \bar{f}^1(y) + \varepsilon^2 f_*^{[2]}(y, t, \varepsilon).$$
Estimation of error due to truncation and comparison to the original variable.
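As a concrete illustration of step 2, consider the perturbed logistic example from above, where $f^1(y, t) = y(1 - y) + \sin t$ and $\bar{f}^1(y) = y(1 - y)$. The homological equation reads $\partial_t u^{[1]} = \sin t$, and a $2\pi$-periodic choice is
$$u^{[1]}(y, t) = -\cos t, \qquad x = U(y, t, \varepsilon) = y - \varepsilon \cos t.$$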
Non-autonomous class of systems: more examples
Throughout the history of the averaging technique, there is a class of systems that has been extensively studied and provides the meaningful examples we discuss below. This class of systems is given by:
$$\ddot{z} + z = \varepsilon g(z, \dot{z}, t), \qquad z \in \mathbb{R}, \quad z(0) = z_0 ~\text{ and }~ \dot{z}(0) = v_0,$$
where $g$ is smooth. In first-order form, this is a linear system with a small nonlinear perturbation given by $\begin{bmatrix} 0 \\ \varepsilon g(z, \dot{z}, t) \end{bmatrix}$:
:
z
1
˙
=
z
2
,
z
1
(
0
)
=
z
0
z
2
˙
=
−
z
1
+
ε
g
(
z
1
,
z
2
,
t
)
,
z
2
(
0
)
=
v
0
,
{\displaystyle {\begin{aligned}{\dot {z_{1}}}&=z_{2},&z_{1}(0)&=z_{0}\\{\dot {z_{2}}}&=-z_{1}+\varepsilon g(z_{1},z_{2},t),&z_{2}(0)&=v_{0},\end{aligned}}}
differing from the standard form. Hence it is necessary to perform a transformation to put it into the standard form explicitly. We can change coordinates using the variation of constants method. We look at the unperturbed system, i.e. $\varepsilon = 0$, given by
$$\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = A \begin{bmatrix} z_1 \\ z_2 \end{bmatrix},$$
which has the fundamental solution $\Phi(t) = e^{At}$ corresponding to a rotation. Then the time-dependent change of coordinates is $z(t) = \Phi(t) x$, where $x$ is the coordinate with respect to the standard form.
If we take the time derivative on both sides and invert the fundamental matrix, we obtain
$$\dot{x} = \varepsilon e^{-At} \begin{bmatrix} 0 \\ \tilde{g}(x, t) \end{bmatrix} \quad \text{ with } \quad \tilde{g}(x, t) = g\big(\cos(t)x_1 + \sin(t)x_2,\; -\sin(t)x_1 + \cos(t)x_2,\; t\big).$$
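This transformation can be checked numerically. The following sketch (SciPy assumed; the choice $g(z, \dot{z}, t) = (1 - z^2)\dot{z}$ and all numerical parameters are illustrative) integrates the original system and the standard form side by side and verifies $z(t) = \Phi(t)x(t)$ up to integrator tolerance. Since $\Phi(0) = I$, both integrations share the same initial data.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
g = lambda z, zdot, t: (1 - z**2) * zdot   # illustrative perturbation

def original(t, z):                         # z1' = z2, z2' = -z1 + eps*g
    z1, z2 = z
    return [z2, -z1 + eps * g(z1, z2, t)]

def standard(t, x):                         # x' = eps * Phi(t)^{-1} [0, g]^T
    x1, x2 = x
    z1 = np.cos(t) * x1 + np.sin(t) * x2    # z = Phi(t) x
    z2 = -np.sin(t) * x1 + np.cos(t) * x2
    G = eps * g(z1, z2, t)
    return [-np.sin(t) * G, np.cos(t) * G]

t_end = 30.0
t = np.linspace(0.0, t_end, 600)
z = solve_ivp(original, (0, t_end), [0.0, 1.0], t_eval=t, rtol=1e-10).y
x = solve_ivp(standard, (0, t_end), [0.0, 1.0], t_eval=t, rtol=1e-10).y

z1_rec = np.cos(t) * x[0] + np.sin(t) * x[1]
print("max |z1 - (Phi x)_1| =", np.max(np.abs(z[0] - z1_rec)))  # ~ tolerance
```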
Remarks
The same can be done for time-dependent linear parts. Although the fundamental solution may be non-trivial to write down explicitly, the procedure is similar. See Sanders et al. for further details.
If the eigenvalues of $A$ are not all purely imaginary, this is called the hyperbolicity condition. In this case, the perturbed equation may present serious problems even when $g$ is bounded, since the solution grows exponentially fast. However, we may still be able to obtain qualitative knowledge of the asymptotic solution, via Hartman-Grobman type results and more.
Occasionally, polar coordinates may yield standard forms that are simpler to analyze. Consider $z_1 = r\sin(t - \phi)$ and $z_2 = r\cos(t - \phi)$, which determines the initial condition $(r(0), \phi(0))$ and the system
$$\begin{bmatrix} \dot{r} \\ \dot{\phi} \end{bmatrix} = \varepsilon \begin{bmatrix} \cos(t - \phi)\, g(r\sin(t - \phi), r\cos(t - \phi), t) \\ \frac{1}{r}\sin(t - \phi)\, g(r\sin(t - \phi), r\cos(t - \phi), t) \end{bmatrix}.$$
If $g \in C^1$, we may apply averaging so long as a neighborhood of the origin is excluded (since the polar coordinates fail there):
$$\begin{aligned} \bar{f}_1^1(r) &= \frac{1}{2\pi}\int_0^{2\pi} \cos(s - \phi)\, g(r\sin(s - \phi), r\cos(s - \phi), s)\, ds, \\ \bar{f}_2^1(r) &= \frac{1}{2\pi r}\int_0^{2\pi} \sin(s - \phi)\, g(r\sin(s - \phi), r\cos(s - \phi), s)\, ds, \end{aligned}$$
where the averaged system is
$$\dot{\bar{r}} = \varepsilon \bar{f}_1^1(\bar{r}), \qquad \dot{\bar{\phi}} = \varepsilon \bar{f}_2^1(\bar{r}).$$
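These averages are easy to evaluate symbolically for a concrete $g$. A small SymPy sketch follows; the choice of $g$ is the Van der Pol perturbation treated in an example below, and since that $g$ is autonomous, the substitution $u = s - \phi$ removes the phase from the integrals.

```python
import sympy as sp

r, u = sp.symbols('r u', positive=True)

# Illustrative autonomous perturbation (the Van der Pol case below).
g = lambda z, zdot: (1 - z**2) * zdot

z1 = r * sp.sin(u)   # u = s - phi
z2 = r * sp.cos(u)

f1bar = sp.integrate(sp.cos(u) * g(z1, z2), (u, 0, 2 * sp.pi)) / (2 * sp.pi)
f2bar = sp.integrate(sp.sin(u) * g(z1, z2), (u, 0, 2 * sp.pi)) / (2 * sp.pi * r)

print("f1bar =", sp.simplify(f1bar))  # equals (r/2)*(1 - r**2/4)
print("f2bar =", sp.simplify(f2bar))  # equals 0
```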
Example: Misleading averaging results
The method rests on assumptions and restrictions, and these limitations play an important role when we average an original equation that is not in the standard form. The following is a counterexample meant to discourage such hurried averaging:
$$\ddot{z} + 4\varepsilon \cos^2(t)\, \dot{z} + z = 0, \qquad z(0) = 0, \quad \dot{z}(0) = 1,$$
where we put $g(z, \dot{z}, t) = -4\cos^2(t)\dot{z}$ following the previous notation.
This system corresponds to a damped harmonic oscillator where the damping term oscillates between $0$ and $4\varepsilon$. Naively averaging the friction term over one cycle of $2\pi$ yields the equation:
$$\ddot{\bar{z}} + 2\varepsilon \dot{\bar{z}} + \bar{z} = 0, \qquad \bar{z}(0) = 0, \quad \dot{\bar{z}}(0) = 1.$$
The solution is
$$\bar{z}(t) = \frac{1}{(1 - \varepsilon^2)^{1/2}}\, e^{-\varepsilon t} \sin\!\left((1 - \varepsilon^2)^{1/2}\, t\right),$$
whose rate of convergence to the origin is $\varepsilon$. The averaged system obtained from the standard form instead yields:
$$\dot{\bar{r}} = -\tfrac{1}{2}\varepsilon \bar{r}\left(2 + \cos(2\bar{\phi})\right), \quad \bar{r}(0) = 1, \qquad \dot{\bar{\phi}} = \tfrac{1}{2}\varepsilon \sin(2\bar{\phi}), \quad \bar{\phi}(0) = 0,$$
which, in rectangular coordinates, shows explicitly that the rate of convergence to the origin is indeed $\tfrac{3}{2}\varepsilon$, differing from the prediction of the crude averaged system above:
$$y(t) = e^{-\frac{3}{2}\varepsilon t} \sin t.$$
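The discrepancy is easy to observe numerically. A sketch (SciPy assumed; $\varepsilon = 0.02$ and the time horizon $3/\varepsilon$ are arbitrary illustrative choices) compares the true solution against both candidate approximations; the crude-average error should be visibly larger than the standard-form one.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.02
# Original: z'' + 4*eps*cos(t)**2 * z' + z = 0, z(0) = 0, z'(0) = 1.
rhs = lambda t, z: [z[1], -z[0] - 4 * eps * np.cos(t)**2 * z[1]]

t_end = 3.0 / eps
t = np.linspace(0.0, t_end, 8000)
z = solve_ivp(rhs, (0, t_end), [0.0, 1.0], t_eval=t,
              rtol=1e-10, atol=1e-12).y[0]

# Crude average (decay rate eps) vs standard-form average (rate 1.5*eps).
zbar = np.exp(-eps * t) * np.sin(np.sqrt(1 - eps**2) * t) / np.sqrt(1 - eps**2)
y = np.exp(-1.5 * eps * t) * np.sin(t)

print("sup |z - zbar| (crude averaging):    ", np.max(np.abs(z - zbar)))
print("sup |z - y| (standard-form average):", np.max(np.abs(z - y)))
```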
Example: Van der Pol equation
Van der Pol was concerned with obtaining approximate solutions for equations of the type
$$\ddot{z} - \varepsilon(1 - z^2)\dot{z} + z = 0,$$
where $g(z, \dot{z}, t) = (1 - z^2)\dot{z}$ following the previous notation. This system is often called the Van der Pol oscillator. Applying periodic averaging to this nonlinear oscillator provides qualitative knowledge of the phase space without solving the system explicitly.
The averaged system is
$$\dot{\bar{r}} = \tfrac{1}{2}\varepsilon \bar{r}\left(1 - \tfrac{1}{4}\bar{r}^2\right), \qquad \dot{\bar{\phi}} = 0,$$
and we can analyze the fixed points and their stability. There is an unstable fixed point at the origin and a stable limit cycle represented by $\bar{r} = 2$.
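This prediction can be checked by direct simulation. A minimal sketch (SciPy assumed; $\varepsilon = 0.1$, the initial point, and the time horizon are illustrative choices) integrates the oscillator in first-order form and watches the amplitude approach the predicted value $2$.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1
# Van der Pol in first-order form: z1' = z2, z2' = -z1 + eps*(1 - z1**2)*z2.
rhs = lambda t, z: [z[1], -z[0] + eps * (1 - z[0]**2) * z[1]]

t_end = 20.0 / eps
t = np.linspace(0.0, t_end, 5000)
sol = solve_ivp(rhs, (0, t_end), [0.1, 0.0], t_eval=t, rtol=1e-9, atol=1e-11)

r = np.sqrt(sol.y[0]**2 + sol.y[1]**2)
print("initial amplitude:", r[0], " late-time amplitude:", r[-500:].mean())
# The amplitude grows from 0.1 toward the predicted limit cycle at r = 2.
```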
The existence of such a stable limit cycle can be stated as a theorem.
Theorem (Existence of a periodic orbit): If $p_0$ is a hyperbolic fixed point of
$$\dot{y} = \varepsilon \bar{f}^1(y),$$
then there exists $\varepsilon_0 > 0$ such that for all $0 < \varepsilon < \varepsilon_0$,
$$\dot{x} = \varepsilon f^1(x, t) + \varepsilon^2 f^{[2]}(x, t, \varepsilon)$$
has a unique hyperbolic periodic orbit $\gamma_\varepsilon(t) = p_0 + \mathcal{O}(\varepsilon)$ of the same stability type as $p_0$.
The proof can be found in Guckenheimer and Holmes, in Sanders et al., and for the angle case in Chicone.
Example: Restricting the time interval
The averaging theorem assumes the existence of a connected and bounded region $D \subset \mathbb{R}^n$, which affects the time interval $L$ of validity of the result. The following example points this out. Consider
$$\ddot{z} + z = 8\varepsilon \cos(t)\dot{z}^2, \qquad z(0) = 0, \quad \dot{z}(0) = 1,$$
where $g(z, \dot{z}, t) = 8\dot{z}^2\cos(t)$. The averaged system consists of
$$\dot{\bar{r}} = 3\varepsilon \bar{r}^2 \cos(\bar{\phi}), \quad \bar{r}(0) = 1, \qquad \dot{\bar{\phi}} = -\varepsilon \bar{r}\sin(\bar{\phi}), \quad \bar{\phi}(0) = 0,$$
which under this initial condition indicates that the original solution behaves like
$$z(t) = \frac{\sin(t)}{1 - 3\varepsilon t} + \mathcal{O}(\varepsilon),$$
where it holds on a bounded region over $0 \leq \varepsilon t \leq L < \tfrac{1}{3}$.
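The degradation of the approximation as $\varepsilon t$ approaches $\tfrac{1}{3}$ can be observed directly. A sketch (SciPy assumed; $\varepsilon = 0.01$ and the list of endpoints are arbitrary illustrative choices) compares the true solution to the averaged prediction on successively longer windows; the sup error should grow as the endpoint nears $\tfrac{1}{3}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01
# z'' + z = 8*eps*cos(t)*z'**2, z(0) = 0, z'(0) = 1.
rhs = lambda t, z: [z[1], -z[0] + 8 * eps * np.cos(t) * z[1]**2]

for frac in (0.10, 0.20, 0.30):        # value of eps*t at the end of the run
    t_end = frac / eps
    t = np.linspace(0.0, t_end, 4000)
    z = solve_ivp(rhs, (0, t_end), [0.0, 1.0], t_eval=t,
                  rtol=1e-10, atol=1e-12).y[0]
    approx = np.sin(t) / (1 - 3 * eps * t)
    print(f"eps*t <= {frac:.2f}: sup error = {np.max(np.abs(z - approx)):.4f}")
```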
Damped pendulum
Consider a damped pendulum whose point of suspension is vibrated vertically by a small amplitude, high frequency signal (this is usually known as dithering). The equation of motion for such a pendulum is given by
$$m\left(l\ddot{\theta} - ak\omega^2 \sin(\omega t)\sin\theta\right) = -mg\sin\theta - k\left(l\dot{\theta} + a\omega\cos(\omega t)\sin\theta\right),$$
where $a\sin(\omega t)$ describes the motion of the suspension point, $k$ describes the damping of the pendulum, and $\theta$ is the angle made by the pendulum with the vertical.
The phase space form of this equation is given by
$$\begin{aligned} \dot{t} &= 1, \\ \dot{\theta} &= p, \\ \dot{p} &= \frac{1}{ml}\left(mak\omega^2 \sin(\omega t)\sin\theta - mg\sin\theta - k(lp + a\omega\cos(\omega t)\sin\theta)\right), \end{aligned}$$
where we have introduced the variable $p$ and written the system as an autonomous, first-order system in $(t, \theta, p)$-space.
Suppose that the angular frequency of the vertical vibrations, $\omega$, is much greater than the natural frequency of the pendulum, $\sqrt{g/l}$. Suppose also that the amplitude of the vertical vibrations, $a$, is much less than the length $l$ of the pendulum. The pendulum's trajectory in phase space will trace out a spiral around a curve $C$, moving along $C$ at the slow rate $\sqrt{g/l}$ but moving around it at the fast rate $\omega$. The radius of the spiral around $C$ will be small and proportional to $a$. The average behaviour of the trajectory, over a timescale much larger than $2\pi/\omega$, will be to follow the curve $C$.
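A direct simulation shows this two-time-scale structure. The sketch below (SciPy assumed) integrates the phase-space equations above; all parameter values are arbitrary illustrative choices satisfying $\omega \gg \sqrt{g/l}$ and $a \ll l$, and the small step size is needed to resolve the fast dither.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: omega >> sqrt(g/l) and a << l, as assumed above.
g_, l, m, k = 9.81, 1.0, 1.0, 0.5
a, omega = 0.01, 200.0

def rhs(t, u):
    theta, p = u
    pdot = (m * a * k * omega**2 * np.sin(omega * t) * np.sin(theta)
            - m * g_ * np.sin(theta)
            - k * (l * p + a * omega * np.cos(omega * t) * np.sin(theta))) / (m * l)
    return [p, pdot]

t_end = 10.0
sol = solve_ivp(rhs, (0.0, t_end), [0.5, 0.0], rtol=1e-8, atol=1e-10,
                max_step=2 * np.pi / omega / 20)   # resolve the fast dither

# theta(t) follows the slow damped swing (the curve C), with a small
# superimposed ripple of size O(a) at the fast frequency omega.
print("theta range:", sol.y[0].min(), sol.y[0].max())
```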
Extension of error estimates
So far, the averaging technique for initial value problems has been treated with error estimates valid on a time scale of order $1/\varepsilon$. However, there are circumstances where the estimates can be extended to longer times, in some cases even to all time. Below we deal with a system containing an asymptotically stable fixed point.
Theorem (Eckhaus / Sanchez-Palencia). Consider the initial value problem
$$\dot{x} = \varepsilon f^1(x, t), \qquad x_0 \in D \subseteq \mathbb{R}^n, \quad 0 \leq \varepsilon \ll 1.$$
Suppose that the average
$$\dot{y} = \varepsilon \lim_{T \to \infty} \frac{1}{T}\int_0^T f^1(y, s)\, ds =: \varepsilon \bar{f}^1(y), \qquad y(0, \varepsilon) = x_0$$
exists and that $y = 0$ is an asymptotically stable fixed point in the linear approximation. Moreover, suppose $\bar{f}^1$ is continuously differentiable with respect to $y$ in $D$ and has a domain of attraction $D^0 \subset D$. Then for any compact $K \subset D^0$ and all $x_0 \in K$,
$$\|x(t) - y(t)\| = \mathcal{O}(\delta(\varepsilon)), \qquad 0 \leq t < \infty,$$
with $\delta(\varepsilon) = o(1)$ in the general case and $\mathcal{O}(\varepsilon)$ in the periodic case.
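This is the sense in which the approximation in the first example is valid for all time: the averaged logistic equation has the asymptotically stable fixed point $y = 1$ (which plays the role of $y = 0$ in the theorem after a shift of coordinates), and the forcing is periodic. A numerical sketch (SciPy assumed; $\varepsilon = 0.05$, the initial point, and the horizon $20/\varepsilon$ are illustrative choices) shows the error staying $\mathcal{O}(\varepsilon)$ far beyond the $1/\varepsilon$ window.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
f = lambda t, x: eps * (x * (1 - x) + np.sin(t))   # first example, original
fbar = lambda t, y: eps * y * (1 - y)              # averaged; y = 1 attracts

t_end = 20.0 / eps                                 # well beyond the 1/eps window
t = np.linspace(0.0, t_end, 8000)
x = solve_ivp(f, (0, t_end), [0.5], t_eval=t, rtol=1e-10, atol=1e-12).y[0]
y = solve_ivp(fbar, (0, t_end), [0.5], t_eval=t, rtol=1e-10, atol=1e-12).y[0]

print("sup error over [0, 20/eps]:", np.max(np.abs(x - y)))   # remains O(eps)
```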