- Source: Autonomous system (mathematics)
In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, such systems are also called time-invariant systems.
Many laws in physics, where the independent variable is usually assumed to be time, are expressed as autonomous systems because it is assumed the laws of nature which hold now are identical to those for any point in the past or future.
Definition
An autonomous system is a system of ordinary differential equations of the form
$$\frac{d}{dt}x(t) = f(x(t))$$
where x takes values in n-dimensional Euclidean space; t is often interpreted as time.
It is distinguished from systems of differential equations of the form
$$\frac{d}{dt}x(t) = g(x(t), t)$$
in which the law governing the evolution of the system does not depend solely on the system's current state but also on the parameter $t$, again often interpreted as time; such systems are by definition not autonomous.
Properties
Solutions are invariant under horizontal translations:
Let $x_{1}(t)$ be a unique solution of the initial value problem for an autonomous system
$$\frac{d}{dt}x(t) = f(x(t)), \quad x(0) = x_{0}.$$
Then $x_{2}(t) = x_{1}(t - t_{0})$
solves
$$\frac{d}{dt}x(t) = f(x(t)), \quad x(t_{0}) = x_{0}.$$
Denoting $s = t - t_{0}$ gives $x_{1}(s) = x_{2}(t)$ and $ds = dt$, thus
$$\frac{d}{dt}x_{2}(t) = \frac{d}{dt}x_{1}(t - t_{0}) = \frac{d}{ds}x_{1}(s) = f(x_{1}(s)) = f(x_{2}(t)).$$
For the initial condition, the verification is trivial,
$$x_{2}(t_{0}) = x_{1}(t_{0} - t_{0}) = x_{1}(0) = x_{0}.$$
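This translation property can be checked numerically. The sketch below (Python with SciPy; the logistic right-hand side f(x) = (2 − x)x and the shift t0 = 3 are illustrative choices, not from the source) integrates the same autonomous equation from two starting times and compares the shifted solutions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # autonomous right-hand side: depends on x only, not on t
    return (2 - x) * x

t0, x0 = 3.0, 0.5

# x1 solves x' = f(x) with x(0) = x0
sol1 = solve_ivp(f, (0, 5), [x0], dense_output=True, rtol=1e-10, atol=1e-12)
# x2 solves x' = f(x) with x(t0) = x0
sol2 = solve_ivp(f, (t0, t0 + 5), [x0], dense_output=True, rtol=1e-10, atol=1e-12)

# the shifted solution x2(t) = x1(t - t0) agrees with the direct integration
for t in (3.5, 4.0, 6.0):
    assert abs(sol2.sol(t)[0] - sol1.sol(t - t0)[0]) < 1e-7
```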
Example
The equation
$$y' = (2 - y)y$$
is autonomous, since the independent variable ($x$) does not explicitly appear in the equation.
To plot the slope field and isoclines for this equation, one can use numerical software such as GNU Octave/MATLAB.
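The Octave/MATLAB code referred to above is not reproduced here; as a stand-in, a minimal Python/Matplotlib sketch of the same slope field (grid ranges and styling are arbitrary choices):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; omit for interactive use
import matplotlib.pyplot as plt

# grid over the (x, y) plane; ranges are arbitrary
x, y = np.meshgrid(np.linspace(0, 3, 15), np.linspace(-1, 3, 15))
slope = (2 - y) * y  # y' = (2 - y)y: independent of x

# draw unit direction vectors (1, y') for the slope field
norm = np.sqrt(1 + slope**2)
plt.quiver(x, y, 1 / norm, slope / norm, angles="xy")
plt.axhline(0, color="red")  # zero-slope isoclines: the equilibria y = 0 and y = 2
plt.axhline(2, color="red")
plt.xlabel("x")
plt.ylabel("y")
plt.savefig("slope_field.png")
```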
One can observe from the plot that the function $(2 - y)y$ is $x$-invariant, and so is the shape of the solution, i.e. $y(x) = y(x - x_{0})$ for any shift $x_{0}$.
Solving the equation symbolically in MATLAB obtains two equilibrium solutions, $y = 0$ and $y = 2$, and a third solution involving an unknown constant $C_{3}$:
$$y = \frac{-2}{e^{C_{3} - 2x} - 1}.$$
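This solution family can be verified by substitution; a SymPy sketch (SymPy used here as an accessible stand-in for MATLAB's symbolic solver):

```python
import sympy as sp

x, C3 = sp.symbols("x C3")

ode_rhs = lambda u: (2 - u) * u  # right-hand side of y' = (2 - y)y

# the two equilibria and the one-parameter family quoted in the text
candidates = [sp.Integer(0), sp.Integer(2), -2 / (sp.exp(C3 - 2 * x) - 1)]
for expr in candidates:
    # substitute each candidate and check the residual vanishes identically
    residual = sp.simplify(expr.diff(x) - ode_rhs(expr))
    assert residual == 0
```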
Picking specific values for the initial condition, one can plot several solutions.
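A minimal numerical version of that experiment, assuming SciPy (the initial values chosen are arbitrary): integrating from several positive initial conditions shows every trajectory approaching the stable equilibrium y = 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    return (2 - y) * y

# integrate from several positive initial conditions (values are arbitrary)
finals = {}
for y0 in (0.1, 1.0, 3.0):
    sol = solve_ivp(f, (0, 10), [y0], rtol=1e-9, atol=1e-12)
    finals[y0] = sol.y[0, -1]  # sol.t and sol.y[0] could also be plotted

# every positive initial condition is attracted to the equilibrium y = 2
for yf in finals.values():
    assert abs(yf - 2) < 1e-6
```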
Qualitative analysis
Autonomous systems can be analyzed qualitatively using the phase space; in the one-variable case, this is the phase line.
Solution techniques
The following techniques apply to one-dimensional autonomous differential equations. Any one-dimensional equation of order $n$ is equivalent to an $n$-dimensional first-order system (as described in reduction to a first-order system), but not necessarily vice versa.
First order
The first-order autonomous equation
$$\frac{dx}{dt} = f(x)$$
is separable, so it can be solved by rearranging it into the integral form
$$t + C = \int \frac{dx}{f(x)}$$
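A quick numerical sanity check of this integral form, with the illustrative choice f(x) = x (so the exact solution through x(0) = 1 is x = e^t):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x  # illustrative choice of f

def t_of_x(x):
    # t + C = ∫ dx / f(x); fixing C by integrating from x(0) = 1
    val, _ = quad(lambda s: 1 / f(s), 1, x)
    return val

# for f(x) = x the integral is ln(x), i.e. x(t) = e^t
for x in (1.5, 2.0, 5.0):
    assert abs(t_of_x(x) - np.log(x)) < 1e-8
```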
Second order
The second-order autonomous equation
$$\frac{d^{2}x}{dt^{2}} = f(x, x')$$
is more difficult, but it can be solved by introducing the new variable $v = \frac{dx}{dt}$ and expressing the second derivative of $x$ via the chain rule as
$$\frac{d^{2}x}{dt^{2}} = \frac{dv}{dt} = \frac{dx}{dt}\frac{dv}{dx} = v\frac{dv}{dx}$$
so that the original equation becomes
$$v\frac{dv}{dx} = f(x, v)$$
which is a first-order equation containing no reference to the independent variable $t$. Solving provides $v$ as a function of $x$. Then, recalling the definition of $v$:
$$\frac{dx}{dt} = v(x) \quad \Rightarrow \quad t + C = \int \frac{dx}{v(x)}$$
which is an implicit solution.
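As a concrete instance (chosen for illustration, not from the source), the harmonic oscillator x″ = −x has f(x, v) = −x, so v dv/dx = −x integrates to v² = C − x²; SymPy can confirm the final quadrature is consistent:

```python
import sympy as sp

x = sp.symbols("x")

# x'' = -x means f(x, v) = -x, so v dv/dx = -x integrates to
# v^2 = C - x^2; take C = 1 for concreteness
v = sp.sqrt(1 - x**2)

# the implicit solution: t + C2 = ∫ dx / v(x)
t_expr = sp.integrate(1 / v, x)

# consistency check: dt/dx really equals 1 / v(x)
assert sp.simplify(sp.diff(t_expr, x) - 1 / v) == 0
# here the quadrature is asin(x), inverting to x = sin(t + C2)
```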
Special case: x″ = f(x)
The special case where $f$ is independent of $x'$,
$$\frac{d^{2}x}{dt^{2}} = f(x),$$
benefits from separate treatment. These types of equations are very common in classical mechanics because they are always Hamiltonian systems.
The idea is to make use of the identity
$$\frac{dx}{dt} = \left(\frac{dt}{dx}\right)^{-1}$$
which follows from the chain rule, barring any issues due to division by zero.
By inverting both sides of a first-order autonomous system, one can immediately integrate with respect to $x$:
$$\frac{dx}{dt} = f(x) \quad \Rightarrow \quad \frac{dt}{dx} = \frac{1}{f(x)} \quad \Rightarrow \quad t + C = \int \frac{dx}{f(x)}$$
which is another way to view the separation of variables technique. The second derivative must be expressed as a derivative with respect to $x$ instead of $t$:
$$\begin{aligned}\frac{d^{2}x}{dt^{2}} &= \frac{d}{dt}\left(\frac{dx}{dt}\right) = \frac{d}{dx}\left(\frac{dx}{dt}\right)\frac{dx}{dt} \\ &= \frac{d}{dx}\left(\left(\frac{dt}{dx}\right)^{-1}\right)\left(\frac{dt}{dx}\right)^{-1} \\ &= -\left(\frac{dt}{dx}\right)^{-2}\frac{d^{2}t}{dx^{2}}\left(\frac{dt}{dx}\right)^{-1} = -\left(\frac{dt}{dx}\right)^{-3}\frac{d^{2}t}{dx^{2}} \\ &= \frac{d}{dx}\left(\frac{1}{2}\left(\frac{dt}{dx}\right)^{-2}\right)\end{aligned}$$
To reemphasize: what has been accomplished is that the second derivative with respect to $t$ has been expressed as a derivative with respect to $x$. The original second-order equation can now be integrated:
$$\begin{aligned}\frac{d^{2}x}{dt^{2}} &= f(x) \\ \frac{d}{dx}\left(\frac{1}{2}\left(\frac{dt}{dx}\right)^{-2}\right) &= f(x) \\ \left(\frac{dt}{dx}\right)^{-2} &= 2\int f(x)\,dx + C_{1} \\ \frac{dt}{dx} &= \pm\frac{1}{\sqrt{2\int f(x)\,dx + C_{1}}} \\ t + C_{2} &= \pm\int \frac{dx}{\sqrt{2\int f(x)\,dx + C_{1}}}\end{aligned}$$
This is an implicit solution. The greatest potential problem is inability to simplify the integrals, which implies difficulty or impossibility in evaluating the integration constants.
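For the illustrative choice f(x) = −x (simple harmonic motion), 2∫f(x) dx + C1 = C1 − x²; with C1 = 1 the quadrature reduces to an arcsine, which the following sketch checks numerically:

```python
import numpy as np
from scipy.integrate import quad

# f(x) = -x gives 2 ∫ f dx + C1 = C1 - x^2; take C1 = 1
def t_of_x(x):
    # t + C2 = +∫ dx / sqrt(2 ∫ f dx + C1), integrating from x = 0 (C2 = 0)
    val, _ = quad(lambda s: 1 / np.sqrt(1 - s**2), 0, x)
    return val

# the quadrature matches arcsin, so the motion inverts to x(t) = sin t
for x in (0.1, 0.5, 0.9):
    assert abs(t_of_x(x) - np.arcsin(x)) < 1e-8
```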
Special case: x″ = x′ⁿ f(x)
Using the above approach, the technique can extend to the more general equation
$$\frac{d^{2}x}{dt^{2}} = \left(\frac{dx}{dt}\right)^{n} f(x)$$
where $n$ is some parameter not equal to two. This will work since the second derivative can be written in a form involving a power of $x'$. Rewriting the second derivative, rearranging, and expressing the left side as a derivative:
$$\begin{aligned}&-\left(\frac{dt}{dx}\right)^{-3}\frac{d^{2}t}{dx^{2}} = \left(\frac{dt}{dx}\right)^{-n} f(x) \\ &-\left(\frac{dt}{dx}\right)^{n-3}\frac{d^{2}t}{dx^{2}} = f(x) \\ &\frac{d}{dx}\left(\frac{1}{2-n}\left(\frac{dt}{dx}\right)^{n-2}\right) = f(x) \\ &\left(\frac{dt}{dx}\right)^{n-2} = (2-n)\int f(x)\,dx + C_{1} \\ &t + C_{2} = \int\left((2-n)\int f(x)\,dx + C_{1}\right)^{\frac{1}{n-2}} dx\end{aligned}$$
The right side will carry $\pm$ if $n$ is even. The treatment must be different if $n = 2$:
$$\begin{aligned}-\left(\frac{dt}{dx}\right)^{-1}\frac{d^{2}t}{dx^{2}} &= f(x) \\ -\frac{d}{dx}\left(\ln\left(\frac{dt}{dx}\right)\right) &= f(x) \\ \frac{dt}{dx} &= C_{1}e^{-\int f(x)\,dx} \\ t + C_{2} &= C_{1}\int e^{-\int f(x)\,dx}\,dx\end{aligned}$$
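A quick check of the n = 2 formula with the illustrative choice f(x) = 1, i.e. x″ = (x′)²: the formula gives dt/dx = C1 e^(−x), and taking C1 = −1, C2 = 0 yields t = e^(−x), i.e. x(t) = −ln t:

```python
import sympy as sp

t = sp.symbols("t", positive=True)

# With f(x) = 1 the equation is x'' = (x')^2, and the formula gives
# t + C2 = C1 ∫ e^{-x} dx; choosing C1 = -1, C2 = 0 yields t = e^{-x}
x = -sp.log(t)

# verify the candidate solution satisfies x'' = (x')^2
assert sp.simplify(x.diff(t, 2) - x.diff(t) ** 2) == 0
```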
Higher orders
There is no analogous method for solving third- or higher-order autonomous equations. Such equations can only be solved exactly if they happen to have some other simplifying property, for instance linearity or dependence of the right side of the equation on the dependent variable only (i.e., not its derivatives). This should not be surprising, considering that nonlinear autonomous systems in three dimensions can produce truly chaotic behavior such as the Lorenz attractor and the Rössler attractor.
Likewise, general non-autonomous equations of second order are unsolvable explicitly, since these can also be chaotic, as in a periodically forced pendulum.
Multivariate case
In $\mathbf{x}'(t) = A\mathbf{x}(t)$, where $\mathbf{x}(t)$ is an $n$-dimensional column vector dependent on $t$.
The solution is
$$\mathbf{x}(t) = e^{At}\mathbf{c},$$
where $\mathbf{c}$ is an $n \times 1$ constant vector.
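A sketch of this solution with SciPy's matrix exponential (the 2×2 rotation generator A and vector c are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

# a 2x2 example system x'(t) = A x(t); A and c are arbitrary choices
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
c = np.array([1.0, 0.0])

def x(t):
    return expm(A * t) @ c  # x(t) = e^{At} c

# check x'(t) ≈ A x(t) with a central finite difference
t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-6)
```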
Finite durations
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that, from its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz at the ending time, they are not covered by the uniqueness theorem for Lipschitz differential equations.
As an example, the equation
$$y' = -\operatorname{sgn}(y)\sqrt{|y|}, \quad y(0) = 1$$
admits the finite-duration solution
$$y(x) = \frac{1}{4}\left(1 - \frac{x}{2} + \left|1 - \frac{x}{2}\right|\right)^{2}.$$
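The claimed properties of this solution (it starts at 1, satisfies the equation, and is identically zero from the ending time x = 2 onward) can be checked directly:

```python
import numpy as np

def y(x):
    # the closed-form finite-duration solution from the text
    u = 1 - x / 2
    return 0.25 * (u + abs(u)) ** 2

# initial condition, and extinction from the ending time x = 2 onward
assert y(0) == 1
assert y(2) == 0 and y(5) == 0

# away from x = 2, y satisfies y' = -sgn(y) sqrt(|y|)
h = 1e-7
for x in (0.5, 1.0, 1.5):
    deriv = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(deriv + np.sign(y(x)) * np.sqrt(abs(y(x)))) < 1e-6
```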
See also
Non-autonomous system (mathematics)