The Lyapunov equation, named after the Russian mathematician Aleksandr Lyapunov, is a matrix equation used in the stability analysis of linear dynamical systems.
In particular, the discrete-time Lyapunov equation (also known as the Stein equation) for $X$ is

$$AXA^{H}-X+Q=0,$$

where $Q$ is a Hermitian matrix and $A^{H}$ is the conjugate transpose of $A$, while the continuous-time Lyapunov equation is

$$AX+XA^{H}+Q=0.$$
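Both conventions can be checked numerically with SciPy's built-in solvers. The following is a minimal sketch; the matrices `A_c`, `A_d`, and `Q` are arbitrary illustrative choices, and note that SciPy's continuous-time solver uses the sign convention $AX+XA^{H}=Q$, so $-Q$ must be passed to match the equation above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

# Illustrative matrices: A_c is Hurwitz stable, A_d is Schur stable.
A_c = np.array([[-2.0, 1.0], [0.0, -3.0]])   # eigenvalues -2, -3
A_d = np.array([[0.5, 0.1], [0.0, 0.4]])     # eigenvalues 0.5, 0.4
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves A X + X A^H = Q,
# so pass -Q to match the convention A X + X A^H + Q = 0 used here.
X_c = solve_continuous_lyapunov(A_c, -Q)
print(np.allclose(A_c @ X_c + X_c @ A_c.conj().T + Q, 0))  # True

# solve_discrete_lyapunov(a, q) solves A X A^H - X + Q = 0 directly.
X_d = solve_discrete_lyapunov(A_d, Q)
print(np.allclose(A_d @ X_d @ A_d.conj().T - X_d + Q, 0))  # True
```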
Application to stability
In the following theorems $A,P,Q\in \mathbb{R}^{n\times n}$, and $P$ and $Q$ are symmetric. The notation $P>0$ means that the matrix $P$ is positive definite.
Theorem (continuous time version). Given any $Q>0$, there exists a unique $P>0$ satisfying

$$A^{T}P+PA+Q=0$$

if and only if the linear system $\dot{x}=Ax$ is globally asymptotically stable. The quadratic function $V(x)=x^{T}Px$ is a Lyapunov function that can be used to verify stability.
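The theorem can be illustrated numerically: for a stable example matrix (an arbitrary illustrative choice), the solution $P$ is symmetric positive definite, and $\dot{V}(x)=x^{T}(A^{T}P+PA)x=-x^{T}Q x<0$ for $x\neq 0$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1, -2: Hurwitz stable
Q = np.eye(2)

# Solve A^T P + P A + Q = 0 (SciPy's convention needs -Q on the right-hand side).
P = solve_continuous_lyapunov(A.T, -Q)

print(np.allclose(P, P.T))                # P is symmetric
print(np.all(np.linalg.eigvalsh(P) > 0))  # P is positive definite

# Along x_dot = A x, d/dt V(x) = x^T (A^T P + P A) x = -x^T Q x < 0 for x != 0.
x = np.array([1.0, -1.0])
print(x @ (A.T @ P + P @ A) @ x < 0)      # True
```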
Theorem (discrete time version). Given any $Q>0$, there exists a unique $P>0$ satisfying

$$A^{T}PA-P+Q=0$$

if and only if the linear system $x_{t+1}=Ax_{t}$ is globally asymptotically stable. As before, $x^{T}Px$ is a Lyapunov function.
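A numerical check of the discrete-time version, with an illustrative Schur-stable matrix: $P$ is positive definite, and $V(x)=x^{T}Px$ strictly decreases along trajectories of $x_{t+1}=Ax_{t}$.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.0, 1.0], [-0.12, 0.7]])  # illustrative; eigenvalues 0.3, 0.4
Q = np.eye(2)

# solve_discrete_lyapunov(A.T, Q) solves A^T P A - P + Q = 0 for real A.
P = solve_discrete_lyapunov(A.T, Q)

print(np.all(np.abs(np.linalg.eigvals(A)) < 1))  # A is Schur stable
print(np.all(np.linalg.eigvalsh(P) > 0))         # P > 0, as the theorem asserts

# V(x) = x^T P x decreases along x_{t+1} = A x_t, since the decrement is -x^T Q x.
x = np.array([2.0, 1.0])
print((A @ x) @ P @ (A @ x) < x @ P @ x)         # True
```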
Computational aspects of solution
The Lyapunov equation is linear in the unknown $X$: vectorizing the $n\times n$ matrix $X$ yields a linear system in $n^{2}$ unknowns, which standard matrix factorization methods solve in $\mathcal{O}((n^{2})^{3})=\mathcal{O}(n^{6})$ time.
However, specialized algorithms are available which yield solutions much quicker, owing to the specific structure of the Lyapunov equation, typically in $\mathcal{O}(n^{3})$ time. For the discrete case, the Schur method of Kitagawa is often used. For the continuous Lyapunov equation, the Bartels–Stewart algorithm can be used.
Analytic solution
Defining the vectorization operator $\operatorname{vec}(A)$ as stacking the columns of a matrix $A$, and $A\otimes B$ as the Kronecker product of $A$ and $B$, the continuous-time and discrete-time Lyapunov equations can be expressed as solutions of an ordinary linear system. Furthermore, if the matrix $A$ is "stable", the solution can also be expressed as an integral (continuous-time case) or as an infinite sum (discrete-time case).
Discrete time

Using the result that $\operatorname{vec}(ABC)=(C^{T}\otimes A)\operatorname{vec}(B)$, one has

$$(I_{n^{2}}-\bar{A}\otimes A)\operatorname{vec}(X)=\operatorname{vec}(Q),$$

where $I_{n^{2}}$ is a conformable identity matrix and $\bar{A}$ is the element-wise complex conjugate of $A$. One may then solve for $\operatorname{vec}(X)$ by inverting or solving the linear equations. To get $X$, one simply reshapes $\operatorname{vec}(X)$ appropriately.
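The vectorized solve translates directly into NumPy; in this sketch the triangular matrix `A` is an arbitrary illustrative choice, and column-stacking corresponds to Fortran (`'F'`) memory order.

```python
import numpy as np

n = 3
A = np.array([[0.5, 0.2, 0.1],
              [0.0, 0.4, 0.3],
              [0.0, 0.0, 0.3]])  # illustrative; eigenvalues 0.5, 0.4, 0.3
Q = np.eye(n)

# vec() stacks columns, which corresponds to 'F' (Fortran) order in NumPy.
lhs = np.eye(n * n) - np.kron(A.conj(), A)
vecX = np.linalg.solve(lhs, Q.flatten(order="F"))
X = vecX.reshape((n, n), order="F")   # reshape vec(X) back into a matrix

print(np.allclose(A @ X @ A.conj().T - X + Q, 0))  # True
```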
Moreover, if $A$ is stable (in the sense of Schur stability, i.e., having eigenvalues with magnitude less than 1), the solution $X$ can also be written as

$$X=\sum_{k=0}^{\infty}A^{k}Q(A^{H})^{k}.$$
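Since the spectral radius of the illustrative matrix below is less than 1, the series converges geometrically; truncating it reproduces the exact solution to machine precision (a sketch with arbitrary example matrices).

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2], [0.0, 0.4]])  # Schur stable: eigenvalues 0.5, 0.4
Q = np.eye(2)

# Truncate the series X = sum_k A^k Q (A^H)^k; convergence follows from |eig(A)| < 1.
X_sum = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(200):
    X_sum += Ak @ Q @ Ak.conj().T
    Ak = Ak @ A

print(np.allclose(X_sum, solve_discrete_lyapunov(A, Q)))  # True
```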
For comparison, consider the one-dimensional case, where this just says that the solution of $(1-a^{2})x=q$ is

$$x=\frac{q}{1-a^{2}}=\sum_{k=0}^{\infty}qa^{2k}.$$
Continuous time

Using again the Kronecker product notation and the vectorization operator, one has the matrix equation

$$(I_{n}\otimes A+\bar{A}\otimes I_{n})\operatorname{vec}X=-\operatorname{vec}Q,$$

where $\bar{A}$ denotes the matrix obtained by complex conjugating the entries of $A$.
Similar to the discrete-time case, if $A$ is stable (in the sense of Hurwitz stability, i.e., having eigenvalues with negative real parts), the solution $X$ can also be written as

$$X=\int_{0}^{\infty}e^{A\tau}Qe^{A^{H}\tau}\,d\tau,$$
which holds because

$$\begin{aligned}AX+XA^{H}&=\int_{0}^{\infty}\left(Ae^{A\tau}Qe^{A^{H}\tau}+e^{A\tau}Qe^{A^{H}\tau}A^{H}\right)d\tau\\&=\int_{0}^{\infty}\frac{d}{d\tau}\left(e^{A\tau}Qe^{A^{H}\tau}\right)d\tau\\&=e^{A\tau}Qe^{A^{H}\tau}\Big|_{0}^{\infty}\\&=-Q.\end{aligned}$$
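The integral representation can be approximated by quadrature: because $\|e^{A\tau}\|$ decays exponentially for stable $A$, truncating the integral at a moderate horizon and applying the trapezoidal rule recovers the exact solution to good accuracy. The matrix, horizon, and step count below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov
from scipy.integrate import trapezoid

A = np.array([[-1.0, 0.0], [1.0, -2.0]])  # illustrative Hurwitz-stable matrix
Q = np.eye(2)

# Trapezoidal approximation of the integral on [0, T]; the tail beyond T is
# negligible because the integrand decays like e^{-2 tau} here.
T, steps = 20.0, 4000
taus = np.linspace(0.0, T, steps + 1)
vals = np.array([expm(A * t) @ Q @ expm(A.conj().T * t) for t in taus])
X_int = trapezoid(vals, taus, axis=0)

X_exact = solve_continuous_lyapunov(A, -Q)  # solves A X + X A^H + Q = 0
print(np.allclose(X_int, X_exact, atol=1e-4))  # True
```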
For comparison, consider the one-dimensional case, where this just says that the solution of $2ax=-q$ is

$$x=\frac{-q}{2a}=\int_{0}^{\infty}qe^{2a\tau}\,d\tau.$$
Relationship Between Discrete and Continuous Lyapunov Equations
We start with the continuous-time linear dynamics

$$\dot{\mathbf{x}}=\mathbf{A}\mathbf{x}$$

and discretize them using the forward-difference approximation

$$\dot{\mathbf{x}}\approx\frac{\mathbf{x}_{t+1}-\mathbf{x}_{t}}{\delta},$$

where $\delta>0$ is a small forward displacement in time. Substituting the second equation into the first and rearranging terms, we get a discrete-time equation for $\mathbf{x}_{t+1}$:
$$\mathbf{x}_{t+1}=\mathbf{x}_{t}+\delta\mathbf{A}\mathbf{x}_{t}=(\mathbf{I}+\delta\mathbf{A})\mathbf{x}_{t}=\mathbf{B}\mathbf{x}_{t},$$

where we have defined $\mathbf{B}\equiv\mathbf{I}+\delta\mathbf{A}$. Now we can use the discrete-time Lyapunov equation for $\mathbf{B}$:
$$\mathbf{B}^{T}\mathbf{M}\mathbf{B}-\mathbf{M}=-\delta\mathbf{Q}.$$

Plugging in our definition for $\mathbf{B}$, we get:
$$(\mathbf{I}+\delta\mathbf{A})^{T}\mathbf{M}(\mathbf{I}+\delta\mathbf{A})-\mathbf{M}=-\delta\mathbf{Q}.$$
Expanding this expression out yields:
$$(\mathbf{M}+\delta\mathbf{A}^{T}\mathbf{M})(\mathbf{I}+\delta\mathbf{A})-\mathbf{M}=\delta(\mathbf{A}^{T}\mathbf{M}+\mathbf{M}\mathbf{A})+\delta^{2}\mathbf{A}^{T}\mathbf{M}\mathbf{A}=-\delta\mathbf{Q}.$$
Recall that $\delta$ is a small displacement in time. Letting $\delta$ go to zero brings us closer and closer to continuous dynamics, and in the limit we achieve them. It stands to reason that we should also recover the continuous-time Lyapunov equation in the same limit. Dividing through by $\delta$ on both sides, and then letting $\delta\to 0$, we find that

$$\mathbf{A}^{T}\mathbf{M}+\mathbf{M}\mathbf{A}=-\mathbf{Q},$$

which is the continuous-time Lyapunov equation, as desired.
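This limit can also be observed numerically: solving the discrete equation for $\mathbf{B}=\mathbf{I}+\delta\mathbf{A}$ with right-hand side $-\delta\mathbf{Q}$ and shrinking $\delta$ drives the solution toward the continuous-time one. The matrix and step sizes below are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz stable (eigenvalues -1, -2)
Q = np.eye(2)
M_cont = solve_continuous_lyapunov(A.T, -Q)  # solves A^T M + M A = -Q

errs = []
for delta in (0.1, 0.01, 0.001):
    B = np.eye(2) + delta * A                         # Euler discretization
    M_disc = solve_discrete_lyapunov(B.T, delta * Q)  # B^T M B - M = -delta Q
    errs.append(np.max(np.abs(M_disc - M_cont)))
print(errs)  # errors shrink roughly linearly in delta
```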
See also
Sylvester equation, which generalizes the Lyapunov equation
Algebraic Riccati equation
Kalman filter