Backstepping
In control theory, backstepping is a technique developed circa 1990 by Petar V. Kokotovic and others for designing stabilizing controls for a special class of nonlinear dynamical systems. These systems are built from subsystems that radiate out from an irreducible subsystem that can be stabilized using some other method. Because of this recursive structure, the designer can start the design process at the known-stable subsystem and "back out" new controllers that progressively stabilize each outer subsystem. The process terminates when the final external control is reached. Hence, this process is known as backstepping.
Backstepping approach
The backstepping approach provides a recursive method for stabilizing the origin of a system in strict-feedback form. That is, consider a system of the form
$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) z_2\\
\dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) z_3\\
\vdots\\
\dot{z}_i = f_i(\mathbf{x}, z_1, z_2, \ldots, z_{i-1}, z_i) + g_i(\mathbf{x}, z_1, z_2, \ldots, z_{i-1}, z_i) z_{i+1} \quad \text{for } 1 \leq i < k-1\\
\vdots\\
\dot{z}_{k-1} = f_{k-1}(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}) + g_{k-1}(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}) z_k\\
\dot{z}_k = f_k(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}, z_k) + g_k(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}, z_k) u
\end{cases}$$
where
- $\mathbf{x} \in \mathbb{R}^n$ with $n \geq 1$,
- $z_1, z_2, \ldots, z_i, \ldots, z_{k-1}, z_k$ are scalars,
- $u$ is a scalar input to the system,
- $f_x, f_1, f_2, \ldots, f_i, \ldots, f_{k-1}, f_k$ vanish at the origin (i.e., $f_i(\mathbf{0}, 0, \ldots, 0) = 0$),
- $g_1, g_2, \ldots, g_i, \ldots, g_{k-1}, g_k$ are nonzero over the domain of interest (i.e., $g_i(\mathbf{x}, z_1, \ldots, z_k) \neq 0$ for $1 \leq i \leq k$).
Also assume that the subsystem

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})$$

is stabilized to the origin (i.e., $\mathbf{x} = \mathbf{0}$) by some known control $u_x(\mathbf{x})$ such that $u_x(\mathbf{0}) = 0$. It is also assumed that a Lyapunov function $V_x$ for this stable subsystem is known. That is, this $\mathbf{x}$ subsystem is stabilized by some other method, and backstepping extends its stability to the shell of $\mathbf{z}$ states around it.
In systems of this strict-feedback form around a stable $\mathbf{x}$ subsystem,
- The backstepping-designed control input $u$ has its most immediate stabilizing impact on state $z_k$.
- The state $z_k$ then acts like a stabilizing control on the state $z_{k-1}$ before it.
- This process continues so that each state $z_i$ is stabilized by the fictitious "control" $z_{i+1}$.

The backstepping approach determines how to stabilize the $\mathbf{x}$ subsystem using $z_1$, and then proceeds with determining how to make the next state $z_2$ drive $z_1$ to the control required to stabilize $\mathbf{x}$. Hence, the process "steps backward" from $\mathbf{x}$ out of the strict-feedback form system until the ultimate control $u$ is designed.
Recursive Control Design Overview
It is given that the smaller (i.e., lower-order) subsystem

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})$$

is already stabilized to the origin by some control $u_x(\mathbf{x})$ where $u_x(\mathbf{0}) = 0$. That is, the choice of $u_x$ to stabilize this system must be made using some other method. It is also assumed that a Lyapunov function $V_x$ for this stable subsystem is known. Backstepping provides a way to extend the controlled stability of this subsystem to the larger system.
A control $u_1(\mathbf{x}, z_1)$ is designed so that the system

$$\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) u_1(\mathbf{x}, z_1)$$

is stabilized so that $z_1$ follows the desired $u_x$ control. The control design is based on the augmented Lyapunov function candidate

$$V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2}(z_1 - u_x(\mathbf{x}))^2$$

The control $u_1$ can be picked to bound $\dot{V}_1$ away from zero.
A control $u_2(\mathbf{x}, z_1, z_2)$ is designed so that the system

$$\dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) u_2(\mathbf{x}, z_1, z_2)$$

is stabilized so that $z_2$ follows the desired $u_1$ control. The control design is based on the augmented Lyapunov function candidate

$$V_2(\mathbf{x}, z_1, z_2) = V_1(\mathbf{x}, z_1) + \frac{1}{2}(z_2 - u_1(\mathbf{x}, z_1))^2$$

The control $u_2$ can be picked to bound $\dot{V}_2$ away from zero.
This process continues until the actual $u$ is known, and
- The real control $u$ stabilizes $z_k$ to fictitious control $u_{k-1}$.
- The fictitious control $u_{k-1}$ stabilizes $z_{k-1}$ to fictitious control $u_{k-2}$.
- The fictitious control $u_{k-2}$ stabilizes $z_{k-2}$ to fictitious control $u_{k-3}$.
- ...
- The fictitious control $u_2$ stabilizes $z_2$ to fictitious control $u_1$.
- The fictitious control $u_1$ stabilizes $z_1$ to fictitious control $u_x$.
- The fictitious control $u_x$ stabilizes $\mathbf{x}$ to the origin.
This process is known as backstepping because it starts with the requirements on some internal subsystem for stability and progressively steps back out of the system, maintaining stability at each step. Because
- $f_i$ vanish at the origin for $0 \leq i \leq k$,
- $g_i$ are nonzero for $1 \leq i \leq k$,
- the given control $u_x$ has $u_x(\mathbf{0}) = 0$,

the resulting system has an equilibrium at the origin (i.e., where $\mathbf{x} = \mathbf{0}$, $z_1 = 0$, $z_2 = 0$, ..., $z_{k-1} = 0$, and $z_k = 0$) that is globally asymptotically stable.
Integrator Backstepping
Before describing the backstepping procedure for general strict-feedback form dynamical systems, it is convenient to discuss the approach for a smaller class of strict-feedback form systems. These systems connect a series of integrators to the input of a
system with a known feedback-stabilizing control law, and so the stabilizing approach is known as integrator backstepping. With a small modification, the integrator backstepping approach can be extended to handle all strict-feedback form systems.
Single-integrator Equilibrium
Consider the dynamical system

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = u_1
\end{cases} \qquad (1)$$

where $\mathbf{x} \in \mathbb{R}^n$ and $z_1$ is a scalar. This system is a cascade connection of an integrator with the $\mathbf{x}$ subsystem (i.e., the input $u_1$ enters an integrator, and the integral $z_1$ enters the $\mathbf{x}$ subsystem).
We assume that $f_x(\mathbf{0}) = \mathbf{0}$, and so if $u_1 = 0$, $\mathbf{x} = \mathbf{0}$, and $z_1 = 0$, then

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\underbrace{\mathbf{0}}_{\mathbf{x}}) + (g_x(\underbrace{\mathbf{0}}_{\mathbf{x}}))(\underbrace{0}_{z_1}) = \mathbf{0} + (g_x(\mathbf{0}))(0) = \mathbf{0} & \text{(i.e., } \mathbf{x} = \mathbf{0} \text{ is stationary)}\\
\dot{z}_1 = \overbrace{0}^{u_1} & \text{(i.e., } z_1 = 0 \text{ is stationary)}
\end{cases}$$
So the origin $(\mathbf{x}, z_1) = (\mathbf{0}, 0)$ is an equilibrium (i.e., a stationary point) of the system. If the system ever reaches the origin, it will remain there forever after.
Single-integrator Backstepping
In this example, backstepping is used to stabilize the single-integrator system in Equation (1) around its equilibrium at the origin. That is, we wish to design a control law $u_1(\mathbf{x}, z_1)$ that ensures that the states $(\mathbf{x}, z_1)$ return to $(\mathbf{0}, 0)$ after the system is started from some arbitrary initial condition.
First, by assumption, the subsystem

$$\dot{\mathbf{x}} = F(\mathbf{x}) \qquad \text{where} \qquad F(\mathbf{x}) \triangleq f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})$$

with $u_x(\mathbf{0}) = 0$ has a Lyapunov function $V_x(\mathbf{x}) > 0$ such that

$$\dot{V}_x = \frac{\partial V_x}{\partial \mathbf{x}}(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})) \leq -W(\mathbf{x})$$

where $W(\mathbf{x})$ is a positive-definite function. That is, we assume that we have already shown that this existing simpler $\mathbf{x}$ subsystem is stable (in the sense of Lyapunov). Roughly speaking, this notion of stability means that:
- The function $V_x$ is like a "generalized energy" of the $\mathbf{x}$ subsystem. As the $\mathbf{x}$ states of the system move away from the origin, the energy $V_x(\mathbf{x})$ also grows.
- By showing that over time the energy $V_x(\mathbf{x}(t))$ decays to zero, the $\mathbf{x}$ states must decay toward $\mathbf{x} = \mathbf{0}$. That is, the origin $\mathbf{x} = \mathbf{0}$ will be a stable equilibrium of the system – the $\mathbf{x}$ states will continuously approach the origin as time increases.
- Saying that $W(\mathbf{x})$ is positive definite means that $W(\mathbf{x}) > 0$ everywhere except for $\mathbf{x} = \mathbf{0}$, and $W(\mathbf{0}) = 0$.
- The statement that $\dot{V}_x \leq -W(\mathbf{x})$ means that $\dot{V}_x$ is bounded away from zero for all points except where $\mathbf{x} = \mathbf{0}$. That is, so long as the system is not at its equilibrium at the origin, its "energy" will be decreasing.
- Because the energy is always decaying, the system must be stable; its trajectories must approach the origin.
Our task is to find a control $u_1$ that makes our cascaded $(\mathbf{x}, z_1)$ system also stable. So we must find a new Lyapunov function candidate for this new system. That candidate will depend upon the control $u_1$, and by choosing the control properly, we can ensure that it is decaying everywhere as well.
Next, by adding and subtracting $g_x(\mathbf{x}) u_x(\mathbf{x})$ to the $\dot{\mathbf{x}}$ part of the larger $(\mathbf{x}, z_1)$ system (which does not change the system in any way because the net effect is zero), it becomes

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 + \underbrace{\left(g_x(\mathbf{x}) u_x(\mathbf{x}) - g_x(\mathbf{x}) u_x(\mathbf{x})\right)}_{0}\\
\dot{z}_1 = u_1
\end{cases}$$
which we can re-group to get

$$\begin{cases}
\dot{\mathbf{x}} = \underbrace{\left(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})\right)}_{F(\mathbf{x})} + g_x(\mathbf{x}) \underbrace{\left(z_1 - u_x(\mathbf{x})\right)}_{z_1 \text{ error tracking } u_x}\\
\dot{z}_1 = u_1
\end{cases}$$

So our cascaded supersystem encapsulates the known-stable $\dot{\mathbf{x}} = F(\mathbf{x})$ subsystem plus some error perturbation generated by the integrator.
We now can change variables from $(\mathbf{x}, z_1)$ to $(\mathbf{x}, e_1)$ by letting $e_1 \triangleq z_1 - u_x(\mathbf{x})$. So

$$\begin{cases}
\dot{\mathbf{x}} = (f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})) + g_x(\mathbf{x}) e_1\\
\dot{e}_1 = u_1 - \dot{u}_x
\end{cases}$$
Additionally, we let $v_1 \triangleq u_1 - \dot{u}_x$ so that $u_1 = v_1 + \dot{u}_x$ and

$$\begin{cases}
\dot{\mathbf{x}} = (f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})) + g_x(\mathbf{x}) e_1\\
\dot{e}_1 = v_1
\end{cases}$$
We seek to stabilize this error system by feedback through the new control $v_1$. By stabilizing the system at $e_1 = 0$, the state $z_1$ will track the desired control $u_x$, which will result in stabilizing the inner $\mathbf{x}$ subsystem.
From our existing Lyapunov function $V_x$, we define the augmented Lyapunov function candidate

$$V_1(\mathbf{x}, e_1) \triangleq V_x(\mathbf{x}) + \frac{1}{2} e_1^2$$
So

$$\begin{aligned}
\dot{V}_1 &= \dot{V}_x(\mathbf{x}) + \frac{1}{2}\left(2 e_1 \dot{e}_1\right)\\
&= \dot{V}_x(\mathbf{x}) + e_1 \dot{e}_1\\
&= \dot{V}_x(\mathbf{x}) + e_1 \overbrace{v_1}^{\dot{e}_1}\\
&= \overbrace{\frac{\partial V_x}{\partial \mathbf{x}} \underbrace{\dot{\mathbf{x}}}_{\text{(i.e., } \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}\text{)}}}^{\dot{V}_x \text{ (i.e., } \frac{\mathrm{d}V_x}{\mathrm{d}t}\text{)}} + e_1 v_1\\
&= \overbrace{\frac{\partial V_x}{\partial \mathbf{x}} \underbrace{\left((f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})) + g_x(\mathbf{x}) e_1\right)}_{\dot{\mathbf{x}}}}^{\dot{V}_x} + e_1 v_1
\end{aligned}$$
By distributing $\partial V_x / \partial \mathbf{x}$, we see that

$$\dot{V}_1 = \overbrace{\frac{\partial V_x}{\partial \mathbf{x}}(f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x}))}^{\leq\, -W(\mathbf{x})} + \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 + e_1 v_1 \leq -W(\mathbf{x}) + \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 + e_1 v_1$$
To ensure that $\dot{V}_1 \leq -W(\mathbf{x}) < 0$ (i.e., to ensure stability of the supersystem), we pick the control law

$$v_1 = -\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 e_1$$
with $k_1 > 0$, and so

$$\dot{V}_1 = -W(\mathbf{x}) + \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 + e_1 \overbrace{\left(-\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 e_1\right)}^{v_1}$$
After distributing the $e_1$ through,

$$\begin{aligned}
\dot{V}_1 &= -W(\mathbf{x}) + \overbrace{\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 - e_1 \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x})}^{0} - k_1 e_1^2\\
&= -W(\mathbf{x}) - k_1 e_1^2 \leq -W(\mathbf{x})\\
&< 0
\end{aligned}$$
So our candidate Lyapunov function $V_1$ is a true Lyapunov function, and our system is stable under this control law $v_1$ (which corresponds to the control law $u_1$ because $v_1 \triangleq u_1 - \dot{u}_x$). Using the variables from the original coordinate system, the equivalent Lyapunov function is

$$V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2}(z_1 - u_x(\mathbf{x}))^2 \qquad (2)$$

As discussed below, this Lyapunov function will be used again when this procedure is applied iteratively to the multiple-integrator problem.
Our choice of control $v_1$ ultimately depends on all of our original state variables. In particular, the actual feedback-stabilizing control law is

$$u_1(\mathbf{x}, z_1) = \overbrace{-\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 (z_1 - u_x(\mathbf{x}))}^{v_1} + \overbrace{\frac{\partial u_x}{\partial \mathbf{x}}(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1)}^{\dot{u}_x} \qquad (3)$$

The states $\mathbf{x}$ and $z_1$ and the functions $f_x$ and $g_x$ come from the system. The function $u_x$ comes from our known-stable $\dot{\mathbf{x}} = F(\mathbf{x})$ subsystem. The gain parameter $k_1 > 0$ affects the convergence rate of our system. Under this control law, our system is stable at the origin $(\mathbf{x}, z_1) = (\mathbf{0}, 0)$.
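To make the single-integrator design concrete, here is a minimal numerical sketch (an illustration only, not part of the original presentation). It assumes a scalar example subsystem $\dot{x} = x^2 + z_1$ (so $f_x = x^2$, $g_x = 1$) with the assumed stabilizing choices $u_x(x) = -x^2 - x$ and $V_x(x) = x^2/2$, applies the control law of Equation (3), and integrates the closed loop with forward Euler.

```python
import numpy as np

# Hedged sketch: single-integrator backstepping (Equation (3)) applied to an
# assumed example subsystem x' = x**2 + z1 (f_x = x**2, g_x = 1), which the
# assumed control u_x(x) = -x**2 - x stabilizes with Lyapunov function
# V_x = x**2/2 (so dV_x/dx = x and W(x) = x**2).  These choices are
# illustrative, not prescribed by the article.

def u_x(x):            # known stabilizing control for the inner subsystem
    return -x**2 - x

def u1(x, z1, k1=2.0):
    dVx_dx = x         # derivative of V_x = x**2/2
    dux_dx = -2*x - 1  # derivative of u_x
    f_x, g_x = x**2, 1.0
    # Equation (3): u1 = -dVx/dx*g_x - k1*(z1 - u_x) + dux/dx*(f_x + g_x*z1)
    return -dVx_dx*g_x - k1*(z1 - u_x(x)) + dux_dx*(f_x + g_x*z1)

# Forward-Euler simulation from an arbitrary initial condition
x, z1, dt = 1.5, -0.5, 1e-3
for _ in range(int(10/dt)):
    u = u1(x, z1)
    x, z1 = x + (x**2 + z1)*dt, z1 + u*dt

print(f"after 10 s: x = {x:.2e}, z1 = {z1:.2e}")  # both should be near zero
```

In the error coordinates $(x, e_1)$ with $e_1 = z_1 - u_x(x)$, this particular closed loop is linear and stable, so both states decay to the origin from any bounded initial condition.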
Recall that $u_1$ in Equation (3) drives the input of an integrator that is connected to a subsystem that is feedback-stabilized by the control law $u_x$. Not surprisingly, the control $u_1$ has a $\dot{u}_x$ term that will be integrated to follow the stabilizing control law $u_x$ plus some offset. The other terms provide damping to remove that offset and any other perturbation effects that would be magnified by the integrator.
So because this system is feedback stabilized by $u_1(\mathbf{x}, z_1)$ and has Lyapunov function $V_1(\mathbf{x}, z_1)$ with $\dot{V}_1(\mathbf{x}, z_1) \leq -W(\mathbf{x}) < 0$, it can be used as the upper subsystem in another single-integrator cascade system.
Motivating Example: Two-integrator Backstepping
Before discussing the recursive procedure for the general multiple-integrator case, it is instructive to study the recursion present in the two-integrator case. That is, consider the dynamical system

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = z_2\\
\dot{z}_2 = u_2
\end{cases} \qquad (4)$$

where $\mathbf{x} \in \mathbb{R}^n$ and $z_1$ and $z_2$ are scalars. This system is a cascade connection of the single-integrator system in Equation (1) with another integrator (i.e., the input $u_2$ enters through an integrator, and the output of that integrator enters the system in Equation (1) by its $u_1$ input).
By letting

$$\mathbf{y} \triangleq \begin{bmatrix}\mathbf{x}\\ z_1\end{bmatrix}, \qquad f_y(\mathbf{y}) \triangleq \begin{bmatrix}f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ 0\end{bmatrix}, \qquad g_y(\mathbf{y}) \triangleq \begin{bmatrix}\mathbf{0}\\ 1\end{bmatrix},$$

the two-integrator system in Equation (4) becomes the single-integrator system

$$\begin{cases}
\dot{\mathbf{y}} = f_y(\mathbf{y}) + g_y(\mathbf{y}) z_2\\
\dot{z}_2 = u_2
\end{cases} \qquad (5)$$
By the single-integrator procedure, the control law $u_y(\mathbf{y}) \triangleq u_1(\mathbf{x}, z_1)$ stabilizes the upper $z_2$-to-$\mathbf{y}$ subsystem using the Lyapunov function $V_1(\mathbf{x}, z_1)$, and so Equation (5) is a new single-integrator system that is structurally equivalent to the single-integrator system in Equation (1). So a stabilizing control $u_2$ can be found using the same single-integrator procedure that was used to find $u_1$.
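As a sketch of how the recursion reuses the single-integrator step, the following SymPy fragment (assuming the same illustrative subsystem as in the earlier sketch: $f_x = x^2$, $g_x = 1$, $u_x = -x^2 - x$, $V_x = x^2/2$) symbolically forms $u_1$ and $V_1$, re-groups $(\mathbf{x}, z_1)$ into $\mathbf{y}$ with $f_y$ and $g_y$ as defined above, and then applies the identical formula once more to obtain $u_2$.

```python
import sympy as sp

# Hedged sketch: derive the two-integrator backstepping control u2 symbolically
# for the assumed example subsystem (f_x = x**2, g_x = 1, u_x = -x**2 - x,
# V_x = x**2/2).  The second step reuses u1 and V1 from the first step.
x, z1, z2, k1, k2 = sp.symbols('x z1 z2 k1 k2')

f_x, g_x = x**2, sp.Integer(1)
u_x, V_x = -x**2 - x, x**2/2

# Single-integrator step (Equations (2) and (3)) on the (x, z1) pair
u1 = -sp.diff(V_x, x)*g_x - k1*(z1 - u_x) + sp.diff(u_x, x)*(f_x + g_x*z1)
V1 = V_x + sp.Rational(1, 2)*(z1 - u_x)**2

# Re-grouped state y = (x, z1) with f_y = (f_x + g_x*z1, 0), g_y = (0, 1)
y = sp.Matrix([x, z1])
f_y = sp.Matrix([f_x + g_x*z1, 0])
g_y = sp.Matrix([0, 1])

# Second backstepping step: the same formula with (V1, u1, f_y, g_y)
# in place of (V_x, u_x, f_x, g_x)
grad_V1 = sp.Matrix([[sp.diff(V1, v) for v in y]])   # row vector dV1/dy
grad_u1 = sp.Matrix([[sp.diff(u1, v) for v in y]])   # row vector du1/dy
u2 = (-grad_V1*g_y - sp.Matrix([[k2*(z2 - u1)]]) + grad_u1*(f_y + g_y*z2))[0]

print(sp.simplify(u2))
```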
Many-integrator backstepping
In the two-integrator case, the upper single-integrator subsystem was stabilized yielding a new single-integrator system that can be similarly stabilized. This recursive procedure can be extended to handle any finite number of integrators. This claim can be formally proved with mathematical induction. Here, a stabilized multiple-integrator system is built up from subsystems of already-stabilized multiple-integrator subsystems.
First, consider the dynamical system

$$\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x$$

that has scalar input $u_x$ and output states $\mathbf{x} = [x_1, x_2, \ldots, x_n]^{\text{T}} \in \mathbb{R}^n$. Assume that $f_x(\mathbf{0}) = \mathbf{0}$ so that the zero-input (i.e., $u_x = 0$) system is stationary at the origin $\mathbf{x} = \mathbf{0}$. In this case, the origin is called an equilibrium of the system.
The feedback control law $u_x(\mathbf{x})$ stabilizes the system at the equilibrium at the origin. A Lyapunov function corresponding to this system is described by $V_x(\mathbf{x})$. That is, if the output states $\mathbf{x}$ are fed back to the input $u_x$ by the control law $u_x(\mathbf{x})$, then the output states (and the Lyapunov function) return to the origin after a single perturbation (e.g., after a nonzero initial condition or a sharp disturbance). This subsystem is stabilized by the feedback control law $u_x$.
Next, connect an integrator to input $u_x$ so that the augmented system has input $u_1$ (to the integrator) and output states $\mathbf{x}$. The resulting augmented dynamical system is

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = u_1
\end{cases}$$
This "cascade" system matches the form in Equation (1), and so the single-integrator backstepping procedure leads to the stabilizing control law in Equation (3). That is, if we feed back states
z
1
{\displaystyle z_{1}}
and x to input
u
1
{\displaystyle u_{1}}
according to the control law
u
1
(
x
,
z
1
)
=
−
∂
V
x
∂
x
g
x
(
x
)
−
k
1
(
z
1
−
u
x
(
x
)
)
+
∂
u
x
∂
x
(
f
x
(
x
)
+
g
x
(
x
)
z
1
)
{\displaystyle u_{1}(\mathbf {x} ,z_{1})=-{\frac {\partial V_{x}}{\partial \mathbf {x} }}g_{x}(\mathbf {x} )-k_{1}(z_{1}-u_{x}(\mathbf {x} ))+{\frac {\partial u_{x}}{\partial \mathbf {x} }}(f_{x}(\mathbf {x} )+g_{x}(\mathbf {x} )z_{1})}
with gain $k_1 > 0$, then the states $z_1$ and $\mathbf{x}$ will return to $z_1 = 0$ and $\mathbf{x} = \mathbf{0}$ after a single perturbation. This subsystem is stabilized by the feedback control law $u_1$, and the corresponding Lyapunov function from Equation (2) is

$$V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2}(z_1 - u_x(\mathbf{x}))^2$$

That is, under feedback control law $u_1$, the Lyapunov function $V_1$ decays to zero as the states return to the origin.
Connect a new integrator to input $u_1$ so that the augmented system has input $u_2$ and output states $\mathbf{x}$. The resulting augmented dynamical system is

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = z_2\\
\dot{z}_2 = u_2
\end{cases}$$
which is equivalent to the single-integrator system

$$\begin{cases}
\overbrace{\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{z}_1\end{bmatrix}}^{\triangleq\, \dot{\mathbf{x}}_1} = \overbrace{\begin{bmatrix}f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ 0\end{bmatrix}}^{\triangleq\, f_1(\mathbf{x}_1)} + \overbrace{\begin{bmatrix}\mathbf{0}\\ 1\end{bmatrix}}^{\triangleq\, g_1(\mathbf{x}_1)} z_2 & \text{(by Lyapunov function } V_1\text{, subsystem stabilized by } u_1(\mathbf{x}_1)\text{)}\\
\dot{z}_2 = u_2
\end{cases}$$
Using these definitions of $\mathbf{x}_1$, $f_1$, and $g_1$, this system can also be expressed as

$$\begin{cases}
\dot{\mathbf{x}}_1 = f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2 & \text{(by Lyapunov function } V_1\text{, subsystem stabilized by } u_1(\mathbf{x}_1)\text{)}\\
\dot{z}_2 = u_2
\end{cases}$$
This system matches the single-integrator structure of Equation (1), and so the single-integrator backstepping procedure can be applied again. That is, if we feed back states $z_1$, $z_2$, and $\mathbf{x}$ to input $u_2$ according to the control law

$$u_2(\mathbf{x}, z_1, z_2) = -\frac{\partial V_1}{\partial \mathbf{x}_1} g_1(\mathbf{x}_1) - k_2(z_2 - u_1(\mathbf{x}_1)) + \frac{\partial u_1}{\partial \mathbf{x}_1}(f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2)$$
with gain $k_2 > 0$, then the states $z_1$, $z_2$, and $\mathbf{x}$ will return to $z_1 = 0$, $z_2 = 0$, and $\mathbf{x} = \mathbf{0}$ after a single perturbation. This subsystem is stabilized by the feedback control law $u_2$, and the corresponding Lyapunov function is

$$V_2(\mathbf{x}, z_1, z_2) = V_1(\mathbf{x}_1) + \frac{1}{2}(z_2 - u_1(\mathbf{x}_1))^2$$

That is, under feedback control law $u_2$, the Lyapunov function $V_2$ decays to zero as the states return to the origin.
Connect an integrator to input $u_2$ so that the augmented system has input $u_3$ and output states $\mathbf{x}$. The resulting augmented dynamical system is

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = z_2\\
\dot{z}_2 = z_3\\
\dot{z}_3 = u_3
\end{cases}$$
which can be re-grouped as the single-integrator system

$$\begin{cases}
\overbrace{\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{z}_1\\ \dot{z}_2\end{bmatrix}}^{\triangleq\, \dot{\mathbf{x}}_2} = \overbrace{\begin{bmatrix}f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ z_2\\ 0\end{bmatrix}}^{\triangleq\, f_2(\mathbf{x}_2)} + \overbrace{\begin{bmatrix}\mathbf{0}\\ 0\\ 1\end{bmatrix}}^{\triangleq\, g_2(\mathbf{x}_2)} z_3 & \text{(by Lyapunov function } V_2\text{, subsystem stabilized by } u_2(\mathbf{x}_2)\text{)}\\
\dot{z}_3 = u_3
\end{cases}$$
By the definitions of $\mathbf{x}_1$, $f_1$, and $g_1$ from the previous step, this system is also represented by

$$\begin{cases}
\overbrace{\begin{bmatrix}\dot{\mathbf{x}}_1\\ \dot{z}_2\end{bmatrix}}^{\dot{\mathbf{x}}_2} = \overbrace{\begin{bmatrix}f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2\\ 0\end{bmatrix}}^{f_2(\mathbf{x}_2)} + \overbrace{\begin{bmatrix}\mathbf{0}\\ 1\end{bmatrix}}^{g_2(\mathbf{x}_2)} z_3 & \text{(by Lyapunov function } V_2\text{, subsystem stabilized by } u_2(\mathbf{x}_2)\text{)}\\
\dot{z}_3 = u_3
\end{cases}$$
Further, using these definitions of $\mathbf{x}_2$, $f_2$, and $g_2$, this system can also be expressed as

$$\begin{cases}
\dot{\mathbf{x}}_2 = f_2(\mathbf{x}_2) + g_2(\mathbf{x}_2) z_3 & \text{(by Lyapunov function } V_2\text{, subsystem stabilized by } u_2(\mathbf{x}_2)\text{)}\\
\dot{z}_3 = u_3
\end{cases}$$
So the re-grouped system has the single-integrator structure of Equation (1), and so the single-integrator backstepping procedure can be applied again. That is, if we feed back states $z_1$, $z_2$, $z_3$, and $\mathbf{x}$ to input $u_3$ according to the control law

$$u_3(\mathbf{x}, z_1, z_2, z_3) = -\frac{\partial V_2}{\partial \mathbf{x}_2} g_2(\mathbf{x}_2) - k_3(z_3 - u_2(\mathbf{x}_2)) + \frac{\partial u_2}{\partial \mathbf{x}_2}(f_2(\mathbf{x}_2) + g_2(\mathbf{x}_2) z_3)$$
with gain $k_3 > 0$, then the states $z_1$, $z_2$, $z_3$, and $\mathbf{x}$ will return to $z_1 = 0$, $z_2 = 0$, $z_3 = 0$, and $\mathbf{x} = \mathbf{0}$ after a single perturbation. This subsystem is stabilized by the feedback control law $u_3$, and the corresponding Lyapunov function is

$$V_3(\mathbf{x}, z_1, z_2, z_3) = V_2(\mathbf{x}_2) + \frac{1}{2}(z_3 - u_2(\mathbf{x}_2))^2$$

That is, under feedback control law $u_3$, the Lyapunov function $V_3$ decays to zero as the states return to the origin.
This process can continue for each integrator added to the system, and hence any system of the form

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 & \text{(by Lyapunov function } V_x\text{, subsystem stabilized by } u_x(\mathbf{x})\text{)}\\
\dot{z}_1 = z_2\\
\dot{z}_2 = z_3\\
\vdots\\
\dot{z}_i = z_{i+1}\\
\vdots\\
\dot{z}_{k-2} = z_{k-1}\\
\dot{z}_{k-1} = z_k\\
\dot{z}_k = u
\end{cases}$$
has the recursive structure

$$\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 & \text{(by Lyapunov function } V_x\text{, subsystem stabilized by } u_x(\mathbf{x})\text{)}\\
\dot{z}_1 = z_2
\end{cases}\\
\dot{z}_2 = z_3
\end{cases}\\
\vdots
\end{cases}\\
\dot{z}_i = z_{i+1}
\end{cases}\\
\vdots
\end{cases}\\
\dot{z}_{k-2} = z_{k-1}
\end{cases}\\
\dot{z}_{k-1} = z_k
\end{cases}\\
\dot{z}_k = u
\end{cases}$$
and can be feedback stabilized by finding the feedback-stabilizing control and Lyapunov function for the single-integrator $(\mathbf{x}, z_1)$ subsystem (i.e., with input $z_2$ and output $\mathbf{x}$) and iterating out from that inner subsystem until the ultimate feedback-stabilizing control $u$ is known. At iteration $i$, the equivalent system is
$$\begin{cases}
\overbrace{\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{z}_1\\ \dot{z}_2\\ \vdots\\ \dot{z}_{i-2}\\ \dot{z}_{i-1}\end{bmatrix}}^{\triangleq\, \dot{\mathbf{x}}_{i-1}} = \overbrace{\begin{bmatrix}f_{i-2}(\mathbf{x}_{i-2}) + g_{i-2}(\mathbf{x}_{i-2}) z_{i-1}\\ 0\end{bmatrix}}^{\triangleq\, f_{i-1}(\mathbf{x}_{i-1})} + \overbrace{\begin{bmatrix}\mathbf{0}\\ 1\end{bmatrix}}^{\triangleq\, g_{i-1}(\mathbf{x}_{i-1})} z_i & \text{(by Lyap. func. } V_{i-1}\text{, subsystem stabilized by } u_{i-1}(\mathbf{x}_{i-1})\text{)}\\
\dot{z}_i = u_i
\end{cases}$$
The corresponding feedback-stabilizing control law is

$$u_i(\overbrace{\mathbf{x}, z_1, z_2, \ldots, z_i}^{\triangleq\, \mathbf{x}_i}) = -\frac{\partial V_{i-1}}{\partial \mathbf{x}_{i-1}} g_{i-1}(\mathbf{x}_{i-1}) - k_i\left(z_i - u_{i-1}(\mathbf{x}_{i-1})\right) + \frac{\partial u_{i-1}}{\partial \mathbf{x}_{i-1}}\left(f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i\right)$$
with gain $k_i > 0$. The corresponding Lyapunov function is

$$V_i(\mathbf{x}_i) = V_{i-1}(\mathbf{x}_{i-1}) + \frac{1}{2}(z_i - u_{i-1}(\mathbf{x}_{i-1}))^2$$
By this construction, the ultimate control is

$$u(\mathbf{x}, z_1, z_2, \ldots, z_k) = u_k(\mathbf{x}_k)$$

(i.e., the ultimate control is found at the final iteration $i = k$).
Hence, any system in this special many-integrator strict-feedback form can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).
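Because each iteration applies the same two formulas, the many-integrator design can indeed be generated by a computer. The sketch below is illustrative only: it assumes the same scalar inner subsystem used in the earlier sketches ($f_x = x^2$, $g_x = 1$, $u_x = -x^2 - x$, $V_x = x^2/2$) and one common gain for every stage, and it automates the recursion for $u_i$ and $V_i$ with SymPy.

```python
import sympy as sp

# Hedged sketch: automate the many-integrator recursion (u_i, V_i) with SymPy
# for an assumed scalar inner subsystem x' = f_x + g_x*z1 with f_x = x**2,
# g_x = 1, stabilized by u_x = -x**2 - x with V_x = x**2/2.  These example
# functions and the single shared gain are assumptions for illustration.
def backstep_integrators(k, gain=2):
    x = sp.symbols('x')
    z = sp.symbols(f'z1:{k + 1}')                  # z1, ..., zk
    states = sp.Matrix([x])                        # stacked state x_{i-1}
    f_prev = sp.Matrix([x**2])                     # stacked f_{i-1}
    g_prev = sp.Matrix([1])                        # stacked g_{i-1}
    u_prev = -x**2 - x                             # u_{i-1} (starts at u_x)
    V_prev = x**2 / 2                              # V_{i-1} (starts at V_x)
    for i in range(k):
        zi = z[i]
        grad_V = sp.Matrix([[sp.diff(V_prev, s) for s in states]])
        grad_u = sp.Matrix([[sp.diff(u_prev, s) for s in states]])
        # u_i = -dV/dx*g - k_i*(z_i - u_{i-1}) + du/dx*(f + g*z_i)
        u_i = ((-grad_V*g_prev + grad_u*(f_prev + g_prev*zi))[0]
               - gain*(zi - u_prev))
        V_i = V_prev + sp.Rational(1, 2)*(zi - u_prev)**2
        # Re-group: x_i = (x_{i-1}, z_i), f_i = (f_{i-1}+g_{i-1}*z_i, 0), g_i = (0, 1)
        f_prev = sp.Matrix.vstack(f_prev + g_prev*zi, sp.Matrix([0]))
        g_prev = sp.Matrix.vstack(sp.zeros(len(states), 1), sp.Matrix([1]))
        states = sp.Matrix.vstack(states, sp.Matrix([zi]))
        u_prev, V_prev = u_i, V_i
    return sp.expand(u_prev), V_prev

u1_expr, _ = backstep_integrators(1)
print(u1_expr)                        # reproduces Equation (3) for this example
u3_expr, _ = backstep_integrators(3)
print(sp.count_ops(u3_expr), "ops")   # the automation scales to more integrators
```

For one integrator the printed expression reproduces Equation (3) for this example; the operation count for three integrators shows how quickly the expressions grow, which is why such designs are usually generated by computer algebra rather than by hand.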
Generic Backstepping
Systems in the special strict-feedback form have a recursive structure similar to the many-integrator system structure. Likewise, they are stabilized by stabilizing the smallest cascaded system and then backstepping to the next cascaded system and repeating the procedure. So it is critical to develop a single-step procedure; that procedure can be recursively applied to cover the many-step case. Fortunately, due to the requirements on the functions in the strict-feedback form, each single-step system can be rendered by feedback to a single-integrator system, and that single-integrator system can be stabilized using methods discussed above.
Single-step Procedure
Consider the simple strict-feedback system

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) u_1
\end{cases} \qquad (6)$$

where
- $\mathbf{x} = [x_1, x_2, \ldots, x_n]^{\text{T}} \in \mathbb{R}^n$,
- $z_1$ and $u_1$ are scalars,
- for all $\mathbf{x}$ and $z_1$, $g_1(\mathbf{x}, z_1) \neq 0$.
Rather than designing the feedback-stabilizing control $u_1$ directly, introduce a new control $u_{a1}$ (to be designed later) and use the control law

$$u_1(\mathbf{x}, z_1) = \frac{1}{g_1(\mathbf{x}, z_1)}\left(u_{a1} - f_1(\mathbf{x}, z_1)\right)$$

which is possible because $g_1 \neq 0$. So the system in Equation (6) is
$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) \overbrace{\frac{1}{g_1(\mathbf{x}, z_1)}\left(u_{a1} - f_1(\mathbf{x}, z_1)\right)}^{u_1(\mathbf{x}, z_1)}
\end{cases}$$
which simplifies to

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\
\dot{z}_1 = u_{a1}
\end{cases}$$
This new $u_{a1}$-to-$\mathbf{x}$ system matches the single-integrator cascade system in Equation (1). Assuming that a feedback-stabilizing control law $u_x(\mathbf{x})$ and Lyapunov function $V_x(\mathbf{x})$ for the upper subsystem are known, the feedback-stabilizing control law from Equation (3) is

$$u_{a1}(\mathbf{x}, z_1) = -\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1(z_1 - u_x(\mathbf{x})) + \frac{\partial u_x}{\partial \mathbf{x}}(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1)$$
with gain $k_1 > 0$. So the final feedback-stabilizing control law is

$$u_1(\mathbf{x}, z_1) = \frac{1}{g_1(\mathbf{x}, z_1)}\left(-\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1(z_1 - u_x(\mathbf{x})) + \frac{\partial u_x}{\partial \mathbf{x}}(f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1) - f_1(\mathbf{x}, z_1)\right) \qquad (7)$$

with gain $k_1 > 0$. The corresponding Lyapunov function from Equation (2) is

$$V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2}(z_1 - u_x(\mathbf{x}))^2 \qquad (8)$$

Because this strict-feedback system has a feedback-stabilizing control and a corresponding Lyapunov function, it can be cascaded as part of a larger strict-feedback system, and this procedure can be repeated to find the surrounding feedback-stabilizing control.
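A minimal numerical sketch of the single-step procedure, under assumed example functions: the inner subsystem $\dot{x} = x^2 + z_1$ with $u_x = -x^2 - x$ and $V_x = x^2/2$ as before, and an assumed outer equation $\dot{z}_1 = \sin(z_1) + (1 + z_1^2) u_1$, so $f_1 = \sin(z_1)$ and $g_1 = 1 + z_1^2 \neq 0$. The control of Equation (7) first builds the virtual single-integrator control $u_{a1}$, then cancels $f_1$ and divides by $g_1$.

```python
import numpy as np

# Hedged sketch: the generic single-step procedure (Equation (7)) on an assumed
# strict-feedback pair.  Inner subsystem: x' = x**2 + z1 with u_x = -x**2 - x
# and V_x = x**2/2.  Outer equation (assumed for illustration):
# z1' = sin(z1) + (1 + z1**2)*u1, i.e. f1 = sin(z1) and g1 = 1 + z1**2,
# which is nonzero everywhere as required.

def u1(x, z1, k1=2.0):
    f_x, g_x = x**2, 1.0
    f1, g1 = np.sin(z1), 1.0 + z1**2
    u_x = -x**2 - x
    dVx_dx, dux_dx = x, -2*x - 1
    # Virtual single-integrator control u_a1, then cancel f1 and divide by g1
    u_a1 = -dVx_dx*g_x - k1*(z1 - u_x) + dux_dx*(f_x + g_x*z1)
    return (u_a1 - f1) / g1

x, z1, dt = -1.0, 2.0, 1e-3
for _ in range(int(10/dt)):                        # forward-Euler integration
    x_dot = x**2 + z1
    z1_dot = np.sin(z1) + (1.0 + z1**2)*u1(x, z1)
    x, z1 = x + x_dot*dt, z1 + z1_dot*dt

print(f"after 10 s: x = {x:.2e}, z1 = {z1:.2e}")   # both approach zero
```

Dividing by $g_1$ and subtracting $f_1$ reduces the outer equation to $\dot{z}_1 = u_{a1}$ exactly, which is why the closed loop behaves like the single-integrator example above.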
Many-step Procedure
As in many-integrator backstepping, the single-step procedure can be completed iteratively to stabilize an entire strict-feedback system. In each step,
The smallest "unstabilized" single-step strict-feedback system is isolated.
Feedback is used to convert the system into a single-integrator system.
The resulting single-integrator system is stabilized.
The stabilized system is used as the upper system in the next step.
That is, any strict-feedback system

$$\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 & \text{(by Lyapunov function } V_x\text{, subsystem stabilized by } u_x(\mathbf{x})\text{)}\\
\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) z_2\\
\dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) z_3\\
\vdots\\
\dot{z}_i = f_i(\mathbf{x}, z_1, z_2, \ldots, z_i) + g_i(\mathbf{x}, z_1, z_2, \ldots, z_i) z_{i+1}\\
\vdots\\
\dot{z}_{k-2} = f_{k-2}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}) + g_{k-2}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}) z_{k-1}\\
\dot{z}_{k-1} = f_{k-1}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}, z_{k-1}) + g_{k-1}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}, z_{k-1}) z_k\\
\dot{z}_k = f_k(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}, z_k) + g_k(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}, z_k) u
\end{cases}$$
has the recursive structure

$$\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\begin{cases}
\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 & \text{(by Lyapunov function } V_x\text{, subsystem stabilized by } u_x(\mathbf{x})\text{)}\\
\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) z_2
\end{cases}\\
\dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) z_3
\end{cases}\\
\vdots
\end{cases}\\
\dot{z}_i = f_i(\mathbf{x}, z_1, z_2, \ldots, z_i) + g_i(\mathbf{x}, z_1, z_2, \ldots, z_i) z_{i+1}
\end{cases}\\
\vdots
\end{cases}\\
\dot{z}_{k-2} = f_{k-2}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}) + g_{k-2}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}) z_{k-1}
\end{cases}\\
\dot{z}_{k-1} = f_{k-1}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}, z_{k-1}) + g_{k-1}(\mathbf{x}, z_1, z_2, \ldots, z_{k-2}, z_{k-1}) z_k
\end{cases}\\
\dot{z}_k = f_k(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}, z_k) + g_k(\mathbf{x}, z_1, z_2, \ldots, z_{k-1}, z_k) u
\end{cases}$$
and can be feedback stabilized by finding the feedback-stabilizing control and Lyapunov function for the single-integrator $(\mathbf{x}, z_1)$ subsystem (i.e., with input $z_2$ and output $\mathbf{x}$) and iterating out from that inner subsystem until the ultimate feedback-stabilizing control $u$ is known. At iteration $i$, the equivalent system is
$$\begin{cases}
\overbrace{\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{z}_1\\ \dot{z}_2\\ \vdots\\ \dot{z}_{i-2}\\ \dot{z}_{i-1}\end{bmatrix}}^{\triangleq\, \dot{\mathbf{x}}_{i-1}} = \overbrace{\begin{bmatrix}f_{i-2}(\mathbf{x}_{i-2}) + g_{i-2}(\mathbf{x}_{i-2}) z_{i-1}\\ f_{i-1}(\mathbf{x}_{i-1})\end{bmatrix}}^{\triangleq\, f_{i-1}(\mathbf{x}_{i-1})} + \overbrace{\begin{bmatrix}\mathbf{0}\\ g_{i-1}(\mathbf{x}_{i-1})\end{bmatrix}}^{\triangleq\, g_{i-1}(\mathbf{x}_{i-1})} z_i & \text{(by Lyap. func. } V_{i-1}\text{, subsystem stabilized by } u_{i-1}(\mathbf{x}_{i-1})\text{)}\\
\dot{z}_i = f_i(\mathbf{x}_i) + g_i(\mathbf{x}_i) u_i
\end{cases}$$
By Equation (7), the corresponding feedback-stabilizing control law is

$$u_i(\overbrace{\mathbf{x}, z_1, z_2, \ldots, z_i}^{\triangleq\, \mathbf{x}_i}) = \frac{1}{g_i(\mathbf{x}_i)}\left(\overbrace{-\frac{\partial V_{i-1}}{\partial \mathbf{x}_{i-1}} g_{i-1}(\mathbf{x}_{i-1}) - k_i\left(z_i - u_{i-1}(\mathbf{x}_{i-1})\right) + \frac{\partial u_{i-1}}{\partial \mathbf{x}_{i-1}}\left(f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i\right)}^{\text{Single-integrator stabilizing control } u_{ai}(\mathbf{x}_i)} - f_i(\mathbf{x}_i)\right)$$
with gain $k_i > 0$. By Equation (8), the corresponding Lyapunov function is

$$V_i(\mathbf{x}_i) = V_{i-1}(\mathbf{x}_{i-1}) + \frac{1}{2}(z_i - u_{i-1}(\mathbf{x}_{i-1}))^2$$
By this construction, the ultimate control is

$$u(\mathbf{x}, z_1, z_2, \ldots, z_k) = u_k(\mathbf{x}_k)$$

(i.e., the ultimate control is found at the final iteration $i = k$).
Hence, any strict-feedback system can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).
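Since the many-step recursion is also mechanical, it too can be generated symbolically. The following sketch makes illustrative assumptions for every stage (the same assumed inner subsystem as in the earlier sketches, plus $f_1 = \sin z_1$, $g_1 = 1 + z_1^2$, $f_2 = x z_2$, $g_2 = 2 + \cos z_1$, and fixed gains); it demonstrates the recursion of this section, not a general-purpose tool.

```python
import sympy as sp

# Hedged sketch: automate the full strict-feedback recursion of this section
# with SymPy.  The inner subsystem and the per-stage functions f_i, g_i below
# are illustrative assumptions (each g_i chosen nonzero everywhere), not taken
# from the article.
x, z1, z2 = sp.symbols('x z1 z2')

# Inner subsystem x' = x**2 + z1, stabilized by u_x with Lyapunov function V_x
f_x, g_x, u_x, V_x = x**2, sp.Integer(1), -x**2 - x, x**2/2

# Assumed strict-feedback stages: z_i' = f_i + g_i*z_{i+1} (last stage gets u)
stages = [
    (z1, sp.sin(z1), 1 + z1**2),       # (state, f_1, g_1)
    (z2, x*z2, 2 + sp.cos(z1)),        # (state, f_2, g_2)
]
gains = [2, 2]

states = sp.Matrix([x])                 # stacked state x_{i-1}
f_prev, g_prev = sp.Matrix([f_x]), sp.Matrix([g_x])
u_prev, V_prev = u_x, V_x

for (zi, fi, gi), ki in zip(stages, gains):
    grad_V = sp.Matrix([[sp.diff(V_prev, s) for s in states]])
    grad_u = sp.Matrix([[sp.diff(u_prev, s) for s in states]])
    # Virtual single-integrator control u_ai, then cancel f_i and divide by g_i
    u_ai = ((-grad_V*g_prev + grad_u*(f_prev + g_prev*zi))[0]
            - ki*(zi - u_prev))
    u_i = (u_ai - fi)/gi
    V_i = V_prev + sp.Rational(1, 2)*(zi - u_prev)**2
    # Stack the state and form the re-grouped single-integrator system
    f_prev = sp.Matrix.vstack(f_prev + g_prev*zi, sp.Matrix([fi]))
    g_prev = sp.Matrix.vstack(sp.zeros(len(states), 1), sp.Matrix([gi]))
    states = sp.Matrix.vstack(states, sp.Matrix([zi]))
    u_prev, V_prev = u_i, V_i

print(sp.count_ops(u_prev), "operations in the ultimate control u(x, z1, z2)")
```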
See also
Nonlinear control
Strict-feedback form
Robust control
Adaptive control