Navigation function usually refers to a function of position, velocity, acceleration and time which is used to plan robot trajectories through the environment. Generally, the goal of a navigation function is to create feasible, safe paths that avoid obstacles while allowing a robot to move from its starting configuration to its goal configuration.
Potential functions as navigation functions
Potential functions assume that the environment or work space is known. Obstacles are assigned a high potential value, and the goal position is assigned a low potential. To reach the goal position, a robot only needs to follow the negative gradient of the surface.
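The gradient-following idea can be sketched in a few lines of Python. The goal position, obstacle geometry, and step sizes below are illustrative assumptions, not part of any standard formulation:

```python
import math

# A minimal sketch: a potential field over a 2-D workspace that is low at the
# goal and high near one circular obstacle.  The robot descends the negative
# gradient of this surface.  All names and numbers are illustrative.

GOAL = (9.0, 9.0)          # assumed goal position
OBSTACLE = (5.0, 4.0)      # assumed obstacle centre (off the straight line)
OBSTACLE_RADIUS = 1.0

def potential(x, y):
    """Attractive term toward GOAL plus a repulsive term near OBSTACLE."""
    attract = 0.5 * ((x - GOAL[0]) ** 2 + (y - GOAL[1]) ** 2)
    d = max(math.hypot(x - OBSTACLE[0], y - OBSTACLE[1]) - OBSTACLE_RADIUS, 1e-6)
    return attract + 1.0 / d

def descend(x, y, step=0.05, eps=1e-4, iters=500):
    """Follow the numerically estimated negative gradient for a fixed budget."""
    for _ in range(iters):
        gx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
        gy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
        norm = math.hypot(gx, gy) + 1e-12
        x, y = x - step * gx / norm, y - step * gy / norm
    return x, y

x, y = descend(0.0, 0.0)   # ends near GOAL while skirting the obstacle
```

With a single obstacle this works well; in general, summed attractive and repulsive potentials can create spurious local minima where the robot gets stuck, which is exactly what the navigation-function conditions rule out.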
We can formalize this concept mathematically as follows. Let $X$ be the state space of all possible configurations of a robot, and let $X_g \subset X$ denote the goal region of the state space. A potential function $\phi(x)$ is then called a (feasible) navigation function if:

1. $\phi(x) = 0 \ \forall x \in X_g$
2. $\phi(x) = \infty$ if and only if no point in $X_g$ is reachable from $x$.
3. For every reachable state $x \in X \setminus X_g$, the local operator produces a state $x'$ for which $\phi(x') < \phi(x)$.
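On a discretized state space, a feasible navigation function satisfying these three conditions can be computed with a breadth-first "wavefront" expansion from the goal. The grid layout and goal cell below are toy assumptions:

```python
from collections import deque

# A sketch of a discrete (feasible) navigation function on a 4-connected grid:
# phi is 0 on the goal cell, infinite on cells that cannot reach the goal, and
# the local operator "move to the neighbour with the smallest phi" strictly
# decreases phi along the path.

INF = float("inf")

def navigation_function(grid, goal):
    """Breadth-first 'wavefront' expansion from the goal (1 marks an obstacle)."""
    rows, cols = len(grid), len(grid[0])
    phi = [[INF] * cols for _ in range(rows)]
    phi[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and phi[nr][nc] == INF:
                phi[nr][nc] = phi[r][c] + 1
                queue.append((nr, nc))
    return phi

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 1, 0],   # the bottom-left region is walled off from the goal
        [0, 0, 1, 0]]
phi = navigation_function(grid, goal=(0, 0))
```

Cells cut off by obstacles keep the value infinity (condition 2), and every reachable non-goal cell has a neighbour with strictly smaller value (condition 3).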
Probabilistic navigation function
Probabilistic navigation function is an extension of the classical navigation function for static stochastic scenarios. The function is defined by a permitted collision probability, which limits the risk during motion. The Minkowski sum used in the classical definition is replaced with a convolution of the geometries and the probability density functions of the locations. Denoting the target position by $x_d$, the probabilistic navigation function is defined as:

$$\varphi(x) = \frac{\gamma_d(x)}{\left[\gamma_d^K(x) + \beta(x)\right]^{\frac{1}{K}}}$$
where $K$ is a predefined constant, as in the classical navigation function, which ensures the Morse nature of the function; $\gamma_d(x)$ is the squared distance to the target position, $\|x - x_d\|^2$; and $\beta(x)$ takes into account all obstacles, defined as

$$\beta(x) = \prod_{i=0}^{N_o} \beta_i(x)$$
where $\beta_i(x)$ is based on the probability of a collision at location $x$. The probability of a collision is limited by a predetermined value $\Delta$, meaning:

$$\beta_i(x) = \Delta - p^i(x)$$

and

$$\beta_0(x) = -\Delta + p^0(x)$$

where $p^i(x)$ is the probability of colliding with the $i$-th obstacle.
A map $\varphi$ is said to be a probabilistic navigation function if it satisfies the following conditions:

1. It is a navigation function.
2. The probability of a collision is bounded by a predefined probability $\Delta$.
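As a numerical illustration, the formula above can be evaluated directly at a single configuration. The constant $K$, the risk bound $\Delta$, and the collision probabilities used here are assumed toy values, not taken from any real scenario:

```python
# Illustrative evaluation of the probabilistic navigation function at one point.

K = 3.0       # predefined constant ensuring the Morse property (assumed value)
DELTA = 0.1   # permitted collision probability (assumed value)

def gamma_d(x, x_d):
    """Squared distance to the target, ||x - x_d||^2."""
    return sum((a - b) ** 2 for a, b in zip(x, x_d))

def beta(p0, p_obstacles):
    """beta_0(x) = -DELTA + p^0(x), times beta_i(x) = DELTA - p^i(x) per obstacle."""
    b = -DELTA + p0
    for p_i in p_obstacles:
        b *= DELTA - p_i
    return b

def phi(x, x_d, p0, p_obstacles):
    g = gamma_d(x, x_d)
    return g / (g ** K + beta(p0, p_obstacles)) ** (1.0 / K)

# Example: two obstacles with small collision probabilities at this location.
value = phi(x=(1.0, 1.0), x_d=(2.0, 3.0), p0=0.9, p_obstacles=[0.01, 0.02])
at_goal = phi(x=(2.0, 3.0), x_d=(2.0, 3.0), p0=0.9, p_obstacles=[0.01, 0.02])
```

The function is zero at the target and stays strictly below one elsewhere as long as $\beta(x) > 0$, i.e. as long as each collision probability respects the bound $\Delta$.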
Navigation function in optimal control
While for certain applications it suffices to have a feasible navigation function, in many cases it is desirable to have an optimal navigation function with respect to a given cost functional $J$. Formalized as an optimal control problem, we can write

$$\text{minimize} \quad J(x_{1:T}, u_{1:T}) = \int_T L(x_t, u_t, t)\, dt$$

$$\text{subject to} \quad \dot{x}_t = f(x_t, u_t)$$
where $x$ is the state, $u$ is the control to apply, $L$ is the cost incurred at state $x$ when applying control $u$, and $f$ models the transition dynamics of the system.
Applying Bellman's principle of optimality, the optimal cost-to-go function is defined as

$$\phi(x_t) = \min_{u_t \in U(x_t)} \Big\{ L(x_t, u_t) + \phi(f(x_t, u_t)) \Big\}$$
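On a finite, deterministic state space this recursion can be solved by repeatedly applying the Bellman backup until it converges. The states, controls, and stage costs below are toy assumptions:

```python
# Sketch: solving the cost-to-go recursion by repeated Bellman backups on a
# tiny deterministic transition graph.

INF = float("inf")

STATES = ["a", "b", "c", "goal"]
# (state, control) -> (next state via f, stage cost L); assumed example data
TRANSITIONS = {
    ("a", "right"): ("b", 1.0),
    ("a", "jump"):  ("c", 4.0),
    ("b", "right"): ("c", 1.0),
    ("c", "right"): ("goal", 1.0),
}

def value_iteration(sweeps=10):
    # phi(x) = 0 on the goal, infinity until a path to the goal is found
    phi = {s: (0.0 if s == "goal" else INF) for s in STATES}
    for _ in range(sweeps):
        for (x, u), (x_next, cost) in TRANSITIONS.items():
            # Bellman backup: phi(x) = min_u { L(x, u) + phi(f(x, u)) }
            phi[x] = min(phi[x], cost + phi[x_next])
    return phi

phi = value_iteration()
# phi == {"a": 3.0, "b": 2.0, "c": 1.0, "goal": 0.0}
```

Note that the resulting $\phi$ is itself a navigation function: greedily choosing the control that minimises the backup decreases $\phi$ at every reachable non-goal state.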
Together with the axioms defined above, we can define the optimal navigation function as a function satisfying

1. $\phi(x) = 0 \ \forall x \in X_g$
2. $\phi(x) = \infty$ if and only if no point in $X_g$ is reachable from $x$.
3. For every reachable state $x \in X \setminus X_g$, the local operator produces a state $x'$ for which $\phi(x') < \phi(x)$.
4. $\phi(x_t) = \min_{u_t \in U(x_t)} \Big\{ L(x_t, u_t) + \phi(f(x_t, u_t)) \Big\}$
Even though a navigation function is an example of reactive control, it can thus also be utilized for optimal control problems, which include planning capabilities.
Stochastic navigation function
If we assume that the transition dynamics of the system or the cost function are subject to noise, we obtain a stochastic optimal control problem with cost $J(x_t, u_t)$ and dynamics $f$. In the field of reinforcement learning the cost is replaced by a reward function $R(x_t, u_t)$ and the dynamics by the transition probabilities $P(x_{t+1} \mid x_t, u_t)$.
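In the stochastic setting the Bellman backup minimises an expected cost-to-go over the transition probabilities rather than a deterministic successor. The following sketch uses an assumed three-state example:

```python
# Sketch: the deterministic successor f(x, u) is replaced by transition
# probabilities P(x' | x, u), and the backup minimises the expected cost-to-go.
# States, probabilities, and costs are toy assumptions.

STATES = ["s0", "s1", "goal"]

# P[(x, u)] -> list of (next state, probability); L[(x, u)] -> stage cost
P = {
    ("s0", "safe"):  [("s1", 1.0)],
    ("s0", "risky"): [("goal", 0.5), ("s0", 0.5)],
    ("s1", "go"):    [("goal", 1.0)],
}
L = {("s0", "safe"): 1.0, ("s0", "risky"): 1.0, ("s1", "go"): 1.0}

def backup(phi):
    """One synchronous Bellman backup over all states with available controls."""
    new_phi = dict(phi)
    for x in STATES:
        controls = [u for (s, u) in P if s == x]
        if controls:
            new_phi[x] = min(
                L[(x, u)] + sum(p * phi[x2] for x2, p in P[(x, u)])
                for u in controls
            )
    return new_phi

phi = {s: 0.0 for s in STATES}
for _ in range(100):
    phi = backup(phi)
# Both controls in s0 are equally good here: phi["s0"] converges to 2.0
```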
See also
Control theory
Optimal control
Robot control
Motion planning
Reinforcement learning
External links
NFsim: MATLAB Toolbox for motion planning using Navigation Functions.