Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot.
Basic description
Consider a robot with an internal map of its environment. When the robot moves around, it needs to know where it is within this map. Determining its location and rotation (more generally, the pose) by using its sensor observations is known as robot localization.
Because the robot may not always behave in a perfectly predictable way, it generates many random guesses of where it is going to be next. These guesses are known as particles. Each particle contains a full description of a possible future state. When the robot observes the environment, it discards particles inconsistent with this observation, and generates more particles close to those that appear consistent. In the end, hopefully most particles converge to where the robot actually is.
State representation
The state of the robot depends on the application and design. For example, the state of a typical 2D robot may consist of a tuple $(x, y, \theta)$ for position $x, y$ and orientation $\theta$. For a robotic arm with 10 joints, it may be a tuple containing the angle at each joint: $(\theta_1, \theta_2, \ldots, \theta_{10})$.
The belief, which is the robot's estimate of its current state, is a probability density function distributed over the state space. In the MCL algorithm, the belief at a time $t$ is represented by a set of $M$ particles $X_t = \{x_t^{[1]}, x_t^{[2]}, \ldots, x_t^{[M]}\}$. Each particle contains a state, and can thus be considered a hypothesis of the robot's state. Regions in the state space with many particles correspond to a greater probability that the robot is there, and regions with few particles are unlikely to be where the robot is.
The algorithm assumes the Markov property that the current state's probability distribution depends only on the previous state (and not any ones before that), i.e., $X_t$ depends only on $X_{t-1}$. This only works if the environment is static and does not change with time. Typically, on start-up, the robot has no information on its current pose, so the particles are uniformly distributed over the configuration space.
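For instance, a start-up initialization over a bounded 2D pose space might look like the following minimal Python sketch; the map bounds and particle count are assumed values chosen for illustration.

    import math
    import random

    def init_particles(M, x_max, y_max):
        # With no prior pose information, every pose (x, y, theta) in the
        # configuration space is equally likely, so sample uniformly.
        return [(random.uniform(0.0, x_max),
                 random.uniform(0.0, y_max),
                 random.uniform(0.0, 2.0 * math.pi))
                for _ in range(M)]

    particles = init_particles(M=1000, x_max=10.0, y_max=10.0)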
Overview
Given a map of the environment, the goal of the algorithm is for the robot to determine its pose within the environment.
At every time $t$ the algorithm takes as input the previous belief $X_{t-1} = \{x_{t-1}^{[1]}, x_{t-1}^{[2]}, \ldots, x_{t-1}^{[M]}\}$, an actuation command $u_t$, and data received from sensors $z_t$; and the algorithm outputs the new belief $X_t$.
Algorithm MCL$(X_{t-1}, u_t, z_t)$:
    $\bar{X}_t = X_t = \emptyset$
    for $m = 1$ to $M$:
        $x_t^{[m]} = $ motion_update$(u_t, x_{t-1}^{[m]})$
        $w_t^{[m]} = $ sensor_update$(z_t, x_t^{[m]})$
        $\bar{X}_t = \bar{X}_t + \langle x_t^{[m]}, w_t^{[m]} \rangle$
    endfor
    for $m = 1$ to $M$:
        draw $x_t^{[m]}$ from $\bar{X}_t$ with probability $\propto w_t^{[m]}$
        $X_t = X_t + x_t^{[m]}$
    endfor
    return $X_t$
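The pseudocode maps almost line for line onto Python. The following is a minimal illustrative sketch, not a canonical implementation: the motion_update and sensor_update callables are assumed to be supplied by the user, and random.choices performs the weight-proportional draw.

    import random

    def mcl(particles, u_t, z_t, motion_update, sensor_update):
        # particles: list of state hypotheses x_{t-1}^[m]
        # u_t: actuation command; z_t: sensor reading
        # motion_update: samples a new state from the motion model
        # sensor_update: returns the likelihood of z_t given a state

        # Prediction and weighting: build the weighted set of moved particles.
        moved = [motion_update(u_t, x) for x in particles]
        weights = [sensor_update(z_t, x) for x in moved]
        # Resampling: draw M particles with probability proportional to weight.
        return random.choices(moved, weights=weights, k=len(particles))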
Example for 1D robot
Consider a robot in a one-dimensional circular corridor with three identical doors, using a sensor that returns either true or false depending on whether there is a door. At the end of three iterations of moving and sensing, most of the particles have converged on the actual position of the robot, as desired.
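A concrete instantiation of this example, plugged into the mcl sketch above, might look as follows; the corridor length, door positions, motion noise, and sensor error rate are all assumed values chosen for illustration.

    import random

    CORRIDOR = 30.0            # length of the circular corridor (assumed)
    DOORS = [5.0, 13.0, 22.0]  # positions of the three identical doors (assumed)

    def at_door(x, tol=0.5):
        # True if x lies within tol of a door, accounting for wrap-around.
        return any(min(abs(x - d), CORRIDOR - abs(x - d)) < tol for d in DOORS)

    def motion_update(u, x):
        # Commanded displacement u plus assumed Gaussian actuation noise.
        return (x + u + random.gauss(0.0, 0.2)) % CORRIDOR

    def sensor_update(z, x):
        # Boolean door sensor with an assumed 10% error rate.
        return 0.9 if z == at_door(x) else 0.1

    # No initial pose information: particles uniform over the corridor.
    particles = [random.uniform(0.0, CORRIDOR) for _ in range(1000)]
    # One iteration: the robot moved 2.0 units and its sensor reports a door.
    particles = mcl(particles, 2.0, True, motion_update, sensor_update)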
Motion update
During the motion update, the robot predicts its new location based on the actuation command given, by applying the simulated motion to each of the particles. For example, if a robot moves forward, every particle moves forward along its own heading, whatever that heading is. If a robot rotates 90 degrees clockwise, all particles rotate 90 degrees clockwise, regardless of where they are. However, in the real world, no actuator is perfect: it may overshoot or undershoot the desired amount of motion. When a robot tries to drive in a straight line, it inevitably curves to one side or the other due to minute differences in wheel radius. Hence, the motion model must compensate for noise. As a consequence, the particles inevitably spread out during the motion update. This is expected, since a robot becomes less sure of its position if it moves blindly without sensing the environment.
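As a sketch, a noisy motion model for a 2D pose $(x, y, \theta)$ could be written as follows; the command format and the noise standard deviations are assumptions for illustration, not part of the algorithm itself.

    import math
    import random

    def motion_update(u, pose, trans_noise=0.05, rot_noise=0.02):
        # u = (distance, rotation): each particle first turns, then moves
        # forward along its own heading. Gaussian noise with assumed
        # standard deviations models actuators that over- or undershoot.
        dist, rot = u
        x, y, theta = pose
        theta = (theta + rot + random.gauss(0.0, rot_noise)) % (2.0 * math.pi)
        dist = dist + random.gauss(0.0, trans_noise)
        return (x + dist * math.cos(theta), y + dist * math.sin(theta), theta)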
Sensor update
When the robot senses its environment, it updates its particles to more accurately reflect where it is. For each particle, the robot computes the probability that, had it been at the state of that particle, it would perceive what its sensors have actually sensed. It assigns a weight $w_t^{[i]}$ to each particle, proportional to this probability. Then, it randomly draws $M$ new particles from the previous belief, with probability proportional to $w_t^{[i]}$. Particles consistent with the sensor readings are more likely to be chosen (possibly more than once), and particles inconsistent with the sensor readings are rarely picked. As such, the particles converge towards a better estimate of the robot's state. This is expected, since a robot becomes increasingly sure of its position as it senses its environment.
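In practice, the draw step is often implemented with a low-variance (systematic) resampler rather than independent random draws; the sketch below shows one common formulation.

    import random

    def low_variance_resample(particles, weights):
        # Draw len(particles) samples with probability proportional to
        # weight, using one random offset and equally spaced pointers
        # through the cumulative weights; high-weight particles may be
        # kept several times, low-weight ones are rarely kept.
        M = len(particles)
        step = sum(weights) / M
        r = random.uniform(0.0, step)
        out, cumulative, i = [], weights[0], 0
        for m in range(M):
            pointer = r + m * step
            while pointer > cumulative:
                i += 1
                cumulative += weights[i]
            out.append(particles[i])
        return out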
Properties
Non-parametricity
The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation. Some other Bayesian localization algorithms, such as the Kalman filter (and variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal. For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at. In such situations, the particle filter can give better performance than parametric filters.
Another non-parametric approach to Markov localization is grid-based localization, which uses a histogram to represent the belief distribution. Compared with the grid-based approach, Monte Carlo localization is more accurate because the state represented in samples is not discretized.
Computational requirements
The particle filter's time complexity is linear with respect to the number of particles. Naturally, the more particles, the better the accuracy, so there is a compromise between speed and accuracy, and it is desirable to find an optimal value of $M$. One strategy to select $M$ is to continuously generate additional particles until the next pair of actuation command $u_t$ and sensor reading $z_t$ has arrived. This way, the greatest possible number of particles is obtained without impeding the function of the rest of the robot. As such, the implementation is adaptive to available computational resources: the faster the processor, the more particles can be generated, and therefore the more accurate the algorithm is.
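One possible realization of this strategy, assuming a deadline-based control loop; the helper name and the deadline mechanism are hypothetical.

    import time

    def sample_until_deadline(draw_particle, deadline):
        # draw_particle: hypothetical helper that draws one resampled
        # particle from the weighted belief.
        # deadline: time.monotonic() timestamp at which the next
        # command/sensor pair is due.
        particles = []
        while time.monotonic() < deadline:
            particles.append(draw_particle())
        return particles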
Compared to grid-based Markov localization, Monte Carlo localization has reduced memory usage, since memory usage depends only on the number of particles and does not scale with the size of the map; moreover, it can integrate measurements at a much higher frequency.
The algorithm can be improved using KLD sampling, as described below, which adapts the number of particles to use based on how sure the robot is of its position.
Particle deprivation
A drawback of the naive implementation of Monte Carlo localization occurs in a scenario where a robot sits at one spot and repeatedly senses the environment without moving. Suppose the particles all converge towards an erroneous state, or an occult hand picks up the robot and moves it to a new location after the particles have already converged. Particles far away from the converged state are rarely selected for the next iteration, so they become scarcer on each iteration until they disappear altogether. At this point, the algorithm is unable to recover. This problem is more likely to occur for a small number of particles, e.g., $M \leq 50$, and when the particles are spread over a large state space. In fact, any particle filter algorithm may accidentally discard all particles near the correct state during the resampling step.
One way to mitigate this issue is to randomly add extra particles on every iteration. This is equivalent to assuming that, at any point in time, the robot has some small probability of being kidnapped to a random position in the map, thus causing a fraction of random states in the motion model. By guaranteeing that no area in the map is totally deprived of particles, the algorithm is now robust against particle deprivation.
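A sketch of this mitigation, assuming a hypothetical random_pose helper and an illustrative injection fraction:

    import random

    def inject_random_particles(particles, random_pose, fraction=0.01):
        # Replace a small assumed fraction of the particles with uniformly
        # random poses, modeling a small probability that the robot was
        # "kidnapped"; this keeps every region of the map populated.
        M = len(particles)
        n_random = max(1, int(fraction * M))
        kept = random.sample(particles, M - n_random)
        return kept + [random_pose() for _ in range(n_random)]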
Variants
The original Monte Carlo localization algorithm is fairly simple. Several variants of the algorithm have been proposed, which address its shortcomings or adapt it to be more effective in certain situations.
KLD sampling
Monte Carlo localization may be improved by sampling the particles in an adaptive manner based on an error estimate using the Kullback–Leibler divergence (KLD). Initially, it is necessary to use a large $M$ due to the need to cover the entire map with a uniformly random distribution of particles. However, when the particles have converged around the same location, maintaining such a large sample size is computationally wasteful.
KLD-sampling is a variant of Monte Carlo localization where, at each iteration, a sample size $M_x$ is calculated. The sample size $M_x$ is calculated such that, with probability $1-\delta$, the error between the true posterior and the sample-based approximation is less than $\epsilon$. The variables $\delta$ and $\epsilon$ are fixed parameters.
The main idea is to create a grid (a histogram) overlaid on the state space. Each bin in the histogram is initially empty. At each iteration, a new particle is drawn from the previous (weighted) particle set with probability proportional to its weight. Instead of the resampling done in classic MCL, the KLD-sampling algorithm draws particles from the previous, weighted, particle set and applies the motion and sensor updates before placing each particle into its bin. The algorithm keeps track of the number of non-empty bins, $k$. If a particle is inserted into a previously empty bin, the value of $M_x$ is recalculated; it grows roughly linearly in $k$. This is repeated until the sample size $M$ is the same as $M_x$.
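In Fox's formulation, the recalculated bound comes from a Wilson–Hilferty approximation to the chi-square quantile; the sketch below computes it, with the exact constants to be checked against the original paper.

    import math
    from statistics import NormalDist

    def kld_sample_size(k, epsilon, delta):
        # Particles needed so that, with probability 1 - delta, the K-L
        # divergence between the sample-based approximation and the true
        # posterior is at most epsilon, given k non-empty histogram bins.
        if k <= 1:
            return 1
        z = NormalDist().inv_cdf(1.0 - delta)  # upper standard-normal quantile
        a = 2.0 / (9.0 * (k - 1))
        return int(math.ceil((k - 1) / (2.0 * epsilon)
                             * (1.0 - a + math.sqrt(a) * z) ** 3))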
It is easy to see that KLD-sampling culls redundant particles from the particle set, by only increasing $M_x$ when a new location (bin) has been filled. In practice, KLD-sampling consistently outperforms classic MCL and converges faster.