Stochastic game
In game theory, a stochastic game (or Markov game), introduced by Lloyd Shapley in the early 1950s, is a repeated game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
Stochastic games generalize Markov decision processes to multiple interacting decision makers, as well as strategic-form games to dynamic situations in which the environment changes in response to the players’ choices.
Two-player games
Stochastic two-player games on directed graphs are widely used for modeling and analysis of discrete systems operating in an unknown (adversarial) environment. Possible configurations of a system and its environment are represented as vertices, and the transitions correspond to actions of the system, its environment, or "nature". A run of the system then corresponds to an infinite path in the graph. Thus, a system and its environment can be seen as two players with antagonistic objectives, where one player (the system) aims at maximizing the probability of "good" runs, while the other player (the environment) aims at the opposite.
In many cases, there exists an equilibrium value of this probability, but optimal strategies for both players may not exist.
The basic concepts and algorithmic questions studied in this area are outlined below, along with some long-standing open problems and selected recent results.
Theory
The ingredients of a stochastic game are: a finite set of players $I$; a state space $S$ (either a finite set or a measurable space $(S,\mathcal{S})$); for each player $i \in I$, an action set $A^i$ (either a finite set or a measurable space $(A^i,\mathcal{A}^i)$); a transition probability $P$ from $S \times A$, where $A = \times_{i \in I} A^i$ is the set of action profiles, to $S$, where $P(E \mid s,a)$ is the probability that the next state is in a set $E \in \mathcal{S}$ given the current state $s$ and the current action profile $a$; and a payoff function $g$ from $S \times A$ to $\mathbb{R}^I$, where the $i$-th coordinate of $g$, $g^i$, is the payoff to player $i$ as a function of the state $s$ and the action profile $a$.
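For finitely many states and actions, these ingredients have a direct concrete representation. The following sketch is illustrative only (the class and field names are hypothetical, not from any standard library); it stores a two-player finite stochastic game as NumPy arrays indexed by state and action profile.

```python
# Minimal sketch of the ingredients of a finite two-player stochastic game.
# All names are illustrative; this is not a standard library interface.
from dataclasses import dataclass

import numpy as np


@dataclass
class StochasticGame:
    n_states: int               # |S|
    n_actions: tuple[int, int]  # (|A^1|, |A^2|)
    P: np.ndarray               # transition kernel, shape (S, A1, A2, S)
    g: np.ndarray               # payoffs, shape (S, A1, A2, 2); g[..., i] is g^i

    def validate(self) -> None:
        # each P(. | s, a) must be a probability distribution over S
        assert self.P.shape == (self.n_states, *self.n_actions, self.n_states)
        assert (self.P >= 0).all() and np.allclose(self.P.sum(axis=-1), 1.0)
```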
The game starts at some initial state $s_1$. At stage $t$, players first observe $s_t$, then simultaneously choose actions $a_t^i \in A^i$, then observe the action profile $a_t = (a_t^i)_i$, and then nature selects $s_{t+1}$ according to the probability $P(\cdot \mid s_t, a_t)$. A play of the stochastic game, $s_1, a_1, \ldots, s_t, a_t, \ldots$, defines a stream of payoffs $g_1, g_2, \ldots$, where $g_t = g(s_t, a_t)$.
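To make this dynamic concrete, the following sketch (illustrative; stationary mixed strategies are used only for simplicity) simulates a play of a small randomly generated two-player game, in the array layout of the sketch above, and records the payoff stream $g_1, g_2, \ldots$.

```python
# Simulate a play of a tiny random two-state, two-action stochastic game
# (illustrative sketch; the array layout matches the snippet above).
import numpy as np

rng = np.random.default_rng(0)
S, A1, A2 = 2, 2, 2
P = rng.dirichlet(np.ones(S), size=(S, A1, A2))  # P[s, a1, a2] is a distribution over next states
g = rng.standard_normal((S, A1, A2, 2))          # g[s, a1, a2, i] is player i's stage payoff

def play(T, x, y, s=0):
    """Run T stages under stationary mixed strategies x[s], y[s]; return the payoff stream."""
    payoffs = []
    for _ in range(T):
        a1 = rng.choice(A1, p=x[s])          # players choose actions simultaneously
        a2 = rng.choice(A2, p=y[s])
        payoffs.append(g[s, a1, a2])         # stage payoff g_t = g(s_t, a_t)
        s = rng.choice(S, p=P[s, a1, a2])    # nature draws s_{t+1} ~ P(. | s_t, a_t)
    return np.array(payoffs)                 # shape (T, 2): payoffs[t] = (g_t^1, g_t^2)

stream = play(T=1000, x=np.full((S, A1), 0.5), y=np.full((S, A2), 0.5))
```

Aggregating such a stream in different ways yields the payoff criteria defined next.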
The discounted game $\Gamma_\lambda$ with discount factor $\lambda$ ($0 < \lambda \leq 1$) is the game where the payoff to player $i$ is $\lambda \sum_{t=1}^{\infty} (1-\lambda)^{t-1} g_t^i$. The $n$-stage game is the game where the payoff to player $i$ is $\bar{g}_n^i := \frac{1}{n} \sum_{t=1}^{n} g_t^i$.
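Both criteria are simple aggregates of the payoff stream. A minimal sketch, assuming a stream array of shape (T, players) as produced by the simulation above, and truncating the infinite discounted sum at T stages:

```python
import numpy as np

def discounted_payoff(stream, lam):
    # lam * sum_{t>=1} (1 - lam)^(t-1) * g_t, truncated at len(stream) stages;
    # the weights lam * (1 - lam)^(t-1) sum to 1 over an infinite horizon.
    T = len(stream)
    weights = lam * (1.0 - lam) ** np.arange(T)
    return weights @ stream

def n_stage_average(stream, n):
    # the n-stage average payoff (1/n) * sum_{t=1}^{n} g_t
    return stream[:n].mean(axis=0)
```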
The value $v_n(s_1)$, respectively $v_\lambda(s_1)$, of a two-person zero-sum stochastic game $\Gamma_n$, respectively $\Gamma_\lambda$, with finitely many states and actions exists, and Truman Bewley and Elon Kohlberg (1976) proved that $v_n(s_1)$ converges to a limit as $n$ goes to infinity and that $v_\lambda(s_1)$ converges to the same limit as $\lambda$ goes to $0$.
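For fixed $\lambda$, the discounted value can be computed in the spirit of Shapley's original argument: $v_\lambda$ is the unique fixed point of the contraction $v(s) \mapsto \operatorname{val}\big[\lambda\, g^1(s,\cdot) + (1-\lambda) \sum_{s'} P(s' \mid s,\cdot)\, v(s')\big]$, where $\operatorname{val}$ denotes the value of a zero-sum matrix game. The sketch below is an illustration under the array layout of the earlier snippets (here $g^1$ is player 1's payoff and the game is zero-sum); it uses SciPy's linear-programming routine for the matrix-game value. Running it for a sequence of discount factors shrinking to 0 illustrates the Bewley–Kohlberg convergence numerically.

```python
# Shapley iteration for the discounted value of a finite zero-sum stochastic
# game (illustrative sketch; array layout as in the earlier snippets).
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M (row player maximizes), via LP."""
    m, n = M.shape
    c = np.r_[np.zeros(m), -1.0]                  # variables (x, v); maximize v
    A_ub = np.c_[-M.T, np.ones(n)]                # v <= (M^T x)_j for every column j
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)  # the mixed strategy x sums to 1
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return -res.fun

def v_lambda(P, g1, lam, tol=1e-10):
    """Fixed point of v(s) = val[lam * g1(s,.) + (1 - lam) * sum_s' P(s'|s,.) v(s')]."""
    v = np.zeros(P.shape[0])
    while True:
        new = np.array([matrix_game_value(lam * g1[s] + (1.0 - lam) * P[s] @ v)
                        for s in range(P.shape[0])])
        if np.max(np.abs(new - v)) < tol:         # contraction with modulus (1 - lam)
            return new
        v = new
```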
The "undiscounted" game
Γ
∞
{\displaystyle \Gamma _{\infty }}
is the game where the payoff to player
i
{\displaystyle i}
is the "limit" of the averages of the stage payoffs. Some precautions are needed in defining the value of a two-person zero-sum
Γ
∞
{\displaystyle \Gamma _{\infty }}
and in defining equilibrium payoffs of a non-zero-sum
Γ
∞
{\displaystyle \Gamma _{\infty }}
. The uniform value
v
∞
{\displaystyle v_{\infty }}
of a two-person zero-sum stochastic game
Γ
∞
{\displaystyle \Gamma _{\infty }}
exists if for every
ε
>
0
{\displaystyle \varepsilon >0}
there is a positive integer
N
{\displaystyle N}
and a strategy pair
σ
ε
{\displaystyle \sigma _{\varepsilon }}
of player 1 and
τ
ε
{\displaystyle \tau _{\varepsilon }}
of player 2 such that for every
σ
{\displaystyle \sigma }
and
τ
{\displaystyle \tau }
and every
n
≥
N
{\displaystyle n\geq N}
the expectation of
g
¯
n
i
{\displaystyle {\bar {g}}_{n}^{i}}
with respect to the probability on plays defined by
σ
ε
{\displaystyle \sigma _{\varepsilon }}
and
τ
{\displaystyle \tau }
is at least
v
∞
−
ε
{\displaystyle v_{\infty }-\varepsilon }
, and the expectation of
g
¯
n
i
{\displaystyle {\bar {g}}_{n}^{i}}
with respect to the probability on plays defined by
σ
{\displaystyle \sigma }
and
τ
ε
{\displaystyle \tau _{\varepsilon }}
is at most
v
∞
+
ε
{\displaystyle v_{\infty }+\varepsilon }
. Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a uniform value.
If there is a finite number of players and the action sets and the set of states are finite, then a stochastic game with a finite number of stages always has a Nash equilibrium. The same is true for a game with infinitely many stages if the total payoff is the discounted sum.
The non-zero-sum stochastic game $\Gamma_\infty$ has a uniform equilibrium payoff $v_\infty$ if for every $\varepsilon > 0$ there are a positive integer $N$ and a strategy profile $\sigma$ such that, for every unilateral deviation by a player $i$, i.e., a strategy profile $\tau$ with $\sigma^j = \tau^j$ for all $j \neq i$, and every $n \geq N$, the expectation of $\bar{g}_n^i$ with respect to the probability on plays defined by $\sigma$ is at least $v_\infty^i - \varepsilon$, and the expectation of $\bar{g}_n^i$ with respect to the probability on plays defined by $\tau$ is at most $v_\infty^i + \varepsilon$. Nicolas Vieille has shown that all two-person stochastic games with finite state and action spaces have a uniform equilibrium payoff.
The non-zero-sum stochastic game $\Gamma_\infty$ has a limiting-average equilibrium payoff $v_\infty$ if for every $\varepsilon > 0$ there is a strategy profile $\sigma$ such that, for every unilateral deviation $\tau$ by a player $i$ (that is, a strategy profile $\tau$ with $\sigma^j = \tau^j$ for all $j \neq i$), the expectation of the limit inferior of the averages of the stage payoffs with respect to the probability on plays defined by $\sigma$ is at least $v_\infty^i - \varepsilon$, and the expectation of the limit superior of the averages of the stage payoffs with respect to the probability on plays defined by $\tau$ is at most $v_\infty^i + \varepsilon$. Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a limiting-average value, and Nicolas Vieille has shown that all two-person stochastic games with finite state and action spaces have a limiting-average equilibrium payoff. In particular, these results imply that these games have a value and an approximate equilibrium payoff, called the liminf-average (respectively, limsup-average) equilibrium payoff, when the total payoff is the limit inferior (respectively, the limit superior) of the averages of the stage payoffs.
Whether every stochastic game with finitely many players, states, and actions has a uniform equilibrium payoff, a limiting-average equilibrium payoff, or even a liminf-average equilibrium payoff is a challenging open question.
A Markov perfect equilibrium is a refinement of the concept of subgame perfect Nash equilibrium to stochastic games.
Stochastic games have been combined with Bayesian games to model uncertainty over player strategies. The resulting stochastic Bayesian game model is solved via a recursive combination of the Bayesian Nash equilibrium equation and the Bellman optimality equation.
Applications
Stochastic games have applications in economics, evolutionary biology and computer networks. They are generalizations of repeated games, which correspond to the special case where there is only one state.
See also
Stochastic process
Further reading
Filar, J. & Vrieze, K. (1997). Competitive Markov Decision Processes. Springer-Verlag. ISBN 0-387-94805-8.
Neyman, A. & Sorin, S. (2003). Stochastic Games and Applications. Dordrecht: Kluwer Academic Press. ISBN 1-4020-1492-9.
Shoham, Y. & Leyton-Brown, K. (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press. pp. 153–156. ISBN 978-0-521-89943-7. (suitable for undergraduates; main results, no proofs)
External links
Lecture on Stochastic Two-Player Games by Antonin Kucera