- Source: Kelly criterion
In probability theory, the Kelly criterion (or Kelly strategy or Kelly bet) is a formula for sizing a sequence of bets by maximizing the long-term expected value of the logarithm of wealth, which is equivalent to maximizing the long-term expected geometric growth rate. John Larry Kelly Jr., a researcher at Bell Labs, described the criterion in 1956.
The practical use of the formula has been demonstrated for gambling, and the same idea was used to explain diversification in investment management. In the 2000s, Kelly-style analysis became a part of mainstream investment theory and the claim has been made that well-known successful investors including Warren Buffett and Bill Gross use Kelly methods. Also see intertemporal portfolio choice. It is also the standard replacement of statistical power in anytime-valid statistical tests and confidence intervals, based on e-values and e-processes.
Kelly criterion for binary return rates
In a system where the return on an investment or a bet is binary, so an interested party either wins or loses a fixed percentage of their bet, the expected growth rate coefficient yields a very specific solution for an optimal betting percentage.
= Gambling formula =
Where losing the bet involves losing the entire wager, the Kelly bet is:
f^{*} = p - \frac{q}{b} = p - \frac{1-p}{b}
where:
f^{*} is the fraction of the current bankroll to wager.
p is the probability of a win.
q = 1 - p is the probability of a loss.
b is the proportion of the bet gained with a win. E.g., if betting $10 on a 2-to-1 odds bet (upon a win you are returned $30, winning you $20), then b = \$20/\$10 = 2.0.
As an example, if a gamble has a 60% chance of winning (p = 0.6, q = 0.4), and the gambler receives 1-to-1 odds on a winning bet (b = 1), then to maximize the long-run growth rate of the bankroll, the gambler should bet 20% of the bankroll at each opportunity (f^{*} = 0.6 - \frac{0.4}{1} = 0.2).
If the gambler has zero edge (i.e., if b = q/p), then the criterion recommends the gambler bet nothing.
If the edge is negative (b < q/p), the formula gives a negative result, indicating that the gambler should take the other side of the bet.
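As a concrete illustration of the gambling formula above, here is a minimal Python sketch (the function name and example values are ours, not from the source) that computes f^{*} = p - q/b and reproduces the 20% result for the p = 0.6, b = 1 example:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly bet size for a full-loss wager.

    p: probability of winning the bet.
    b: net odds received on a win (profit per unit staked).
    Returns the fraction of the bankroll to wager; a negative value means
    the bet has a negative edge (take the other side, or do not bet).
    """
    q = 1.0 - p
    return p - q / b


if __name__ == "__main__":
    # 60% chance of winning at even (1-to-1) odds -> bet 20% of bankroll.
    print(kelly_fraction(p=0.6, b=1.0))        # 0.2
    # Zero edge: b = q/p = 0.4/0.6 -> bet nothing.
    print(kelly_fraction(p=0.6, b=0.4 / 0.6))  # ~0.0
```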
= Investment formula =
A more general form of the Kelly formula allows for partial losses, which is relevant for investments:
f^{*} = \frac{p}{l} - \frac{q}{g}
where:
f^{*} is the fraction of the assets to apply to the security.
p is the probability that the investment increases in value.
q is the probability that the investment decreases in value (q = 1 - p).
g is the fraction that is gained in a positive outcome. If the security price rises 10%, then g = \frac{\text{final value} - \text{original value}}{\text{original value}} = \frac{1.1 - 1}{1} = 0.1.
l is the fraction that is lost in a negative outcome. If the security price falls 10%, then l = \frac{\text{original value} - \text{final value}}{\text{original value}} = \frac{1 - 0.9}{1} = 0.1.
Note that the Kelly criterion is perfectly valid only for fully known outcome probabilities, which is almost never the case with investments. In addition, risk-averse strategies invest less than the full Kelly fraction.
The general form can be rewritten as follows:
f^{*} = \frac{p}{l}\left(1 - \frac{1-p}{p}\,\frac{l}{g}\right) = \frac{p}{l}\left(1 - \frac{1}{\mathrm{WLP}}\,\frac{1}{\mathrm{WLR}}\right)
where:
\mathrm{WLP} = \frac{p}{1-p} is the win-loss probability (WLP) ratio, which is the ratio of winning to losing bets.
\mathrm{WLR} = \frac{g}{l} is the win-loss ratio (WLR) of bet outcomes, which is the winning skew.
At least one of the factors WLP or WLR needs to be larger than 1 to have an edge (so f^{*} > 0). It is even possible for the win-loss probability ratio to be unfavorable (WLP < 1) while one still has an edge, as long as \mathrm{WLP} \cdot \mathrm{WLR} > 1.
The Kelly formula can easily result in a fraction higher than 1, such as when the losing size l \ll 1 (see the above expression with factors of WLR and WLP). This happens somewhat counterintuitively, because the Kelly fraction formula compensates for a small losing size with a larger bet. However, in most real situations there is high uncertainty about all parameters entering the Kelly formula. In the case of a Kelly fraction higher than 1, it is theoretically advantageous to use leverage to purchase additional securities on margin.
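The following sketch (the function names and example values are our own, not from the source) implements the partial-loss investment formula f^{*} = p/l - q/g and its equivalent WLP/WLR form, and illustrates how a small losing size l pushes the recommended fraction above 1:

```python
def kelly_investment(p: float, g: float, l: float) -> float:
    """Kelly fraction for an investment with partial losses.

    p: probability the investment gains in value.
    g: fractional gain in the positive outcome (e.g. 0.1 for +10%).
    l: fractional loss in the negative outcome (e.g. 0.1 for -10%).
    """
    q = 1.0 - p
    return p / l - q / g


def kelly_investment_wlp_wlr(p: float, g: float, l: float) -> float:
    """Equivalent form written in terms of WLP = p/(1-p) and WLR = g/l."""
    wlp = p / (1.0 - p)
    wlr = g / l
    return (p / l) * (1.0 - 1.0 / (wlp * wlr))


if __name__ == "__main__":
    # 60% chance of a +10% move, 40% chance of a -10% move.
    print(kelly_investment(0.6, 0.10, 0.10))          # 2.0 (200%, i.e. leverage)
    print(kelly_investment_wlp_wlr(0.6, 0.10, 0.10))  # same value
    # A tiny losing size drives the fraction far above 1.
    print(kelly_investment(0.55, 0.10, 0.01))         # 50.5
```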
= Betting example – behavioural experiment =
In a study, each participant was given $25 and asked to place even-money bets on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. But the behavior of the test subjects was far from optimal:
Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment.
Using the Kelly criterion and based on the odds in the experiment (ignoring the cap of $250 and the finite duration of the test), the right approach would be to bet 20% of one's bankroll on each toss of the coin, which works out to a 2.034% average gain each round. This is a geometric mean, not the arithmetic rate of 4% (r = 0.2 × (0.6 − 0.4) = 0.04). The theoretical expected wealth after 300 rounds works out to $10,505 (= 25 \cdot (1.02034)^{300}) if it were not capped.
In this particular game, because of the cap, a strategy of betting only 12% of the pot on each toss would have even better results (a 95% probability of reaching the cap and an average payout of $242.03).
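A short Monte Carlo sketch (our own illustration; the trial count and structure are assumptions, not taken from the study) can reproduce the flavour of these results by comparing betting fractions under the $250 cap:

```python
import random

def simulate(fraction: float, p: float = 0.6, bankroll: float = 25.0,
             cap: float = 250.0, rounds: int = 300, trials: int = 5000):
    """Average payout and probability of hitting the cap for a fixed betting fraction."""
    total, hits = 0.0, 0
    for _ in range(trials):
        w = bankroll
        for _ in range(rounds):
            if w <= 0.0 or w >= cap:
                break
            stake = fraction * w
            w += stake if random.random() < p else -stake
        w = min(w, cap)
        total += w
        hits += w >= cap
    return total / trials, hits / trials

if __name__ == "__main__":
    for f in (0.12, 0.20, 1.00):
        avg, p_cap = simulate(f)
        print(f"fraction {f:.2f}: average payout ${avg:7.2f}, P(reach cap) {p_cap:.1%}")
```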
= Proof =
Heuristic proofs of the Kelly criterion are straightforward. The Kelly criterion maximizes the expected value of the logarithm of wealth (the expectation value of a function is given by the sum, over all possible outcomes, of the probability of each particular outcome multiplied by the value of the function in the event of that outcome). We start with 1 unit of wealth and bet a fraction f of that wealth on an outcome that occurs with probability p and offers odds of b. The probability of winning is p, and in that case the resulting wealth is equal to 1 + fb. The probability of losing is q = 1 - p, and the fraction of the wager lost on a negative outcome is a; in that case the resulting wealth is equal to 1 - fa. Therefore, the expected geometric growth rate r is:
r = (1 + fb)^{p} \cdot (1 - fa)^{q}
We want to find the maximum r of this curve (as a function of f), which involves finding the derivative of the equation. This is more easily accomplished by taking the logarithm of each side first; because the logarithm is monotonic, it does not change the locations of function extrema. The resulting equation is:
E = \log(r) = p \log(1 + fb) + q \log(1 - fa)
with E denoting logarithmic wealth growth. To find the value of f for which the growth rate is maximized, denoted as f^{*}, we differentiate the above expression and set this equal to zero. This gives:
\left.\frac{dE}{df}\right|_{f=f^{*}} = \frac{pb}{1 + f^{*}b} - \frac{qa}{1 - f^{*}a} = 0
Rearranging this equation to solve for the value of f^{*} gives the Kelly criterion:
f^{*} = \frac{p}{a} - \frac{q}{b}
Notice that this expression reduces to the simple gambling formula when a = 1 = 100\%, that is, when a loss results in full loss of the wager.
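As a quick sanity check of this derivation, the sketch below (a grid search of our own; the parameter values are arbitrary assumptions) numerically maximizes E(f) = p log(1+fb) + q log(1−fa) and compares the maximizer to the closed-form f^{*} = p/a - q/b:

```python
import math

def log_growth(f: float, p: float, b: float, a: float) -> float:
    """Expected log-growth per bet: E = p*log(1 + f*b) + q*log(1 - f*a)."""
    q = 1.0 - p
    return p * math.log(1.0 + f * b) + q * math.log(1.0 - f * a)

if __name__ == "__main__":
    p, b, a = 0.6, 1.0, 1.0            # 60% win, even odds, full loss on a loss
    closed_form = p / a - (1.0 - p) / b
    # Grid search over admissible fractions (must keep 1 - f*a > 0).
    grid = [i / 10000.0 for i in range(0, 9999)]
    numeric = max(grid, key=lambda f: log_growth(f, p, b, a))
    print(closed_form, numeric)        # both are approximately 0.2
```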
Kelly criterion for non-binary return rates
If the return rates on an investment or a bet are continuous in nature, the optimal growth rate coefficient must take all possible events into account.
= Application to the stock market =
In mathematical finance, if security weights maximize the expected geometric growth rate (which is equivalent to maximizing log wealth), then a portfolio is growth optimal.
The Kelly criterion shows that for a given volatile security this is satisfied when
f^{*} = \frac{\mu - r}{\sigma^{2}}
where f^{*} is the fraction of available capital invested that maximizes the expected geometric growth rate, \mu is the expected growth rate coefficient, \sigma^{2} is the variance of the growth rate coefficient, and r is the risk-free rate of return. Note that a symmetric probability density function was assumed here.
Computations of growth optimal portfolios can suffer tremendous garbage in, garbage out problems. For example, the cases below take as given the expected return and covariance structure of assets, but these parameters are at best estimates or models that have significant uncertainty. If portfolio weights are largely a function of estimation errors, then ex-post performance of a growth-optimal portfolio may differ fantastically from the ex-ante prediction. Parameter uncertainty and estimation errors are a large topic in portfolio theory. An approach to counteract the unknown risk is to invest less than the Kelly criterion.
Rough estimates are still useful. If we take an excess return of 4% and volatility of 16%, then the yearly Sharpe ratio and Kelly ratio are calculated to be 25% and 150%. The daily Sharpe ratio and Kelly ratio are 1.7% and 150%. The Sharpe ratio implies a daily win probability of p = 50\% + 1.7\%/4, where we assumed that the probability bandwidth is 4\sigma = 4\%. Now we can apply the discrete Kelly formula for f^{*} above with p = 50.425\%, a = b = 1\%, and we get another rough estimate for the Kelly fraction, f^{*} = 85\%. Both of these estimates of the Kelly fraction appear quite reasonable, yet a prudent approach suggests a further multiplication of the Kelly ratio by 50% (i.e. half-Kelly).
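These back-of-the-envelope numbers can be reproduced with a few lines of Python (a sketch under the stated assumptions; the 252-trading-day convention is our choice, and slightly different day-count conventions give the article's rounded 1.7% and 85% figures):

```python
# Rough continuous and discrete Kelly estimates for an equity index,
# following the arithmetic in the paragraph above (illustrative values,
# not market data).
mu_excess = 0.04      # expected excess return (mu - r)
sigma = 0.16          # annual volatility

kelly_continuous = mu_excess / sigma**2        # (mu - r) / sigma^2, ~156% (article rounds to 150%)
sharpe_yearly = mu_excess / sigma              # 0.25
sharpe_daily = sharpe_yearly / 252 ** 0.5      # ~1.6-1.7%, assuming 252 trading days

# Discrete approximation: +/-1% daily moves, win probability from the daily Sharpe ratio.
p = 0.5 + sharpe_daily / 4                     # probability bandwidth 4*sigma_daily = 4%
a = b = 0.01
kelly_discrete = p / a - (1 - p) / b           # ~80-85% depending on the day-count used

print(f"continuous Kelly fraction: {kelly_continuous:.0%}")
print(f"discrete Kelly fraction:   {kelly_discrete:.0%}")
print(f"half-Kelly (continuous):   {kelly_continuous / 2:.0%}")
```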
A detailed paper by Edward O. Thorp and a co-author estimates the Kelly fraction to be 117% for the American S&P 500 stock index.
Significant downside tail risk in equity markets is another reason to reduce the Kelly fraction from the naive estimate (for instance, to half-Kelly).
Proof
A rigorous and general proof can be found in Kelly's original paper or in some of the other references listed below. Some corrections have been published.
We give the following non-rigorous argument for the case with b = 1 (a 50:50 "even money" bet) to show the general idea and provide some insights.
When b = 1, a Kelly bettor bets 2p - 1 times their initial wealth W, as shown above. If they win, they have 2pW after one bet. If they lose, they have 2(1-p)W. Suppose they make N bets like this, and win K times out of this series of N bets. The resulting wealth will be:
2^{N} p^{K} (1-p)^{N-K} W.
The ordering of the wins and losses does not affect the resulting wealth. Suppose another bettor bets a different amount, (2p - 1 + \Delta)W, for some value of \Delta (where \Delta may be positive or negative). They will have (2p + \Delta)W after a win and [2(1-p) - \Delta]W after a loss. After the same series of wins and losses as the Kelly bettor, they will have:
(2p + \Delta)^{K} [2(1-p) - \Delta]^{N-K} W
Take the derivative of this with respect to \Delta and get:
K (2p + \Delta)^{K-1} [2(1-p) - \Delta]^{N-K} W - (N-K)(2p + \Delta)^{K} [2(1-p) - \Delta]^{N-K-1} W
The function is maximized when this derivative is equal to zero, which occurs at:
K [2(1-p) - \Delta] = (N-K)(2p + \Delta)
which implies that
\Delta = 2\left(\frac{K}{N} - p\right)
but the proportion of winning bets will eventually converge to:
\lim_{N \to +\infty} \frac{K}{N} = p
according to the weak law of large numbers.
So in the long run, final wealth is maximized by setting \Delta to zero, which means following the Kelly strategy.
This illustrates that Kelly has both a deterministic and a stochastic component. If one knows K and N and wishes to pick a constant fraction of wealth to bet each time (otherwise one could cheat and, for example, bet zero after the Kth win knowing that the rest of the bets will lose), one will end up with the most money if one bets:
\left(2\frac{K}{N} - 1\right) W
each time. This is true whether N is small or large. The "long run" part of Kelly is necessary because K is not known in advance, just that as N gets large, K will approach pN. Someone who bets more than Kelly can do better if K > pN for a stretch; someone who bets less than Kelly can do better if K < pN for a stretch, but in the long run, Kelly always wins.
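The \Delta argument above can be checked numerically. The sketch below (our own, with arbitrary p and N, and the long-run case K = pN) confirms that final wealth as a function of \Delta peaks at \Delta = 0:

```python
def final_wealth(delta: float, p: float, N: int, K: int, W: float = 1.0) -> float:
    """Wealth after K wins and N-K losses when betting (2p - 1 + delta)*wealth each time."""
    return (2 * p + delta) ** K * (2 * (1 - p) - delta) ** (N - K) * W

if __name__ == "__main__":
    p, N = 0.6, 100
    K = int(p * N)                                     # the long-run case K/N = p
    deltas = [i / 1000.0 - 0.1 for i in range(201)]    # scan delta over [-0.1, 0.1]
    best = max(deltas, key=lambda d: final_wealth(d, p, N, K))
    print(best)                                        # 0.0: the Kelly bet is optimal
```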
The heuristic proof for the general case proceeds as follows.
In a single trial, if one invests the fraction f of their capital, then if the strategy succeeds, the capital at the end of the trial increases by the factor 1 - f + f(1+b) = 1 + fb, and, likewise, if the strategy fails, the capital is decreased by the factor 1 - fa. Thus at the end of N trials (with pN successes and qN failures), the starting capital of $1 yields
C_{N} = (1 + fb)^{pN} (1 - fa)^{qN}.
Maximizing \log(C_{N})/N, and consequently C_{N}, with respect to f leads to the desired result
f^{*} = p/a - q/b.
Edward O. Thorp provided a more detailed discussion of this formula for the general case. There, it can be seen that the substitution of p for the ratio of the number of "successes" to the number of trials implies that the number of trials must be very large, since p is defined as the limit of this ratio as the number of trials goes to infinity. In brief, betting f^{*} each time will likely maximize the wealth growth rate only in the case where the number of trials is very large, and p and b are the same for each trial. In practice, this is a matter of playing the same game over and over, where the probability of winning and the payoff odds are always the same. In the heuristic proof above, pN successes and qN failures are highly likely only for very large N.
= Multiple outcomes =
Kelly's criterion may be generalized to gambling on many mutually exclusive outcomes, such as in horse races. Suppose there are several mutually exclusive outcomes. The probability that the k-th horse wins the race is p_{k}, the total amount of bets placed on the k-th horse is B_{k}, and
\beta_{k} = \frac{B_{k}}{\sum_{i} B_{i}} = \frac{D}{1 + Q_{k}},
where Q_{k} are the pay-off odds, D = 1 - tt is the dividend rate (where tt is the track take or tax), and \frac{D}{\beta_{k}} is the revenue rate after deduction of the track take when the k-th horse wins. The fraction of the bettor's funds to bet on the k-th horse is f_{k}. Kelly's criterion for gambling with multiple mutually exclusive outcomes gives an algorithm for finding the optimal set S^{o} of outcomes on which it is reasonable to bet, and it gives an explicit formula for finding the optimal fractions f_{k}^{o} of the bettor's wealth to be bet on the outcomes included in the optimal set S^{o}.
The algorithm for the optimal set of outcomes consists of four steps:
Calculate the expected revenue rate for all possible (or only for several of the most promising) outcomes:
er_{i} = \frac{D p_{i}}{\beta_{i}} = p_{i}(Q_{i} + 1)
Reorder the outcomes so that the new sequence er_{k} is non-increasing. Thus er_{1} will be the best bet.
Set S = \varnothing (the empty set), k = 1, R(S) = 1. Thus the best bet er_{k} = er_{1} will be considered first.
Repeat:
If er_{k} = \frac{D}{\beta_{k}} p_{k} > R(S), then insert the k-th outcome into the set: S = S \cup \{k\}, recalculate R(S) according to the formula:
R(S) = \frac{D \sum_{k \notin S} p_{k}}{D - \sum_{k \in S} \beta_{k}}
and then set k = k + 1. Otherwise, set S^{o} = S and stop the repetition.
If the optimal set S^{o} is empty then do not bet at all. If the set S^{o} of optimal outcomes is not empty, then the optimal fraction f_{k}^{o} to bet on the k-th outcome may be calculated from this formula:
f_{i} = p_{i} - \beta_{i} \frac{\sum_{k \notin S} p_{k}}{D - \sum_{k \in S} \beta_{k}}.
One may prove that
R(S^{o}) = 1 - \sum_{i \in S^{o}} f_{i}^{o}
where the right-hand side is the reserve rate. Therefore, the requirement er_{k} = \frac{D}{\beta_{k}} p_{k} > R(S) may be interpreted as follows: the k-th outcome is included in the set S^{o} of optimal outcomes if and only if its expected revenue rate is greater than the reserve rate. The formula for the optimal fraction f_{k}^{o} may be interpreted as the excess of the expected revenue rate of the k-th horse over the reserve rate, divided by the revenue after deduction of the track take when the k-th horse wins, or as the excess of the probability of the k-th horse winning over the reserve rate, divided by the revenue after deduction of the track take when the k-th horse wins. The binary growth exponent is
G^{o} = \sum_{i \in S} p_{i}\log_{2}(er_{i}) + \left(1 - \sum_{i \in S} p_{i}\right)\log_{2}(R(S^{o})),
and the doubling time is
T_{d} = \frac{1}{G^{o}}.
This method of selection of optimal bets may also be applied when probabilities p_{k} are known only for the several most promising outcomes, while the remaining outcomes have no chance of winning. In this case it must be that \sum_{i} p_{i} < 1 and \sum_{i} \beta_{i} < 1.
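The four-step procedure above translates directly into code. The following Python sketch is our own implementation of the described algorithm (the example race and its numbers are made up for illustration):

```python
def kelly_multi_outcome(p, beta, D):
    """Optimal Kelly bets for mutually exclusive outcomes (e.g. a horse race).

    p:    list of win probabilities p_k.
    beta: list of bet proportions beta_k = B_k / sum(B_i) (payout rate is D / beta_k).
    D:    dividend rate, D = 1 - track take.
    Returns (optimal_set, fractions), where fractions[i] is the bet on outcome i.
    """
    n = len(p)
    # Steps 1-2: expected revenue rates, ordered from best to worst.
    er = [D * p[i] / beta[i] for i in range(n)]
    order = sorted(range(n), key=lambda i: er[i], reverse=True)

    # Steps 3-4: grow the set while the next er exceeds the reserve rate R(S).
    S, R = set(), 1.0
    for i in order:
        if er[i] > R:
            S.add(i)
            R = (D * sum(p[k] for k in range(n) if k not in S)
                 / (D - sum(beta[k] for k in S)))
        else:
            break

    fractions = [0.0] * n
    if S:
        residual = (sum(p[k] for k in range(n) if k not in S)
                    / (D - sum(beta[k] for k in S)))
        for i in S:
            fractions[i] = p[i] - beta[i] * residual
    return S, fractions


if __name__ == "__main__":
    # Hypothetical three-horse race with a 15% track take.
    p = [0.5, 0.3, 0.2]
    beta = [0.6, 0.25, 0.15]
    print(kelly_multi_outcome(p, beta, D=0.85))
```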
= Stock investments =
The second-order Taylor polynomial can be used as a good approximation of the main criterion. Primarily, it is useful for stock investment, where the fraction devoted to investment is based on simple characteristics that can be easily estimated from existing historical data – expected value and variance. This approximation may offer similar results to the original criterion, but in some cases the solution obtained may be infeasible.
For single assets (stock, index fund, etc.), and a risk-free rate, it is easy to obtain the optimal fraction to invest through geometric Brownian motion.
The stochastic differential equation governing the evolution of a lognormally distributed asset S at time t (S_{t}) is
dS_{t}/S_{t} = \mu\, dt + \sigma\, dW_{t}
whose solution is
S_{t} = S_{0}\exp\left(\left(\mu - \frac{\sigma^{2}}{2}\right)t + \sigma W_{t}\right)
where W_{t} is a Wiener process, and \mu (the percentage drift) and \sigma (the percentage volatility) are constants. Taking expectations of the logarithm:
\mathbb{E}\log(S_{t}) = \log(S_{0}) + \left(\mu - \frac{\sigma^{2}}{2}\right)t.
Then the expected log return R_{s} is
R_{s} = \mu - \frac{\sigma^{2}}{2}.
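A quick Monte Carlo sketch (our own check, with assumed parameter values) confirms that the sample mean of \log S_{t} matches \log S_{0} + (\mu - \sigma^{2}/2)t:

```python
import numpy as np

# Monte Carlo check that E[log S_t] = log S_0 + (mu - sigma^2/2) * t
# for geometric Brownian motion with the parameters assumed below.
rng = np.random.default_rng(0)
S0, mu, sigma, t, n = 1.0, 0.08, 0.20, 1.0, 1_000_000

W_t = rng.normal(0.0, np.sqrt(t), size=n)            # Wiener process sampled at time t
log_S_t = np.log(S0) + (mu - 0.5 * sigma**2) * t + sigma * W_t

print(log_S_t.mean())                                # ~0.06 (sampled)
print(np.log(S0) + (mu - 0.5 * sigma**2) * t)        #  0.06 (theory)
```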
Consider a portfolio made of an asset S and a bond paying risk-free rate r, with fraction f invested in S and (1 - f) in the bond. The aforementioned equation for dS_{t} must be modified by this fraction, i.e. dS'_{t} = f\, dS_{t}, with associated solution
S'_{t} = S'_{0}\exp\left(\left(f\mu - \frac{(f\sigma)^{2}}{2}\right)t + f\sigma W_{t}\right)
The expected one-period return is given by
\mathbb{E}\left(\left[\frac{S'_{1}}{S'_{0}} - 1\right] + (1-f)r\right) = \mathbb{E}\left(\left[\exp\left(\left(f\mu - \frac{(f\sigma)^{2}}{2}\right) + f\sigma W_{1}\right) - 1\right]\right) + (1-f)r
For small \mu, \sigma, and W_{t}, the solution can be expanded to first order to yield an approximate increase in wealth
G(f) = f\mu - \frac{(f\sigma)^{2}}{2} + (1-f)r.
Solving \max(G(f)) we obtain
f^{*} = \frac{\mu - r}{\sigma^{2}}.
f^{*} is the fraction that maximizes the expected logarithmic return, and so is the Kelly fraction.
Thorp arrived at the same result but through a different derivation.
Remember that \mu is different from the asset log return R_{s}. Confusing the two is a common mistake made by websites and articles discussing the Kelly criterion.
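To make the distinction concrete, the sketch below (our own illustration using simulated data and assumed parameters) estimates the drift \mu from observed log returns by adding back \sigma^{2}/2 before applying f^{*} = (\mu - r)/\sigma^{2}; using the raw mean log return R_{s} instead understates the Kelly fraction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily log returns of an asset following GBM (assumed parameters).
mu_true, sigma_true, r = 0.08, 0.20, 0.02      # annualized drift, volatility, risk-free rate
dt, n_days = 1 / 252, 252 * 30
log_returns = rng.normal((mu_true - 0.5 * sigma_true**2) * dt,
                         sigma_true * np.sqrt(dt), size=n_days)

# Estimate sigma^2 and the mean log return, then recover mu = R_s + sigma^2 / 2.
sigma2_hat = log_returns.var() / dt
R_s_hat = log_returns.mean() / dt              # this is mu - sigma^2/2, NOT mu
mu_hat = R_s_hat + 0.5 * sigma2_hat

print((mu_hat - r) / sigma2_hat)   # Kelly fraction; near the true 1.5, up to sampling noise
print((R_s_hat - r) / sigma2_hat)  # common mistake: uses R_s in place of mu (near 1.0 here)
```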
For multiple assets, consider a market with n correlated stocks S_{k} with stochastic returns r_{k}, k = 1, \dots, n, and a riskless bond with return r. An investor puts a fraction u_{k} of their capital in S_{k} and the rest is invested in the bond. Without loss of generality, assume that the investor's starting capital is equal to 1.
According to the Kelly criterion one should maximize
\mathbb{E}\left[\ln\left((1+r) + \sum_{k=1}^{n} u_{k}(r_{k} - r)\right)\right].
Expanding this with a Taylor series around \vec{u_{0}} = (0, \ldots, 0) we obtain
\mathbb{E}\left[\ln(1+r) + \sum_{k=1}^{n}\frac{u_{k}(r_{k}-r)}{1+r} - \frac{1}{2}\sum_{k=1}^{n}\sum_{j=1}^{n} u_{k} u_{j}\frac{(r_{k}-r)(r_{j}-r)}{(1+r)^{2}}\right].
Thus we reduce the optimization problem to quadratic programming, and the unconstrained solution is
\vec{u^{\star}} = (1+r)\,\widehat{\Sigma}^{-1}\left(\widehat{\vec{r}} - r\right)
where \widehat{\vec{r}} and \widehat{\Sigma} are the vector of means and the matrix of second mixed noncentral moments of the excess returns.
There is also a numerical algorithm for the fractional Kelly strategies and for the optimal solution under no leverage and no short selling constraints.
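A minimal numpy sketch of the unconstrained multi-asset solution above (the return statistics are made-up assumptions; a practical implementation would also handle the leverage and short-selling constraints just mentioned):

```python
import numpy as np

def kelly_weights(mean_excess, second_moment, r):
    """Unconstrained Kelly weights u* = (1 + r) * Sigma_hat^{-1} (r_hat - r).

    mean_excess:   vector of mean excess returns (r_hat - r).
    second_moment: matrix of second mixed noncentral moments of the excess returns.
    r:             riskless rate.
    """
    return (1.0 + r) * np.linalg.solve(second_moment, mean_excess)

if __name__ == "__main__":
    # Two hypothetical assets with 5% and 3% mean excess returns.
    mean_excess = np.array([0.05, 0.03])
    # Second noncentral moments = covariance + outer product of the means.
    cov = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
    second_moment = cov + np.outer(mean_excess, mean_excess)
    print(kelly_weights(mean_excess, second_moment, r=0.02))
```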
Bernoulli
In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets or investments, one should choose that with the highest geometric mean of outcomes. This is mathematically equivalent to the Kelly criterion, although the motivation is different (Bernoulli wanted to resolve the St. Petersburg paradox).
An English translation of the Bernoulli article was not published until 1954, but the work was well known among mathematicians and economists.
Criticism
Although the Kelly strategy's promise of doing better than any other strategy in the long run seems compelling, some economists have argued strenuously against it, mainly because an individual's specific investing constraints may override the desire for optimal growth rate. The conventional alternative is expected utility theory which says bets should be sized to maximize the expected utility of the outcome (to an individual with logarithmic utility, the Kelly bet maximizes expected utility, so there is no conflict; moreover, Kelly's original paper clearly states the need for a utility function in the case of gambling games which are played finitely many times). Even Kelly supporters usually argue for fractional Kelly (betting a fixed fraction of the amount recommended by Kelly) for a variety of practical reasons, such as wishing to reduce volatility, or protecting against non-deterministic errors in their advantage (edge) calculations. In colloquial terms, the Kelly criterion requires accurate probability values, which isn't always possible for real-world event outcomes. When a gambler overestimates their true probability of winning, the criterion value calculated will diverge from the optimal, increasing the risk of ruin.
The Kelly formula can be thought of as 'time diversification', which is taking equal risk during different sequential time periods (as opposed to taking equal risk in different assets for asset diversification). There is clearly a difference between time diversification and asset diversification, which was raised by Paul A. Samuelson. There is also a difference between ensemble-averaging (utility calculation) and time-averaging (Kelly multi-period betting over a single time path in real life). The debate was renewed by invoking ergodicity breaking. Yet the difference between ergodicity breaking and Knightian uncertainty should be recognized.
See also
Risk of ruin
Gambling and information theory
Proebsting's paradox
Merton's portfolio problem
References
External links