- Source: Monotone comparative statics
Monotone comparative statics is a sub-field of comparative statics that focuses on the conditions under which endogenous variables undergo monotone changes (that is, either increasing or decreasing) when there is a change in the exogenous parameters. Traditionally, comparative statics results in economics are obtained using the implicit function theorem, an approach that requires the concavity and differentiability of the objective function as well as the interiority and uniqueness of the optimal solution. The methods of monotone comparative statics typically dispense with these assumptions. The field focuses instead on the main property underpinning monotone comparative statics, which is a form of complementarity between the endogenous variable and the exogenous parameter. Roughly speaking, a maximization problem displays complementarity if a higher value of the exogenous parameter increases the marginal return of the endogenous variable. This guarantees that the set of solutions to the optimization problem is increasing with respect to the exogenous parameter.
Basic results
= Motivation =
Let $X\subseteq \mathbb{R}$ and let $f(\cdot ;s):X\rightarrow \mathbb{R}$ be a family of functions parameterized by $s\in S$, where $(S,\geq_{S})$ is a partially ordered set (or poset, for short). How does the correspondence $\arg\max_{x\in X}f(x;s)$ vary with $s$?
Standard comparative statics approach: Assume that the set $X$ is a compact interval and $f(\cdot ;s)$ is a continuously differentiable, strictly quasiconcave function of $x$. If $\bar{x}(s)$ is the unique maximizer of $f(\cdot ;s)$, it suffices to show that $f'(\bar{x}(s);s')\geq 0$ for any $s'>s$, since this guarantees that the optimum shifts to the right, i.e., $\bar{x}(s')\geq \bar{x}(s)$. This approach makes various assumptions, most notably the quasiconcavity of $f(\cdot ;s)$.
= One-dimensional optimization problems =
While it is clear what it means for a unique optimal solution to be increasing, it is not immediately clear what it means for the correspondence $\arg\max_{x\in X}f(x;s)$ to be increasing in $s$. The standard definition adopted by the literature is the following.
Definition (strong set order): Let $Y$ and $Y'$ be subsets of $\mathbb{R}$. The set $Y'$ dominates $Y$ in the strong set order ($Y'\geq_{SSO}Y$) if for any $x'$ in $Y'$ and $x$ in $Y$, we have $\max\{x',x\}$ in $Y'$ and $\min\{x',x\}$ in $Y$.
In particular, if $Y:=\{x\}$ and $Y':=\{x'\}$, then $Y'\geq_{SSO}Y$ if and only if $x'\geq x$. The correspondence $\arg\max_{x\in X}f(x;s)$ is said to be increasing if $\arg\max_{x\in X}f(x;s')\geq_{SSO}\arg\max_{x\in X}f(x;s)$ whenever $s'>_{S}s$.
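Since the strong set order is central to everything that follows, a quick computational check can make the definition concrete. Below is a minimal sketch (illustrative, not from the source) that tests the strong set order for finite subsets of the reals.

```python
def dominates_sso(y_prime, y):
    """True if y_prime >= y in the strong set order: for every x' in
    y_prime and x in y, max{x', x} lies in y_prime and min{x', x} in y."""
    return all(max(xp, x) in y_prime and min(xp, x) in y
               for xp in y_prime for x in y)

# {2, 3} dominates {1, 2}; {1, 3} fails to dominate {2} because
# max{1, 2} = 2 is not in {1, 3}.
assert dominates_sso({2, 3}, {1, 2})
assert not dominates_sso({1, 3}, {2})
```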
The notion of complementarity between exogenous and endogenous variables is formally captured by single crossing differences.
Definition (single crossing function): Let $\phi :S\rightarrow \mathbb{R}$. Then $\phi$ is a single crossing function if for any $s'\geq_{S}s$ we have $\phi(s)\geq (>)\ 0\ \Rightarrow\ \phi(s')\geq (>)\ 0$.
Definition (single crossing differences): The family of functions $\{f(\cdot ;s)\}_{s\in S}$, $f:X\times S\to \mathbb{R}$, obey single crossing differences (or satisfy the single crossing property) if, for all $x'\geq x$, the function $\Delta(s)=f(x';s)-f(x;s)$ is a single crossing function.
Obviously, an increasing function is a single crossing function and, if $\Delta(s)$ is increasing in $s$ (in the above definition, for any $x'>x$), we say that $\{f(\cdot ;s)\}_{s\in S}$ obey increasing differences. Unlike increasing differences, single crossing differences is an ordinal property: if $\{f(\cdot ;s)\}_{s\in S}$ obey single crossing differences, then so do $\{g(\cdot ;s)\}_{s\in S}$, where $g(x;s)=H(f(x;s);s)$ for some function $H(\cdot ;s)$ that is strictly increasing in its first argument.
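On finite grids, single crossing differences can be verified directly from the definition. The following sketch (the function names and the quadratic family are illustrative assumptions, not from the source) also demonstrates ordinality: the strictly increasing transformation tanh preserves the property because it preserves the sign of every difference.

```python
import math

def is_single_crossing(values):
    """values[i] = phi(s_i) on an increasing grid of s; once phi is >= 0
    (resp. > 0) it must stay >= 0 (resp. > 0). Adjacent checks suffice."""
    return not any(lo >= 0 > hi or (lo > 0 and hi <= 0)
                   for lo, hi in zip(values, values[1:]))

def obeys_scd(f, X, S):
    """For every x' > x, s -> f(x'; s) - f(x; s) must be single crossing."""
    return all(is_single_crossing([f(xp, s) - f(x, s) for s in S])
               for x in X for xp in X if xp > x)

X = [i / 10 for i in range(11)]
S = [1, 2, 3]
f = lambda x, s: s * x - x ** 2        # obeys increasing differences
g = lambda x, s: math.tanh(f(x, s))    # strictly increasing transform of f
assert obeys_scd(f, X, S) and obeys_scd(g, X, S)
```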
Theorem 1: Define $F_{Y}(s):=\arg\max_{x\in Y}f(x;s)$. The family $\{f(\cdot ;s)\}_{s\in S}$ obey single crossing differences if and only if, for all $Y\subseteq X$, we have $F_{Y}(s')\geq_{SSO}F_{Y}(s)$ for any $s'\geq_{S}s$.
Proof: Assume $s'\geq_{S}s$, $x\in F_{Y}(s)$, and $x'\in F_{Y}(s')$. We have to show that $\max\{x',x\}\in F_{Y}(s')$ and $\min\{x',x\}\in F_{Y}(s)$. We only need to consider the case where $x>x'$. Since $x\in F_{Y}(s)$, we obtain $f(x;s)\geq f(x';s)$, which (by single crossing differences) yields $f(x;s')\geq f(x';s')$ and so guarantees that $x\in F_{Y}(s')$. Furthermore, $f(x;s)=f(x';s)$, so that $x'\in F_{Y}(s)$. If not, then $f(x;s)>f(x';s)$, which implies (by single crossing differences) that $f(x;s')>f(x';s')$, contradicting the optimality of $x'$ at $s'$. To show the necessity of single crossing differences, set $Y:=\{x,\bar{x}\}$, where $\bar{x}\geq x$. Then $F_{Y}(s')\geq_{SSO}F_{Y}(s)$ for any $s'\geq_{S}s$ guarantees that, if $f(\bar{x};s)\geq (>)\ f(x;s)$, then $f(\bar{x};s')\geq (>)\ f(x;s')$. Q.E.D.
Application (monopoly output and changes in costs): A monopolist chooses $x\in X\subseteq \mathbb{R}_{+}$ to maximise its profit $\Pi(x;-c)=xP(x)-cx$, where $P:\mathbb{R}_{+}\to \mathbb{R}_{+}$ is the inverse demand function and $c\geq 0$ is the constant marginal cost. Note that $\{\Pi(\cdot ;-c)\}_{(-c)\in \mathbb{R}_{-}}$ obey single crossing differences. Indeed, take any $x'\geq x$ and assume that $x'P(x')-cx'\geq (>)\ xP(x)-cx$; for any $c'$ such that $(-c')\geq (-c)$, we obtain $x'P(x')-c'x'\geq (>)\ xP(x)-c'x$. By Theorem 1, the profit-maximizing output decreases as the marginal cost of output increases, i.e., as $(-c)$ decreases.
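A hedged numerical illustration of this application, with an assumed linear inverse demand $P(x)=10-x$ (not from the source): the set of profit-maximizing outputs shifts down as the marginal cost rises.

```python
def monopoly_argmax(c, X):
    """Grid-search maximizers of Pi(x; -c) = x * P(x) - c * x."""
    profits = {x: x * (10 - x) - c * x for x in X}
    best = max(profits.values())
    return {x for x, v in profits.items() if v == best}

X = [i / 100 for i in range(1001)]                       # grid on [0, 10]
print(monopoly_argmax(1.0, X), monopoly_argmax(4.0, X))  # {4.5} then {3.0}
```

Analytically the maximizer is $x^{*}(c)=(10-c)/2$, so the printed sets {4.5} and {3.0} confirm that output falls as $c$ rises.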
= Interval dominance order =
Single crossing differences is not a necessary condition for the optimal solution to be increasing with respect to a parameter. In fact, the condition is necessary only for $\arg\max_{x\in Y}f(x;s)$ to be increasing in $s$ for any $Y\subset X$. Once the sets are restricted to a narrower class of subsets of $X$, the single crossing differences condition is no longer necessary.
Definition (Interval): Let $X\subseteq \mathbb{R}$. A set $Y\subseteq X$ is an interval of $X$ if, whenever $x^{*}$ and $x^{**}$ are in $Y$, any $x\in X$ such that $x^{*}\leq x\leq x^{**}$ is also in $Y$.
For example, if $X=\mathbb{N}$, then $\{1,2,3,4\}$ is an interval of $X$ but $\{1,2,4\}$ is not. Denote $[x^{*},x^{**}]=\{x\in X\ |\ x^{*}\leq x\leq x^{**}\}$.
Definition (Interval Dominance Order): The family $\{f(\cdot ;s)\}_{s\in S}$ obey the interval dominance order (IDO) if for any $x''>x'$ and $s'\geq_{S}s$ such that $f(x'';s)\geq f(x;s)$ for all $x\in [x',x'']$, we have
$$f(x'';s)\geq (>)\ f(x';s)\ \Rightarrow\ f(x'';s')\geq (>)\ f(x';s').$$
Like single crossing differences, the interval dominance order (IDO) is an ordinal property. An example of an IDO family is a family of quasiconcave functions $\{f(\cdot ;s)\}_{s\in S}$ where $\arg\max_{x\in X}f(x,s)$ is increasing in $s$. Such a family need not obey single crossing differences.
A function $f:X\times S\to \mathbb{R}$ is regular if $\arg\max_{x\in [x^{*},x^{**}]}f(x;s)$ is non-empty for any $x^{**}\geq x^{*}$, where $[x^{*},x^{**}]$ denotes the interval $\{x\in X\ |\ x^{*}\leq x\leq x^{**}\}$.
Theorem 2: Denote $F_{Y}(s):=\arg\max_{x\in Y}f(x;s)$. A family of regular functions $\{f(\cdot ;s)\}_{s\in S}$ obeys the interval dominance order if and only if $F_{Y}(s)$ is increasing in $s$ for all intervals $Y\subseteq X$.
Proof: To show the sufficiency of IDO, take any two $s'\geq_{S}s$, and assume that $x'\in F_{Y}(s)$ and $x''\in F_{Y}(s')$. We only need to consider the case where $x'>x''$. By definition $f(x';s)\geq f(x;s)$ for all $x\in [x'',x']\subset Y$. Moreover, by IDO we have $f(x';s')\geq f(x'';s')$. Therefore, $x'\in F_{Y}(s')$. Furthermore, it must be that $f(x';s)=f(x'';s)$, so that $x''\in F_{Y}(s)$. Otherwise, i.e., if $f(x';s)>f(x'';s)$, then by IDO we have $f(x';s')>f(x'';s')$, which contradicts that $x''\in F_{Y}(s')$. To show the necessity of IDO, assume that there is an interval $[x'',x']$ such that $f(x';s)\geq f(x;s)$ for all $x\in [x'',x']$. This means that $x'\in \arg\max_{x\in [x'',x']}f(x;s)$. There are two possible violations of IDO. One possibility is that $f(x'';s')>f(x';s')$. In this case, by the regularity of $f(\cdot ;s')$, the set $\arg\max_{x\in [x'',x']}f(x;s')$ is non-empty but does not contain $x'$, which is impossible since $\arg\max_{x\in [x'',x']}f(x;s)$ increases in $s$. The other possible violation of IDO occurs if $f(x'';s')=f(x';s')$ but $f(x'';s)<f(x';s)$. In this case, the set $\arg\max_{x\in [x'',x']}f(x;s')$ either contains $x''$, which is not possible since $\arg\max_{x\in [x'',x']}f(x;s)$ increases in $s$ (note that in this case $x''\not\in \arg\max_{x\in [x'',x']}f(x;s)$), or it does not contain $x'$, which also violates the monotonicity of $\arg\max_{x\in [x'',x']}f(x;s)$. Q.E.D.
The next result gives useful sufficient conditions for single crossing differences and IDO.
Proposition 1: Let $X$ be an interval of $\mathbb{R}$ and $\{f(\cdot ;s)\}_{s\in S}$ be a family of continuously differentiable functions. (i) If, for any $s'\geq_{S}s$, there exists a number $\alpha >0$ such that $f'(x;s')\geq \alpha f'(x;s)$ for all $x\in X$, then $\{f(\cdot ;s)\}_{s\in S}$ obey single crossing differences. (ii) If, for any $s'\geq_{S}s$, there exists a nondecreasing, strictly positive function $\alpha :X\rightarrow \mathbb{R}$ such that $f'(x;s')\geq \alpha(x)f'(x;s)$ for all $x\in X$, then $\{f(\cdot ;s)\}_{s\in S}$ obey IDO.
Application (Optimal stopping problem): At each moment in time, an agent gains a profit flow of $\pi(t)$, which can be positive or negative. If the agent decides to stop at time $x$, the present value of his accumulated profit is
$$V(x;-r)=\int_{0}^{x}e^{-rt}\pi(t)\,dt,$$
where $r>0$ is the discount rate. Since $V'(x;-r)=e^{-rx}\pi(x)$, the function $V$ may have many turning points, and they do not vary with the discount rate. We claim that the optimal stopping time is decreasing in $r$, i.e., if $r'>r>0$ then $\arg\max_{x\geq 0}V(x;-r)\geq_{SSO}\arg\max_{x\geq 0}V(x;-r')$. Take any $r'>r$. Then
$$V'(x;-r)=e^{-rx}\pi(x)=e^{(r'-r)x}V'(x;-r').$$
Since $\alpha(x)=e^{(r'-r)x}$ is positive and increasing, Proposition 1 says that $\{V(\cdot ;-r)\}_{(-r)<0}$ obey IDO and, by Theorem 2, the set of optimal stopping times is decreasing in $r$.
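A rough numerical check of this claim, with an assumed profit flow $\pi(t)=t\sin t$ (chosen so that later gains are larger and the stopping decision genuinely depends on the discount rate):

```python
import math

def best_stop(r, pi=lambda t: t * math.sin(t), T=10.0, dt=1e-3):
    """Grid approximation of arg max_x V(x;-r) = int_0^x e^{-rt} pi(t) dt."""
    v, t = 0.0, 0.0
    best_x, best_v = 0.0, 0.0
    while t < T:
        v += math.exp(-r * t) * pi(t) * dt   # accumulate discounted profit
        t += dt
        if v > best_v:
            best_x, best_v = t, v
    return round(best_x, 2)

# A patient agent stops near 3*pi; a more impatient one stops near pi.
print(best_stop(0.05), best_stop(0.5))
```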
= Multi-dimensional optimization problems =
The above results can be extended to a multi-dimensional setting. Let $(X,\geq_{X})$ be a lattice. For any two $x$, $x'$ in $X$, we denote their supremum (or least upper bound, or join) by $x'\vee x$ and their infimum (or greatest lower bound, or meet) by $x'\wedge x$.
Definition (Strong Set Order): Let $(X,\geq_{X})$ be a lattice and $Y$, $Y'$ be subsets of $X$. We say that $Y'$ dominates $Y$ in the strong set order ($Y'\geq_{SSO}Y$) if for any $x'$ in $Y'$ and $x$ in $Y$, we have $x\vee x'$ in $Y'$ and $x\wedge x'$ in $Y$.
Examples of the strong set order in higher dimensions:
Let $X=\mathbb{R}$ and $Y:=[a,b]$, $Y':=[a',b']$ be some closed intervals in $X$. Clearly $(X,\geq)$, where $\geq$ is the standard ordering on $\mathbb{R}$, is a lattice. Therefore, as was shown in the previous section, $Y'\geq_{SSO}Y$ if and only if $a'\geq a$ and $b'\geq b$.
Let $X=\mathbb{R}^{n}$ and $Y$, $Y'\subset X$ be some hyperrectangles. That is, there exist some vectors $a$, $b$, $a'$, $b'$ in $X$ such that $Y:=\{x\in X\ |\ a\leq x\leq b\}$ and $Y':=\{x\in X\ |\ a'\leq x\leq b'\}$, where $\geq$ is the natural, coordinate-wise ordering on $\mathbb{R}^{n}$. Note that $(X,\geq)$ is a lattice. Moreover, $Y'\geq_{SSO}Y$ if and only if $a'\geq a$ and $b'\geq b$.
Let $(X,\geq_{X})$ be the space of all probability distributions with support being a subset of $\mathbb{R}$, endowed with the first order stochastic dominance order $\geq_{X}$. Note that $(X,\geq_{X})$ is a lattice. Let $Y:=\Delta([a,b])$ and $Y':=\Delta([a',b'])$ denote the sets of probability distributions with support $[a,b]$ and $[a',b']$ respectively. Then $Y'\geq_{SSO}Y$ with respect to $\geq_{X}$ if and only if $a'\geq a$ and $b'\geq b$.
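In finite sublattices these definitions are easy to check by brute force. The sketch below (an illustrative assumption, generalizing the scalar check given earlier) uses the coordinate-wise join and meet on $\mathbb{R}^{2}$ and confirms that a "higher" box dominates a "lower" one in the strong set order.

```python
from itertools import product

def join(a, b):   # coordinate-wise maximum (least upper bound)
    return tuple(max(u, v) for u, v in zip(a, b))

def meet(a, b):   # coordinate-wise minimum (greatest lower bound)
    return tuple(min(u, v) for u, v in zip(a, b))

def dominates_sso(y_prime, y):
    return all(join(a, b) in y_prime and meet(a, b) in y
               for a in y_prime for b in y)

Y  = set(product([0, 1], repeat=2))   # the box {0,1}^2
Yp = set(product([1, 2], repeat=2))   # the box {1,2}^2
assert dominates_sso(Yp, Y)
```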
Definition (Quasisupermodular function): Let $(X,\geq_{X})$ be a lattice. The function $f:X\to \mathbb{R}$ is quasisupermodular (QSM) if
$$f(x)\geq (>)\ f(x\wedge x')\ \Rightarrow\ f(x\vee x')\geq (>)\ f(x').$$
The function $f$ is said to be a supermodular function if
$$f(x\vee x')-f(x')\geq f(x)-f(x\wedge x').$$
Every supermodular function is quasisupermodular. As in the case of single crossing differences, and unlike supermodularity, quasisupermodularity is an ordinal property. That is, if function $f$ is quasisupermodular, then so is function $g:=H\circ f$, where $H$ is some strictly increasing function.
Theorem 3: Let $(X,\geq_{X})$ be a lattice, $(S,\geq_{S})$ a partially ordered set, and $Y$, $Y'$ subsets of $X$. Given $f:X\times S\to \mathbb{R}$, we denote $\arg\max_{x\in Y}f(x;s)$ by $F_{Y}(s)$. Then $F_{Y'}(s')\geq_{SSO}F_{Y}(s)$ for any $s'\geq_{S}s$ and $Y'\geq_{SSO}Y$ if and only if $f$ is quasisupermodular in $x$ and obeys single crossing differences.
Proof: $(\Leftarrow)$ Let $Y'\geq_{SSO}Y$, $s'\geq_{S}s$, and $x'\in F_{Y'}(s')$, $x\in F_{Y}(s)$. Since $x\in F_{Y}(s)$ and $Y'\geq_{SSO}Y$, we have $f(x;s)\geq f(x'\wedge x;s)$. By quasisupermodularity, $f(x'\vee x;s)\geq f(x';s)$, and by single crossing differences, $f(x'\vee x;s')\geq f(x';s')$. Hence $x'\vee x\in F_{Y'}(s')$. Now assume that $x'\wedge x\not\in F_{Y}(s)$. Then $f(x;s)>f(x'\wedge x;s)$. By quasisupermodularity, $f(x'\vee x;s)>f(x';s)$, and by single crossing differences, $f(x'\vee x;s')>f(x';s')$. But this contradicts that $x'\in F_{Y'}(s')$. Hence, $x'\wedge x\in F_{Y}(s)$.
$(\Rightarrow)$ Set $Y':=\{x',x'\vee x\}$ and $Y:=\{x,x'\wedge x\}$. Then $Y'\geq_{SSO}Y$ and thus $F_{Y'}(s)\geq_{SSO}F_{Y}(s)$, which guarantees that, if $f(x;s)\geq (>)\ f(x'\wedge x;s)$, then $f(x'\vee x;s)\geq (>)\ f(x';s)$. To show that single crossing differences also hold, set $Y:=\{x,\bar{x}\}$, where $\bar{x}\geq x$. Then $F_{Y}(s')\geq_{SSO}F_{Y}(s)$ for any $s'\geq_{S}s$ guarantees that, if $f(\bar{x};s)\geq (>)\ f(x;s)$, then $f(\bar{x};s')\geq (>)\ f(x;s')$. Q.E.D.
Application (Production with multiple goods): Let $x$ denote the vector of inputs (drawn from a sublattice $X$ of $\mathbb{R}_{+}^{l}$) of a profit-maximizing firm, $p\in \mathbb{R}_{++}^{l}$ be the vector of input prices, and $V$ the revenue function mapping input vector $x$ to revenue (in $\mathbb{R}$). The firm's profit is $\Pi(x;p)=V(x)-p\cdot x$. For any $x'$, $x\in X$ with $x'\geq x$, $V(x')-V(x)+(-p)\cdot (x'-x)$ is increasing in $(-p)$. Hence, $\{\Pi(\cdot ;p)\}_{p\in \mathbb{R}_{++}^{l}}$ has increasing differences (and so it obeys single crossing differences). Moreover, if $V$ is supermodular, then so is $\Pi(\cdot ;p)$. Therefore, it is quasisupermodular and, by Theorem 3, $\arg\max_{x\in X}\Pi(x;p)\geq_{SSO}\arg\max_{x\in X}\Pi(x;p')$ for $p'\geq p$.
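The following sketch illustrates this application with assumed numbers (the revenue function and prices are illustrative, not from the source): the revenue function has a positive cross term, hence is supermodular, and grid search shows the optimal input bundle falling coordinate-wise as prices rise.

```python
from itertools import product

def argmax_inputs(p, grid):
    """Grid-search maximizer of Pi(x; p) = V(x) - p . x for a
    supermodular revenue V (the 0.02 * x1 * x2 cross term)."""
    V = lambda x: 2 * (x[0] ** 0.5 + x[1] ** 0.5) + 0.02 * x[0] * x[1]
    profit = {x: V(x) - p[0] * x[0] - p[1] * x[1] for x in grid}
    return max(profit, key=profit.get)

grid = list(product(range(11), repeat=2))
x_high_price = argmax_inputs((1.2, 1.2), grid)   # (1, 1)
x_low_price  = argmax_inputs((0.6, 0.6), grid)   # (4, 4)
assert all(a >= b for a, b in zip(x_low_price, x_high_price))
```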
Constrained optimization problems
In some important economic applications, the relevant change in the constraint set cannot be easily understood as an increase with respect to the strong set order and so Theorem 3 cannot be easily applied. For example, consider a consumer who maximizes a utility function $u:X\to \mathbb{R}$ subject to a budget constraint. At price $p$ in $\mathbb{R}_{++}^{n}$ and wealth $w>0$, his budget set is $B(p,w)=\{x\in X\ |\ p\cdot x\leq w\}$ and his demand set at $(p,w)$ is (by definition) $D(p,w)=\arg\max_{x\in B(p,w)}u(x)$. A basic property of consumer demand is normality, which means (in the case where demand is unique) that the demand for each good is increasing in wealth. Theorem 3 cannot be straightforwardly applied to obtain conditions for normality, because $B(p,w')\not\geq_{SSO}B(p,w)$ if $w'>w$ (when $\geq_{SSO}$ is derived from the Euclidean order). In this case, the following result holds.
Theorem 4: Suppose $u:\mathbb{R}_{++}^{n}\rightarrow \mathbb{R}$ is supermodular and concave. Then the demand correspondence is normal in the following sense: suppose $w''>w'$, $x''\in D(p,w'')$ and $x'\in D(p,w')$; then there are $z''\in D(p,w'')$ and $z'\in D(p,w')$ such that $z''\geq x'$ and $x''\geq z'$.
The supermodularity of $u$ alone guarantees that, for any $x$ and $y$, $u(x\wedge y)-u(y)\geq u(x)-u(x\vee y)$. Note that the four points $x$, $y$, $x\wedge y$, and $x\vee y$ form a rectangle in Euclidean space (in the sense that $x\wedge y-x=y-x\vee y$, $x-x\vee y=x\wedge y-y$, and $x\wedge y-x$ and $x-x\vee y$ are orthogonal). On the other hand, supermodularity and concavity together guarantee that
$$u(x\vee y-\lambda v)-u(y)\geq u(x)-u(x\wedge y+\lambda v)$$
for any $\lambda \in [0,1]$, where $v=y-x\wedge y=x\vee y-x$. In this case, crucially, the four points $x$, $y$, $x\vee y-\lambda v$, and $x\wedge y+\lambda v$ form a backward-leaning parallelogram in Euclidean space.
Monotone comparative statics under uncertainty
Let $X\subset \mathbb{R}$, and let $\{f(\cdot ;s)\}_{s\in S}$ be a family of real-valued functions defined on $X$ that obey single crossing differences or the interval dominance order. Theorems 1 and 2 tell us that $\arg\max_{x\in X}f(x;s)$ is increasing in $s$. Interpreting $s$ to be the state of the world, this says that the optimal action is increasing in the state if the state is known. Suppose, however, that the action $x$ is taken before $s$ is realized; then it seems reasonable that the optimal action should increase with the likelihood of higher states. To capture this notion formally, let $\{\lambda(\cdot ;t)\}_{t\in T}$ be a family of density functions parameterized by $t$ in the poset $(T,\geq_{T})$, where higher $t$ is associated with a higher likelihood of higher states, either in the sense of first order stochastic dominance or the monotone likelihood ratio property. Choosing under uncertainty, the agent maximizes
$$F(x;t)=\int_{S}f(x;s)\,\lambda(s;t)\,ds.$$
For $\arg\max_{x\in X}F(x;t)$ to be increasing in $t$, it suffices (by Theorems 1 and 2) that the family $\{F(\cdot ;t)\}_{t\in T}$ obey single crossing differences or the interval dominance order. The results in this section give conditions under which this holds.
Theorem 5: Suppose $\{f(\cdot ;s)\}_{s\in S}$ ($S\subseteq \mathbb{R}$) obeys increasing differences. If $\{\lambda(\cdot ;t)\}_{t\in T}$ is ordered with respect to first order stochastic dominance, then $\{F(\cdot ;t)\}_{t\in T}$ obeys increasing differences.
Proof: For any $x'\geq x$ in $X$, define $\phi(s):=f(x';s)-f(x;s)$. Then
$$F(x';t)-F(x;t)=\int_{S}[f(x';s)-f(x;s)]\,\lambda(s;t)\,ds=\int_{S}\phi(s)\,\lambda(s;t)\,ds.$$
Since $\{f(\cdot ;s)\}_{s\in S}$ obeys increasing differences, $\phi$ is increasing in $s$, and first order stochastic dominance guarantees that $F(x';t)-F(x;t)$ is increasing in $t$. Q.E.D.
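A small numerical instance of Theorem 5, with assumed numbers throughout: two states, a quadratic family with increasing differences, and two probability mass functions ordered by first order stochastic dominance.

```python
X = [i / 100 for i in range(201)]          # actions on [0, 2]
f = lambda x, s: (1 + s) * x - x ** 2      # increasing differences in (x, s)
lam = {0: [0.7, 0.3], 1: [0.3, 0.7]}       # lam[1] FOSD-dominates lam[0]

def argmax_F(t):
    F = {x: sum(p * f(x, s) for s, p in enumerate(lam[t])) for x in X}
    return max(F, key=F.get)

# Analytically x*(t) = E[1 + s | t] / 2, so the optimum rises from
# 0.65 to 0.85 as the distribution shifts towards the higher state.
print(argmax_F(0), argmax_F(1))
```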
In the following theorem, "X" can stand for either "single crossing differences" or "the interval dominance order".
Theorem 6: Suppose $\{f(\cdot ;s)\}_{s\in S}$ (for $S\subseteq \mathbb{R}$) obeys X. Then the family $\{F(\cdot ;t)\}_{t\in T}$ obeys X if $\{\lambda(\cdot ;t)\}_{t\in T}$ is ordered with respect to the monotone likelihood ratio property.
The monotone likelihood ratio condition in this theorem cannot be weakened, as the next result demonstrates.
Proposition 2: Let $\lambda(\cdot ;t'')$ and $\lambda(\cdot ;t')$ be two probability mass functions defined on $S:=\{1,2,\ldots ,N\}$ and suppose $\lambda(\cdot ;t'')$ does not dominate $\lambda(\cdot ;t')$ with respect to the monotone likelihood ratio property. Then there is a family of functions $\{f(\cdot ;s)\}_{s\in S}$, defined on $X\subset \mathbb{R}$, that obey single crossing differences, such that $\arg\max_{x\in X}F(x;t'')<\arg\max_{x\in X}F(x;t')$, where $F(x;t)=\sum_{s\in S}\lambda(s,t)f(x,s)$ (for $t=t',\,t''$).
Application (Optimal portfolio problem): An agent maximizes expected utility with a strictly increasing Bernoulli utility function $u:\mathbb{R}_{+}\to \mathbb{R}$. (Concavity is not assumed, so we allow the agent to be risk loving.) The wealth of the agent, $w>0$, can be invested in a safe or a risky asset. The prices of the two assets are normalized at 1. The safe asset gives a constant return $R\geq 0$, while the return of the risky asset $s$ is governed by the probability distribution $\lambda(s;t)$. Let $x$ denote the agent's investment in the risky asset. Then the wealth of the agent in state $s$ is $(w-x)R+xs$. The agent chooses $x$ to maximize
$$V(x;t):=\int_{S}u((w-x)R+xs)\,\lambda(s;t)\,ds.$$
Note that $\{\hat{u}(\cdot ;s)\}_{s\in S}$, where $\hat{u}(x;s):=u(wR+x(s-R))$, obeys single crossing (though not necessarily increasing) differences. By Theorem 6, $\{V(\cdot ;t)\}_{t\in T}$ obeys single crossing differences, and hence $\arg\max_{x\geq 0}V(x;t)$ is increasing in $t$, if $\{\lambda(\cdot ;t)\}_{t\in T}$ is ordered with respect to the monotone likelihood ratio property.
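A numerical sketch of this application with assumed numbers (the returns, wealth, utility, and pmfs are illustrative, not from the source): two risky-return states ordered by the monotone likelihood ratio, with the optimal risky investment rising in $t$.

```python
import math

S = [0.5, 2.0]                          # risky returns
lam = {0: [0.5, 0.5], 1: [0.3, 0.7]}    # lam[1]/lam[0] increasing in s (MLR)

def argmax_V(t, w=1.0, R=1.0):
    X = [i / 100 for i in range(201)]   # x in [0, 2] keeps wealth >= 0
    V = {x: sum(p * math.sqrt(w * R + x * (s - R))
                for s, p in zip(S, lam[t])) for x in X}
    return max(V, key=V.get)

print(argmax_V(0), argmax_V(1))   # roughly 1.0, then about 1.75
```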
Aggregation of the single crossing property
While the sum of increasing functions is also increasing, it is clear that the single crossing property need not be preserved by aggregation. For the sum of single crossing functions to have the same property requires that the functions be related to each other in a particular manner.
Definition (monotone signed-ratio): Let $(S,\geq_{S})$ be a poset. Two functions $f,g:S\to \mathbb{R}$ obey signed-ratio monotonicity if, for any $s'\geq s$, the following holds:
if $f(s)>0$ and $g(s)<0$, then
$$-{\frac {g(s)}{f(s)}}\geq -{\frac {g(s')}{f(s')}};$$
if $f(s)<0$ and $g(s)>0$, then
$$-{\frac {f(s)}{g(s)}}\geq -{\frac {f(s')}{g(s')}}.$$
Proposition 3: Let $f$ and $g$ be two single crossing functions. Then $\alpha f+\beta g$ is a single crossing function for any non-negative scalars $\alpha$ and $\beta$ if and only if $f$ and $g$ obey signed-ratio monotonicity.
Proof: Suppose that $f(s)>0$ and $g(s)<0$. Define $\alpha^{*}=-g(s)/f(s)$, so that $\alpha^{*}f(s)+g(s)=0$. Since $\alpha^{*}f+g$ is a single crossing function, it must be that $\alpha^{*}f(s')+g(s')\geq 0$ for any $s'\geq s$. Moreover, recall that since $f$ is a single crossing function, $f(s')>0$. By rearranging the above inequality, we conclude that
$$\alpha^{*}=-{\frac {g(s)}{f(s)}}\geq -{\frac {g(s')}{f(s')}}.$$
To prove the converse, without loss of generality assume that $\beta =1$. Suppose that $\alpha f(s)+g(s)\geq (>)\ 0$. If both $f(s)\geq 0$ and $g(s)\geq 0$, then $f(s')\geq 0$ and $g(s')\geq 0$ since both functions are single crossing. Hence, $\alpha f(s')+g(s')\geq (>)\ 0$. Suppose instead that $g(s)<0$ and $f(s)>0$. Since $f$ and $g$ obey signed-ratio monotonicity, it must be that
$$\alpha \geq (>)\ -{\frac {g(s)}{f(s)}}\geq -{\frac {g(s')}{f(s')}}.$$
Since $f$ is a single crossing function, $f(s')>0$, and so $\alpha f(s')+g(s')\geq (>)\ 0$. Q.E.D.
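Given the structure of this proof, signed-ratio monotonicity of two single crossing functions on a finite grid can be checked pairwise on consecutive parameter values. The sketch below uses illustrative values and relies on the inputs being single crossing, so that the denominators keep their signs.

```python
def signed_ratio_monotone(f_vals, g_vals):
    """Check both ratio conditions on consecutive points of an increasing
    grid; assumes f_vals and g_vals are single crossing sequences."""
    for i in range(len(f_vals) - 1):
        f0, g0, f1, g1 = f_vals[i], g_vals[i], f_vals[i + 1], g_vals[i + 1]
        if f0 > 0 and g0 < 0 and -g0 / f0 < -g1 / f1:
            return False
        if f0 < 0 and g0 > 0 and -f0 / g0 < -f1 / g1:
            return False
    return True

f_vals = [-1.0, 0.5, 1.0, 2.0]    # crosses zero once, from below
g_vals = [-2.0, -1.0, 0.5, 1.0]   # single crossing, with a later crossing
assert signed_ratio_monotone(f_vals, g_vals)
# By Proposition 3, alpha*f + beta*g is single crossing for alpha, beta >= 0.
```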
This result can be generalized to infinite sums in the following sense.
Theorem 7: Let $(T,{\mathcal {T}},\mu)$ be a finite measure space and suppose that, for each $s\in S$, $f(s;t)$ is a bounded and measurable function of $t\in T$. Then $F(s)=\int_{T}f(s;t)\,d\mu(t)$ is a single crossing function if, for all $t$, $t'\in T$, the pair of functions $f(s;t)$ and $f(s;t')$ of $s\in S$ satisfy signed-ratio monotonicity. This condition is also necessary if ${\mathcal {T}}$ contains all singleton sets and $F$ is required to be a single crossing function for any finite measure $\mu$.
Application (Monopoly problem under uncertainty): A firm faces uncertainty over the demand for its output $x$, and the profit at state $t\in T\subset \mathbb{R}$ is given by $\Pi(x;-c,t)=xP(x;t)-cx$, where $c$ is the marginal cost and $P(x;t)$ is the inverse demand function in state $t$. The firm maximizes
$$V(x;-c)=\int_{T}u(\Pi(x;-c,t))\,d\lambda(t),$$
where $\lambda$ is the probability measure over states and $u:\mathbb{R}\to \mathbb{R}$ is the Bernoulli utility function representing the firm's attitude towards uncertainty. By Theorem 1, $\arg\max_{x\geq 0}V(x;-c)$ is increasing in $-c$ (i.e., output falls with marginal cost) if the family $\{V(\cdot ;-c)\}_{c\in \mathbb{R}_{+}}$ obeys single crossing differences. By definition, the latter says that, for any $x'\geq x$, the function
$$\Delta(-c)=\int_{T}[u(\Pi(x';-c,t))-u(\Pi(x;-c,t))]\,d\lambda(t)$$
is a single crossing function. For each $t$, $\delta(-c,t)=u(\Pi(x';-c,t))-u(\Pi(x;-c,t))$ is a single crossing function of $-c$. However, unless $u$ is linear, $\delta$ will not, in general, be increasing in $-c$. Applying Theorem 7, $\Delta$ is a single crossing function if, for any $t',t\in T$, the functions $\delta(-c,t)$ and $\delta(-c,t')$ (of $-c$) obey signed-ratio monotonicity. This is guaranteed when (i) $P$ is decreasing in $x$ and increasing in $t$ and $\{\log(P(\cdot ,t))\}_{t\in T}$ obeys increasing differences; and (ii) $u:\mathbb{R}\to \mathbb{R}$ is twice differentiable, with $u'>0$, and obeys decreasing absolute risk aversion (DARA).
See also
Comparative statics
Microeconomics
Model (economics)
Qualitative economics
Selected literature on monotone comparative statics and its applications
Basic techniques – Milgrom and Shannon (1994), Milgrom (1994), Shannon (1995), Topkis (1998), Edlin and Shannon (1998), Athey (2002), Quah (2007), Quah and Strulovici (2009, 2012), Kukushkin (2013);
Production complementarities and their implications – Milgrom and Roberts (1990a, 1995); Topkis (1995);
Games with strategic complementarities – Milgrom and Roberts (1990b); Topkis (1979); Vives (1990);
Comparative statics of the consumer optimization problem – Antoniadou (2007); Quah (2007); Shirai (2013);
Monotone comparative statics under uncertainty – Athey (2002); Quah and Strulovici (2009, 2012);
Monotone comparative statics for models of politics – Gans and Smart (1996), Ashworth and Bueno de Mesquita (2006);
Comparative statics of optimal stopping problems – Quah and Strulovici (2009, 2013);
Monotone Bayesian games – Athey (2001); McAdams (2003); Quah and Strulovici (2012);
Bayesian games with strategic complementarities – Van Zandt (2010); Vives and Van Zandt (2007);
Auction theory – Athey (2001); McAdams (2007a,b); Reny and Zamir (2004);
Comparing information structures – Quah and Strulovici (2009);
Comparative statics in Industrial Organisation – Amir and Grilo (1999); Amir and Lambson (2003); Vives (2001);
Neoclassical optimal growth – Amir (1996b); Datta, Mirman, and Reffett (2002);
Multi-stage games – Vives (2009);
Dynamic stochastic games with infinite horizon – Amir (1996a, 2003); Balbus, Reffett, and Woźny (2013, 2014)