Matrix Chernoff bound
For certain applications in linear algebra, it is useful to know properties of the probability distribution of the largest eigenvalue of a finite sum of random matrices. Suppose $\{\mathbf{X}_k\}$ is a finite sequence of random matrices. Analogous to the well-known Chernoff bound for sums of scalars, a bound on the following is sought for a given parameter $t$:

$$\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq t\right\}$$

The following theorems answer this general question under various assumptions; these assumptions are named below by analogy to their classical, scalar counterparts. All of these theorems can be found in (Tropp 2010), as the specific application of a general result which is derived below. A summary of related works is given.
Matrix Gaussian and Rademacher series
= Self-adjoint matrices case =
Consider a finite sequence $\{\mathbf{A}_k\}$ of fixed, self-adjoint matrices with dimension $d$, and let $\{\xi_k\}$ be a finite sequence of independent standard normal or independent Rademacher random variables.
Then, for all $t \geq 0$,

$$\Pr\left\{\lambda_{\max}\left(\sum_k \xi_k \mathbf{A}_k\right) \geq t\right\} \leq d \cdot e^{-t^2/2\sigma^2}$$

where

$$\sigma^2 = \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$
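As a numerical sanity check, the bound can be compared with an empirical tail estimate. The following is a minimal sketch (the matrices, dimensions, threshold, and trial count are arbitrary choices for illustration, not taken from the source): it computes $\sigma^2 = \Vert\sum_k \mathbf{A}_k^2\Vert$ with NumPy and simulates a Rademacher series.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 30

# Fixed self-adjoint matrices A_k (arbitrary example data).
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((d, d))) for _ in range(n)]

# Variance parameter sigma^2 = || sum_k A_k^2 || (spectral norm).
sigma2 = np.linalg.norm(sum(Ak @ Ak for Ak in A), ord=2)

t = 3.0 * np.sqrt(sigma2)
bound = d * np.exp(-t**2 / (2 * sigma2))

# Empirical tail of lambda_max(sum_k xi_k A_k) for Rademacher xi_k.
trials = 2000
hits = 0
for _ in range(trials):
    xi = rng.choice([-1.0, 1.0], size=n)
    S = sum(x * Ak for x, Ak in zip(xi, A))
    if np.linalg.eigvalsh(S).max() >= t:
        hits += 1

print("sigma^2       :", sigma2)
print("tail bound    :", bound)
print("empirical tail:", hits / trials)
```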
= Rectangular case =
Consider a finite sequence $\{\mathbf{B}_k\}$ of fixed matrices with dimension $d_1 \times d_2$, and let $\{\xi_k\}$ be a finite sequence of independent standard normal or independent Rademacher random variables.
Define the variance parameter

$$\sigma^2 = \max\left\{ \left\Vert \sum_k \mathbf{B}_k \mathbf{B}_k^* \right\Vert, \left\Vert \sum_k \mathbf{B}_k^* \mathbf{B}_k \right\Vert \right\}.$$

Then, for all $t \geq 0$,

$$\Pr\left\{ \left\Vert \sum_k \xi_k \mathbf{B}_k \right\Vert \geq t \right\} \leq (d_1 + d_2) \cdot e^{-t^2/2\sigma^2}.$$
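A minimal sketch of the rectangular variance parameter, with hypothetical example matrices (real-valued, so $\mathbf{B}_k^* = \mathbf{B}_k^{\mathsf{T}}$):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2, n = 4, 7, 20
B = [rng.standard_normal((d1, d2)) for _ in range(n)]  # fixed example matrices

# sigma^2 = max{ ||sum_k B_k B_k^*||, ||sum_k B_k^* B_k|| }
row_var = np.linalg.norm(sum(Bk @ Bk.T for Bk in B), ord=2)
col_var = np.linalg.norm(sum(Bk.T @ Bk for Bk in B), ord=2)
sigma2 = max(row_var, col_var)

t = 3.0 * np.sqrt(sigma2)
bound = (d1 + d2) * np.exp(-t**2 / (2 * sigma2))
print("sigma^2:", sigma2, " bound:", bound)
```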
Matrix Chernoff inequalities
The classical Chernoff bounds concern the sum of independent, nonnegative, and uniformly bounded random variables.
In the matrix setting, the analogous theorem concerns a sum of positive-semidefinite random matrices subject to a uniform eigenvalue bound.
= Matrix Chernoff I =
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$.
Assume that each random matrix satisfies

$$\mathbf{X}_k \succeq \mathbf{0} \quad \text{and} \quad \lambda_{\max}(\mathbf{X}_k) \leq R$$

almost surely. Define

$$\mu_{\min} = \lambda_{\min}\left(\sum_k \mathbb{E}\,\mathbf{X}_k\right) \quad \text{and} \quad \mu_{\max} = \lambda_{\max}\left(\sum_k \mathbb{E}\,\mathbf{X}_k\right).$$

Then

$$\Pr\left\{\lambda_{\min}\left(\sum_k \mathbf{X}_k\right) \leq (1-\delta)\mu_{\min}\right\} \leq d \cdot \left[\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right]^{\mu_{\min}/R} \quad \text{for } \delta \in [0,1) \text{, and}$$

$$\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq (1+\delta)\mu_{\max}\right\} \leq d \cdot \left[\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right]^{\mu_{\max}/R} \quad \text{for } \delta \geq 0.$$
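Both tail bounds are simple scalar functions of $d$, $R$, $\mu_{\min}$, $\mu_{\max}$, and $\delta$. A small sketch (the numerical parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np

def matrix_chernoff_lower(d, mu_min, R, delta):
    """Bound on Pr{ lambda_min(sum X_k) <= (1 - delta) mu_min }, for 0 <= delta < 1."""
    return d * (np.exp(-delta) / (1 - delta) ** (1 - delta)) ** (mu_min / R)

def matrix_chernoff_upper(d, mu_max, R, delta):
    """Bound on Pr{ lambda_max(sum X_k) >= (1 + delta) mu_max }, for delta >= 0."""
    return d * (np.exp(delta) / (1 + delta) ** (1 + delta)) ** (mu_max / R)

# Hypothetical parameters, for illustration only.
print(matrix_chernoff_lower(d=100, mu_min=50.0, R=1.0, delta=0.5))
print(matrix_chernoff_upper(d=100, mu_max=80.0, R=1.0, delta=0.5))
```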
= Matrix Chernoff II =
Consider a sequence $\{\mathbf{X}_k : k = 1, 2, \ldots, n\}$ of independent, random, self-adjoint matrices that satisfy

$$\mathbf{X}_k \succeq \mathbf{0} \quad \text{and} \quad \lambda_{\max}(\mathbf{X}_k) \leq 1$$

almost surely. Compute the minimum and maximum eigenvalues of the average expectation,

$$\bar{\mu}_{\min} = \lambda_{\min}\left(\frac{1}{n}\sum_{k=1}^{n} \mathbb{E}\,\mathbf{X}_k\right) \quad \text{and} \quad \bar{\mu}_{\max} = \lambda_{\max}\left(\frac{1}{n}\sum_{k=1}^{n} \mathbb{E}\,\mathbf{X}_k\right).$$

Then

$$\Pr\left\{\lambda_{\min}\left(\frac{1}{n}\sum_{k=1}^{n} \mathbf{X}_k\right) \leq \alpha\right\} \leq d \cdot e^{-nD(\alpha \Vert \bar{\mu}_{\min})} \quad \text{for } 0 \leq \alpha \leq \bar{\mu}_{\min} \text{, and}$$

$$\Pr\left\{\lambda_{\max}\left(\frac{1}{n}\sum_{k=1}^{n} \mathbf{X}_k\right) \geq \alpha\right\} \leq d \cdot e^{-nD(\alpha \Vert \bar{\mu}_{\max})} \quad \text{for } \bar{\mu}_{\max} \leq \alpha \leq 1.$$

The binary information divergence is defined as

$$D(a \Vert u) = a\left(\log a - \log u\right) + (1-a)\left(\log(1-a) - \log(1-u)\right)$$

for $a, u \in [0,1]$.
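A sketch of the divergence and the lower-tail bound (parameter values are hypothetical; the usual convention $0\log 0 = 0$ is assumed at the endpoints):

```python
import numpy as np

def binary_divergence(a, u):
    """D(a || u) = a log(a/u) + (1 - a) log((1 - a)/(1 - u)), with 0 log 0 = 0."""
    value = 0.0
    if a > 0:
        value += a * (np.log(a) - np.log(u))
    if a < 1:
        value += (1 - a) * (np.log(1 - a) - np.log(1 - u))
    return value

def chernoff_ii_lower(d, n, alpha, mu_bar_min):
    """Bound on Pr{ lambda_min((1/n) sum X_k) <= alpha } for 0 <= alpha <= mu_bar_min."""
    return d * np.exp(-n * binary_divergence(alpha, mu_bar_min))

# Hypothetical parameters, for illustration only.
print(chernoff_ii_lower(d=50, n=200, alpha=0.2, mu_bar_min=0.4))
```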
Matrix Bennett and Bernstein inequalities
In the scalar setting, the Bennett and Bernstein inequalities describe the upper tail of a sum of independent, zero-mean random variables that are either bounded or subexponential. In the matrix case, the analogous results concern a sum of zero-mean random matrices.
= Bounded case =
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$.
Assume that each random matrix satisfies

$$\mathbb{E}\,\mathbf{X}_k = \mathbf{0} \quad \text{and} \quad \lambda_{\max}(\mathbf{X}_k) \leq R$$

almost surely. Compute the norm of the total variance,

$$\sigma^2 = \left\Vert \sum_k \mathbb{E}\,(\mathbf{X}_k^2) \right\Vert.$$

Then, the following chain of inequalities holds for all $t \geq 0$:

$$\begin{aligned}
\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq t\right\}
&\leq d \cdot \exp\left(-\frac{\sigma^2}{R^2} \cdot h\left(\frac{Rt}{\sigma^2}\right)\right) \\
&\leq d \cdot \exp\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right) \\
&\leq \begin{cases}
d \cdot \exp(-3t^2/8\sigma^2) \quad & \text{for } t \leq \sigma^2/R; \\
d \cdot \exp(-3t/8R) \quad & \text{for } t \geq \sigma^2/R.
\end{cases}
\end{aligned}$$

The function $h(u)$ is defined as $h(u) = (1+u)\log(1+u) - u$ for $u \geq 0$.
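The chain can be evaluated directly once $d$, $\sigma^2$, $R$, and $t$ are known. A sketch with hypothetical parameters; the three returned values should appear in increasing order, reflecting the chain of inequalities above:

```python
import numpy as np

def h(u):
    """h(u) = (1 + u) log(1 + u) - u for u >= 0."""
    return (1 + u) * np.log1p(u) - u

def matrix_bennett_bounds(d, sigma2, R, t):
    """The three successive upper bounds in the chain, in order."""
    bennett = d * np.exp(-(sigma2 / R**2) * h(R * t / sigma2))
    bernstein = d * np.exp(-(t**2 / 2) / (sigma2 + R * t / 3))
    if t <= sigma2 / R:
        simple = d * np.exp(-3 * t**2 / (8 * sigma2))
    else:
        simple = d * np.exp(-3 * t / (8 * R))
    return bennett, bernstein, simple

# Hypothetical parameters, for illustration only.
print(matrix_bennett_bounds(d=10, sigma2=4.0, R=1.0, t=10.0))
```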
= Subexponential case =
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$.
Assume that

$$\mathbb{E}\,\mathbf{X}_k = \mathbf{0} \quad \text{and} \quad \mathbb{E}\,(\mathbf{X}_k^p) \preceq \frac{p!}{2} \cdot R^{p-2} \mathbf{A}_k^2$$

for $p = 2, 3, 4, \ldots$. Compute the variance parameter,

$$\sigma^2 = \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$

Then, the following chain of inequalities holds for all $t \geq 0$:

$$\begin{aligned}
\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq t\right\}
&\leq d \cdot \exp\left(\frac{-t^2/2}{\sigma^2 + Rt}\right) \\
&\leq \begin{cases}
d \cdot \exp(-t^2/4\sigma^2) \quad & \text{for } t \leq \sigma^2/R; \\
d \cdot \exp(-t/4R) \quad & \text{for } t \geq \sigma^2/R.
\end{cases}
\end{aligned}$$
= Rectangular case =
Consider a finite sequence $\{\mathbf{Z}_k\}$ of independent, random matrices with dimension $d_1 \times d_2$.
Assume that each random matrix satisfies

$$\mathbb{E}\,\mathbf{Z}_k = \mathbf{0} \quad \text{and} \quad \Vert \mathbf{Z}_k \Vert \leq R$$

almost surely. Define the variance parameter

$$\sigma^2 = \max\left\{ \left\Vert \sum_k \mathbb{E}\,(\mathbf{Z}_k \mathbf{Z}_k^*) \right\Vert, \left\Vert \sum_k \mathbb{E}\,(\mathbf{Z}_k^* \mathbf{Z}_k) \right\Vert \right\}.$$

Then, for all $t \geq 0$,

$$\Pr\left\{ \left\Vert \sum_k \mathbf{Z}_k \right\Vert \geq t \right\} \leq (d_1 + d_2) \cdot \exp\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right)$$

holds.
Matrix Azuma, Hoeffding, and McDiarmid inequalities
= Matrix Azuma =
The scalar version of Azuma's inequality states that a scalar martingale exhibits normal concentration about its mean value, and the scale for deviations is controlled by the total maximum squared range of the difference sequence. The following is the extension to the matrix setting.
Consider a finite adapted sequence $\{\mathbf{X}_k\}$ of self-adjoint matrices with dimension $d$, and a fixed sequence $\{\mathbf{A}_k\}$ of self-adjoint matrices that satisfy

$$\mathbb{E}_{k-1}\,\mathbf{X}_k = \mathbf{0} \quad \text{and} \quad \mathbf{X}_k^2 \preceq \mathbf{A}_k^2$$

almost surely. Compute the variance parameter

$$\sigma^2 = \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$

Then, for all $t \geq 0$,

$$\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq t\right\} \leq d \cdot e^{-t^2/8\sigma^2}$$

The constant 1/8 can be improved to 1/2 when additional information is available. One case occurs when each summand $\mathbf{X}_k$ is conditionally symmetric. Another example requires the assumption that $\mathbf{X}_k$ commutes almost surely with $\mathbf{A}_k$.
= Matrix Hoeffding =
Placing the additional assumption that the summands in Matrix Azuma are independent gives a matrix extension of Hoeffding's inequalities.
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$, and let $\{\mathbf{A}_k\}$ be a sequence of fixed self-adjoint matrices.
Assume that each random matrix satisfies

$$\mathbb{E}\,\mathbf{X}_k = \mathbf{0} \quad \text{and} \quad \mathbf{X}_k^2 \preceq \mathbf{A}_k^2$$

almost surely. Then, for all $t \geq 0$,

$$\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq t\right\} \leq d \cdot e^{-t^2/8\sigma^2}$$

where

$$\sigma^2 = \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$

An improvement of this result was established in (Mackey et al. 2012): for all $t \geq 0$,

$$\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq t\right\} \leq d \cdot e^{-t^2/2\sigma^2}$$

where

$$\sigma^2 = \frac{1}{2} \left\Vert \sum_k \mathbf{A}_k^2 + \mathbb{E}\,\mathbf{X}_k^2 \right\Vert \leq \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$
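The improvement enters only through the variance parameter. A minimal sketch: the choice $\mathbf{X}_k = g_k\mathbf{A}_k$ with $g_k$ uniform on $[-1,1]$ is an arbitrary example (not from the source) satisfying $\mathbf{X}_k^2 \preceq \mathbf{A}_k^2$, for which $\mathbb{E}\,\mathbf{X}_k^2 = \mathbf{A}_k^2/3$ exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 5, 25

# Fixed self-adjoint A_k; take X_k = g_k A_k with g_k uniform on [-1, 1],
# so X_k^2 = g_k^2 A_k^2 <= A_k^2 and E X_k^2 = A_k^2 / 3.
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((d, d))) for _ in range(n)]
sum_A2 = sum(Ak @ Ak for Ak in A)

sigma2_hoeffding = np.linalg.norm(sum_A2, ord=2)                     # exponent -t^2 / (8 sigma^2)
sigma2_improved = 0.5 * np.linalg.norm(sum_A2 + sum_A2 / 3, ord=2)   # exponent -t^2 / (2 sigma^2)

t = 4.0 * np.sqrt(sigma2_hoeffding)
print("Hoeffding bound:", d * np.exp(-t**2 / (8 * sigma2_hoeffding)))
print("improved bound :", d * np.exp(-t**2 / (2 * sigma2_improved)))
```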
= Matrix bounded difference (McDiarmid) =
In the scalar setting, McDiarmid's inequality provides one common way of bounding the differences by applying Azuma's inequality to a Doob martingale. A version of the bounded differences inequality holds in the matrix setting.
Let $\{Z_k : k = 1, 2, \ldots, n\}$ be an independent family of random variables, and let $\mathbf{H}$ be a function that maps $n$ variables to a self-adjoint matrix of dimension $d$.
Consider a sequence $\{\mathbf{A}_k\}$ of fixed self-adjoint matrices that satisfy

$$\left(\mathbf{H}(z_1, \ldots, z_k, \ldots, z_n) - \mathbf{H}(z_1, \ldots, z'_k, \ldots, z_n)\right)^2 \preceq \mathbf{A}_k^2,$$

where $z_i$ and $z'_i$ range over all possible values of $Z_i$ for each index $i$.
Compute the variance parameter

$$\sigma^2 = \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$

Then, for all $t \geq 0$,

$$\Pr\left\{\lambda_{\max}\left(\mathbf{H}(\mathbf{z}) - \mathbb{E}\,\mathbf{H}(\mathbf{z})\right) \geq t\right\} \leq d \cdot e^{-t^2/8\sigma^2},$$

where $\mathbf{z} = (Z_1, \ldots, Z_n)$.
An improvement of this result was established in (Paulin, Mackey & Tropp 2013) (see also (Paulin, Mackey & Tropp 2016)): for all $t \geq 0$,

$$\Pr\left\{\lambda_{\max}\left(\mathbf{H}(\mathbf{z}) - \mathbb{E}\,\mathbf{H}(\mathbf{z})\right) \geq t\right\} \leq d \cdot e^{-t^2/\sigma^2},$$

where $\mathbf{z} = (Z_1, \ldots, Z_n)$ and

$$\sigma^2 = \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$
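A sketch for one simple choice of $\mathbf{H}$, namely $\mathbf{H}(z_1,\ldots,z_n)=\sum_k z_k\mathbf{A}_k$ with independent signs $z_k$ (a hypothetical example, not from the source): changing one sign perturbs $\mathbf{H}$ by at most $2\mathbf{A}_k$ in the semidefinite sense, so the bounded-difference matrices can be taken as $2\mathbf{A}_k$.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 5, 30
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((d, d))) for _ in range(n)]

# H(z_1, ..., z_n) = sum_k z_k A_k with independent signs z_k: replacing z_k by z_k'
# changes H by (z_k - z_k') A_k, and ((z_k - z_k') A_k)^2 <= (2 A_k)^2.
sigma2 = np.linalg.norm(sum(4 * (Ak @ Ak) for Ak in A), ord=2)

t = 3.0 * np.sqrt(sigma2)
print("original bound:", d * np.exp(-t**2 / (8 * sigma2)))
print("improved bound:", d * np.exp(-t**2 / sigma2))
```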
Survey of related theorems
The first bounds of this type were derived by (Ahlswede & Winter 2003). Recall the theorem above for self-adjoint matrix Gaussian and Rademacher bounds: for a finite sequence $\{\mathbf{A}_k\}$ of fixed, self-adjoint matrices with dimension $d$ and for $\{\xi_k\}$ a finite sequence of independent standard normal or independent Rademacher random variables,

$$\Pr\left\{\lambda_{\max}\left(\sum_k \xi_k \mathbf{A}_k\right) \geq t\right\} \leq d \cdot e^{-t^2/2\sigma^2}$$

where

$$\sigma^2 = \left\Vert \sum_k \mathbf{A}_k^2 \right\Vert.$$
Ahlswede and Winter would give the same result, except with

$$\sigma_{AW}^2 = \sum_k \lambda_{\max}\left(\mathbf{A}_k^2\right).$$

By comparison, the $\sigma^2$ in the theorem above exchanges the order of $\Sigma$ and $\lambda_{\max}$; that is, it is the largest eigenvalue of the sum rather than the sum of the largest eigenvalues. It is never larger than the Ahlswede–Winter value (by the norm triangle inequality), but can be much smaller. Therefore, the theorem above gives a tighter bound than the Ahlswede–Winter result.
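The comparison is easy to see numerically. A minimal sketch with arbitrary random example matrices (not from the source); the printed values satisfy $\sigma^2 \leq \sigma_{AW}^2$, and the gap can be large when the $\mathbf{A}_k^2$ do not share a common top eigenvector:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 10, 50
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((d, d))) for _ in range(n)]

sigma2_tropp = np.linalg.norm(sum(Ak @ Ak for Ak in A), ord=2)  # largest eigenvalue of the sum
sigma2_aw = sum(np.linalg.eigvalsh(Ak @ Ak).max() for Ak in A)  # sum of the largest eigenvalues

print("sigma^2 (Tropp)          :", sigma2_tropp)
print("sigma^2 (Ahlswede-Winter):", sigma2_aw)
```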
The chief contribution of (Ahlswede & Winter 2003) was the extension of the Laplace-transform method used to prove the scalar Chernoff bound (see Chernoff bound#Additive form (absolute error)) to the case of self-adjoint matrices. The procedure is given in the derivation below. All of the recent works on this topic follow this same procedure, and the chief differences arise in the subsequent steps. Ahlswede & Winter use the Golden–Thompson inequality to proceed, whereas Tropp (Tropp 2010) uses Lieb's theorem.
Suppose one wished to vary the length of the series ($n$) and the dimensions of the matrices ($d$) while keeping the right-hand side approximately constant. Then $n$ must vary approximately as the logarithm of $d$. Several papers have attempted to establish a bound without a dependence on dimensions. Rudelson and Vershynin (Rudelson & Vershynin 2007) give a result for matrices which are the outer product of two vectors. (Magen & Zouzias 2010) provide a result without the dimensional dependence for low-rank matrices. The original result was derived independently of the Ahlswede–Winter approach, but (Oliveira 2010b) proves a similar result using the Ahlswede–Winter approach.
Finally, Oliveira (Oliveira 2010a) proves a result for matrix martingales independently of the Ahlswede–Winter framework. Tropp (Tropp 2011) slightly improves on that result using the Ahlswede–Winter framework. Neither result is presented in this article.
Derivation and proof
= Ahlswede and Winter =
The Laplace transform argument found in (Ahlswede & Winter 2003) is a significant result in its own right:
Let $\mathbf{Y}$ be a random self-adjoint matrix. Then

$$\Pr\left\{\lambda_{\max}(\mathbf{Y}) \geq t\right\} \leq \inf_{\theta > 0}\left\{ e^{-\theta t} \cdot \operatorname{E}\left[\operatorname{tr} e^{\theta \mathbf{Y}}\right] \right\}.$$
To prove this, fix $\theta > 0$. Then

$$\begin{aligned}
\Pr\left\{\lambda_{\max}(\mathbf{Y}) \geq t\right\}
&= \Pr\left\{\lambda_{\max}(\theta\mathbf{Y}) \geq \theta t\right\} \\
&= \Pr\left\{e^{\lambda_{\max}(\theta\mathbf{Y})} \geq e^{\theta t}\right\} \\
&\leq e^{-\theta t}\operatorname{E} e^{\lambda_{\max}(\theta\mathbf{Y})} \\
&\leq e^{-\theta t}\operatorname{E} \operatorname{tr} e^{\theta\mathbf{Y}}
\end{aligned}$$

The second-to-last inequality is Markov's inequality. The last inequality holds since $e^{\lambda_{\max}(\theta\mathbf{Y})} = \lambda_{\max}(e^{\theta\mathbf{Y}}) \leq \operatorname{tr}(e^{\theta\mathbf{Y}})$; the trace dominates the largest eigenvalue because $e^{\theta\mathbf{Y}}$ is positive definite, so all of its eigenvalues are positive. Since the left-most quantity is independent of $\theta$, the infimum over $\theta > 0$ remains an upper bound for it.
Thus, our task is to understand $\operatorname{E}[\operatorname{tr}(e^{\theta\mathbf{Y}})]$. Since trace and expectation are both linear, we can commute them, so it is sufficient to consider $\operatorname{E} e^{\theta\mathbf{Y}} := \mathbf{M}_{\mathbf{Y}}(\theta)$, which we call the matrix generating function. This is where the methods of (Ahlswede & Winter 2003) and (Tropp 2010) diverge. The immediately following presentation follows (Ahlswede & Winter 2003).
The Golden–Thompson inequality implies that

$$\operatorname{tr} \mathbf{M}_{\mathbf{X}_1 + \mathbf{X}_2}(\theta) \leq \operatorname{tr}\left[\left(\operatorname{E} e^{\theta\mathbf{X}_1}\right)\left(\operatorname{E} e^{\theta\mathbf{X}_2}\right)\right] = \operatorname{tr} \mathbf{M}_{\mathbf{X}_1}(\theta)\,\mathbf{M}_{\mathbf{X}_2}(\theta),$$

where we used the linearity of expectation several times.
Suppose $\mathbf{Y} = \sum_k \mathbf{X}_k$. We can find an upper bound for $\operatorname{tr}\mathbf{M}_{\mathbf{Y}}(\theta)$ by iterating this result. Noting that $\operatorname{tr}(\mathbf{AB}) \leq \operatorname{tr}(\mathbf{A})\,\lambda_{\max}(\mathbf{B})$ (valid here since both factors are positive definite), we have

$$\operatorname{tr} \mathbf{M}_{\mathbf{Y}}(\theta) \leq \operatorname{tr}\left[\left(\operatorname{E} e^{\sum_{k=1}^{n-1}\theta\mathbf{X}_k}\right)\left(\operatorname{E} e^{\theta\mathbf{X}_n}\right)\right] \leq \operatorname{tr}\left(\operatorname{E} e^{\sum_{k=1}^{n-1}\theta\mathbf{X}_k}\right)\lambda_{\max}\left(\operatorname{E} e^{\theta\mathbf{X}_n}\right).$$
Iterating this, we get

$$\operatorname{tr} \mathbf{M}_{\mathbf{Y}}(\theta) \leq (\operatorname{tr}\mathbf{I})\left[\prod_k \lambda_{\max}\left(\operatorname{E} e^{\theta\mathbf{X}_k}\right)\right] = d\, e^{\sum_k \lambda_{\max}\left(\log\operatorname{E} e^{\theta\mathbf{X}_k}\right)}$$
So far we have found a bound with an infimum over $\theta$. In turn, this can be bounded. At any rate, one can see how the Ahlswede–Winter bound arises as the sum of largest eigenvalues.
= Tropp =
The major contribution of (Tropp 2010) is the application of Lieb's theorem where (Ahlswede & Winter 2003) had applied the Golden–Thompson inequality. Tropp's corollary is the following: If $\mathbf{H}$ is a fixed self-adjoint matrix and $\mathbf{X}$ is a random self-adjoint matrix, then

$$\operatorname{E}\operatorname{tr} e^{\mathbf{H}+\mathbf{X}} \leq \operatorname{tr} e^{\mathbf{H}+\log(\operatorname{E} e^{\mathbf{X}})}$$
Proof: Let $\mathbf{Y} = e^{\mathbf{X}}$. Then Lieb's theorem tells us that

$$f(\mathbf{Y}) = \operatorname{tr} e^{\mathbf{H}+\log(\mathbf{Y})}$$

is concave. The final step is to use Jensen's inequality to move the expectation inside the function:

$$\operatorname{E}\operatorname{tr} e^{\mathbf{H}+\log(\mathbf{Y})} \leq \operatorname{tr} e^{\mathbf{H}+\log(\operatorname{E}\mathbf{Y})}.$$

This gives us the major result of the paper: the subadditivity of the log of the matrix generating function.
Subadditivity of log mgf
Let $\mathbf{X}_k$ be a finite sequence of independent, random self-adjoint matrices. Then for all $\theta \in \mathbb{R}$,

$$\operatorname{tr} \mathbf{M}_{\sum_k \mathbf{X}_k}(\theta) \leq \operatorname{tr} e^{\sum_k \log \mathbf{M}_{\mathbf{X}_k}(\theta)}$$
Proof: It is sufficient to let $\theta = 1$. Expanding the definitions, we need to show that

$$\operatorname{E}\operatorname{tr} e^{\sum_k \theta\mathbf{X}_k} \leq \operatorname{tr} e^{\sum_k \log\operatorname{E} e^{\theta\mathbf{X}_k}}.$$
To complete the proof, we use the law of total expectation. Let $\operatorname{E}_k$ be the expectation conditioned on $\mathbf{X}_1, \ldots, \mathbf{X}_k$. Since we assume all the $\mathbf{X}_i$ are independent,

$$\operatorname{E}_{k-1} e^{\mathbf{X}_k} = \operatorname{E} e^{\mathbf{X}_k}.$$

Define

$$\mathbf{\Xi}_k = \log\operatorname{E}_{k-1} e^{\mathbf{X}_k} = \log \mathbf{M}_{\mathbf{X}_k}(\theta).$$
Finally, we have

$$\begin{aligned}
\operatorname{E}\operatorname{tr} e^{\sum_{k=1}^{n}\mathbf{X}_k}
&= \operatorname{E}_0\cdots\operatorname{E}_{n-1}\operatorname{tr} e^{\sum_{k=1}^{n-1}\mathbf{X}_k+\mathbf{X}_n} \\
&\leq \operatorname{E}_0\cdots\operatorname{E}_{n-2}\operatorname{tr} e^{\sum_{k=1}^{n-1}\mathbf{X}_k+\log(\operatorname{E}_{n-1} e^{\mathbf{X}_n})} \\
&= \operatorname{E}_0\cdots\operatorname{E}_{n-2}\operatorname{tr} e^{\sum_{k=1}^{n-2}\mathbf{X}_k+\mathbf{X}_{n-1}+\mathbf{\Xi}_n} \\
&\;\;\vdots \\
&= \operatorname{tr} e^{\sum_{k=1}^{n}\mathbf{\Xi}_k}
\end{aligned}$$

where at every step $m$ we use Tropp's corollary with

$$\mathbf{H}_m = \sum_{k=1}^{m-1}\mathbf{X}_k + \sum_{k=m+1}^{n}\mathbf{\Xi}_k.$$
Master tail bound
The following is immediate from the previous result:

$$\Pr\left\{\lambda_{\max}\left(\sum_k \mathbf{X}_k\right) \geq t\right\} \leq \inf_{\theta > 0}\left\{ e^{-\theta t}\operatorname{tr} e^{\sum_k \log\mathbf{M}_{\mathbf{X}_k}(\theta)} \right\}$$
All of the theorems given above are derived from this bound; they consist of various ways to bound the infimum. These remaining steps are significantly simpler than the proofs given above.
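For concrete distributions, the infimum can also be evaluated numerically. A sketch for the Rademacher series $\mathbf{X}_k = \xi_k\mathbf{A}_k$ (an illustrative choice, not from the source; for it, $\mathbf{M}_{\mathbf{X}_k}(\theta) = \cosh(\theta\mathbf{A}_k)$ in the matrix-function sense), using a grid over $\theta$ and comparing against the Gaussian/Rademacher series bound from above; up to grid resolution, the master bound should be at most the series bound.

```python
import numpy as np

def log_mgf_rademacher(theta, A):
    """log M_X(theta) for X = xi * A with Rademacher xi: log cosh(theta * A) as a matrix function."""
    lam, V = np.linalg.eigh(A)
    return (V * np.log(np.cosh(theta * lam))) @ V.T

def master_bound(A_list, t, thetas):
    """Numerically evaluate inf_theta e^{-theta t} tr exp(sum_k log M_{X_k}(theta)) over a grid."""
    best = np.inf
    for theta in thetas:
        S = sum(log_mgf_rademacher(theta, Ak) for Ak in A_list)
        lam = np.linalg.eigvalsh(S)
        best = min(best, np.exp(-theta * t) * np.exp(lam).sum())  # tr e^S = sum of e^{eigenvalues of S}
    return best

rng = np.random.default_rng(5)
d, n = 5, 30
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((d, d))) for _ in range(n)]
sigma2 = np.linalg.norm(sum(Ak @ Ak for Ak in A), ord=2)

t = 3.0 * np.sqrt(sigma2)
print("master tail bound :", master_bound(A, t, thetas=np.linspace(0.01, 2.0, 400)))
print("Rademacher series :", d * np.exp(-t**2 / (2 * sigma2)))
```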
References
Ahlswede, R.; Winter, A. (2003). "Strong Converse for Identification via Quantum Channels". IEEE Transactions on Information Theory. 48 (3): 569–579. arXiv:quant-ph/0012127. doi:10.1109/18.985947. S2CID 523176.
Mackey, L.; Jordan, M. I.; Chen, R. Y.; Farrell, B.; Tropp, J. A. (2012). "Matrix Concentration Inequalities via the Method of Exchangeable Pairs". The Annals of Probability. 42 (3): 906–945. arXiv:1201.6002. doi:10.1214/13-AOP892. S2CID 9635314.
Magen, A.; Zouzias, A. (2010). "Low-Rank Matrix-valued Chernoff Bounds and Approximate Matrix Multiplication". arXiv:1005.2724 [cs.DS].
Oliveira, R.I. (2010a). "Concentration of the adjacency matrix and of the Laplacian in random graphs with independent edges". arXiv:0911.0600 [math.CO].
Oliveira, R.I. (2010b). "Sums of random Hermitian matrices and an inequality by Rudelson". arXiv:1004.3821 [math.PR].
Paulin, D.; Mackey, L.; Tropp, J. A. (2013). "Deriving Matrix Concentration Inequalities from Kernel Couplings". arXiv:1305.0612 [math.PR].
Paulin, D.; Mackey, L.; Tropp, J. A. (2016). "Efron–Stein inequalities for random matrices". The Annals of Probability. 44 (5): 3431–3473. arXiv:1408.3470. doi:10.1214/15-AOP1054. S2CID 16263460.
Rudelson, M.; Vershynin, R. (2007). "Sampling from large matrices: an approach through geometric functional analysis". Journal of the Association for Computing Machinery. 54 (4 ed.). arXiv:math/9608208. Bibcode:1996math......8208R. doi:10.1145/1255443.1255449. S2CID 6054789.
Tropp, J. (2011). "Freedman's inequality for matrix martingales". arXiv:1101.3039 [math.PR].
Tropp, J. (2010). "User-friendly tail bounds for sums of random matrices". Foundations of Computational Mathematics. 12 (4): 389–434. arXiv:1004.4389. doi:10.1007/s10208-011-9099-z. S2CID 17735965.