- Source: Spectral radius
In mathematics, the spectral radius of a square matrix is the maximum of the absolute values of its eigenvalues. More generally, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. The spectral radius is often denoted by ρ(·).
Definition
= Matrices =
Let λ1, ..., λn be the eigenvalues of a matrix A ∈ Cn×n. The spectral radius of A is defined as
\rho(A) = \max\{|\lambda_1|, \dotsc, |\lambda_n|\}.
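The definition translates directly into a numerical computation; the following Python/numpy sketch (the example matrix is an arbitrary illustrative choice, not from the source) computes the spectral radius from the eigenvalues.

import numpy as np

def spectral_radius(A):
    # Maximum absolute value of the eigenvalues of a square matrix A.
    return max(abs(np.linalg.eigvals(A)))

A = np.array([[2.0, 1.0],
              [0.0, -3.0]])
print(spectral_radius(A))  # the eigenvalues are 2 and -3, so this prints 3.0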
The spectral radius can be thought of as an infimum of all norms of a matrix. Indeed, on the one hand, ρ(A) ≤ ‖A‖ for every natural matrix norm ‖·‖; and on the other hand, Gelfand's formula states that ρ(A) = lim_{k→∞} ‖A^k‖^{1/k}. Both of these results are shown below.
However, the spectral radius does not necessarily satisfy ‖Av‖ ≤ ρ(A)‖v‖ for arbitrary vectors v ∈ Cn. To see why, let r > 1 be arbitrary and consider the matrix

C_r = \begin{pmatrix} 0 & r^{-1} \\ r & 0 \end{pmatrix}.
The characteristic polynomial of C_r is λ² − 1, so its eigenvalues are {−1, 1} and thus ρ(C_r) = 1. However, C_r e_1 = r e_2. As a result,

\|C_r \mathbf{e}_1\| = r > 1 = \rho(C_r) \|\mathbf{e}_1\|.
As an illustration of Gelfand's formula, note that ‖C_r^k‖^{1/k} → 1 as k → ∞, since C_r^k = I if k is even and C_r^k = C_r if k is odd.
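A short numpy check of this example is sketched below; the choice r = 2 and the use of the spectral norm are illustrative assumptions.

import numpy as np

r = 2.0
C = np.array([[0.0, 1.0 / r],
              [r,   0.0]])

rho = max(abs(np.linalg.eigvals(C)))   # spectral radius, equal to 1
e1 = np.array([1.0, 0.0])
print(np.linalg.norm(C @ e1), rho)     # 2.0 versus 1.0: ||C e1|| exceeds rho(C) ||e1||

# Gelfand's formula: ||C^k||^(1/k) tends to 1 as k grows
for k in (1, 2, 5, 50):
    print(k, np.linalg.norm(np.linalg.matrix_power(C, k), 2) ** (1.0 / k))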
A special case in which ‖Av‖ ≤ ρ(A)‖v‖ for all v ∈ Cn is when A is a Hermitian matrix and ‖·‖ is the Euclidean norm. This is because any Hermitian matrix is diagonalizable by a unitary matrix, and unitary matrices preserve vector length. Writing A = U*DU with U unitary and D diagonal (carrying the eigenvalues of A on its diagonal), it follows that

\|A\mathbf{v}\| = \|U^{*}DU\mathbf{v}\| = \|DU\mathbf{v}\| \leq \rho(A)\|U\mathbf{v}\| = \rho(A)\|\mathbf{v}\|.
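This special case can be verified numerically; a minimal numpy sketch, assuming an arbitrary randomly generated Hermitian matrix and vector:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                                  # Hermitian by construction

rho = max(abs(np.linalg.eigvalsh(A)))               # eigvalsh: eigenvalues of a Hermitian matrix
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.linalg.norm(A @ v) <= rho * np.linalg.norm(v) + 1e-12)   # True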
= Bounded linear operators =
In the context of a bounded linear operator A on a Banach space, the eigenvalues need to be replaced with the elements of the spectrum of the operator, i.e. the values λ for which A − λI is not bijective. We denote the spectrum by

\sigma(A) = \left\{ \lambda \in \mathbb{C} : A - \lambda I \;\text{is not bijective} \right\}.
The spectral radius is then defined as the supremum of the magnitudes of the elements of the spectrum:

\rho(A) = \sup_{\lambda \in \sigma(A)} |\lambda|.
Gelfand's formula, also known as the spectral radius formula, also holds for bounded linear operators: letting ‖·‖ denote the operator norm, we have

\rho(A) = \lim_{k\to\infty} \|A^k\|^{\frac{1}{k}} = \inf_{k \in \mathbb{N}^*} \|A^k\|^{\frac{1}{k}}.
A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator.
= Graphs =
The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix.
This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number C such that the degree of every vertex of the graph is smaller than C). In this case, for the graph G define:
\ell^2(G) = \left\{ f : V(G) \to \mathbb{R} \;:\; \sum_{v \in V(G)} |f(v)|^2 < \infty \right\}.
Let γ be the adjacency operator of G:
\gamma : \ell^2(G) \to \ell^2(G), \qquad (\gamma f)(v) = \sum_{(u,v) \in E(G)} f(u).
The spectral radius of G is defined to be the spectral radius of the bounded linear operator γ.
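For a finite graph this reduces to the spectral radius of the (finite) adjacency matrix; a small numpy sketch for the 4-cycle, an arbitrary example:

import numpy as np

# Adjacency matrix of the cycle graph on 4 vertices (0-1-2-3-0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

print(max(abs(np.linalg.eigvals(A))))   # 2.0; for a d-regular graph the spectral radius is d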
Upper bounds
= Upper bounds on the spectral radius of a matrix =
The following proposition gives simple yet useful upper bounds on the spectral radius of a matrix.
Proposition. Let A ∈ Cn×n have spectral radius ρ(A) and let ‖·‖ be a consistent matrix norm. Then for each integer k ≥ 1:

\rho(A) \leq \|A^k\|^{\frac{1}{k}}.
Proof
Let (v, λ) be an eigenvector-eigenvalue pair for a matrix A. By the sub-multiplicativity of the matrix norm, we get:

|\lambda|^k \|\mathbf{v}\| = \|\lambda^k \mathbf{v}\| = \|A^k \mathbf{v}\| \leq \|A^k\| \cdot \|\mathbf{v}\|.

Since v ≠ 0, we have |λ|^k ≤ ‖A^k‖ and therefore ρ(A) ≤ ‖A^k‖^{1/k}, concluding the proof.
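The proposition is easy to check numerically; the sketch below uses the Frobenius norm, which is consistent (the matrix is an arbitrary example):

import numpy as np

A = np.array([[0.0, 5.0],
              [0.1, 0.0]])
rho = max(abs(np.linalg.eigvals(A)))                 # sqrt(0.5), roughly 0.707

for k in (1, 2, 4, 8):
    bound = np.linalg.norm(np.linalg.matrix_power(A, k), 'fro') ** (1.0 / k)
    print(k, bound, rho <= bound + 1e-12)            # rho(A) <= ||A^k||^(1/k) for every k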
= Upper bounds for the spectral radius of a graph =
There are many upper bounds for the spectral radius of a graph in terms of its number n of vertices and its number m of edges. For instance, if

\frac{(k-2)(k-3)}{2} \leq m - n \leq \frac{k(k-3)}{2},

where 3 ≤ k ≤ n is an integer, then

\rho(G) \leq \sqrt{2m - n - k + \frac{5}{2} + \sqrt{2m - 2n + \frac{9}{4}}}.
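As a sanity check of this bound, the following numpy sketch evaluates it for the complete graph K5 (n = 5, m = 10; the choice k = 5 satisfies the stated condition, since (k−2)(k−3)/2 = 3 ≤ m − n = 5 ≤ k(k−3)/2 = 5):

import numpy as np

n, m = 5, 10
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of the complete graph K5

k = 5
bound = np.sqrt(2 * m - n - k + 5 / 2 + np.sqrt(2 * m - 2 * n + 9 / 4))
rho = max(abs(np.linalg.eigvals(A)))
print(rho, bound)                        # both are 4 (up to rounding); the bound is attained here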
Symmetric matrices
For real-valued matrices A, the inequality ρ(A) ≤ ‖A‖₂ holds in particular, where ‖·‖₂ denotes the spectral norm. In the case where A is symmetric, this inequality is tight:

Theorem. Let A ∈ Rn×n be symmetric, i.e., A = A^T. Then it holds that ρ(A) = ‖A‖₂.
Proof
Let (v_i, λ_i), i = 1, ..., n, be the eigenpairs of A. Due to the symmetry of A, all v_i and λ_i are real-valued, and the eigenvectors v_i can be chosen orthonormal. By the definition of the spectral norm, there exists an x ∈ Rn with ‖x‖₂ = 1 such that ‖A‖₂ = ‖Ax‖₂. Since the eigenvectors v_i form a basis of Rn, there exist coefficients δ₁, ..., δₙ ∈ R such that

x = \sum_{i=1}^{n} \delta_i v_i,

which implies that

Ax = \sum_{i=1}^{n} \delta_i A v_i = \sum_{i=1}^{n} \delta_i \lambda_i v_i.

From the orthonormality of the eigenvectors v_i it follows that

\|Ax\|_2^2 = \left\| \sum_{i=1}^{n} \delta_i \lambda_i v_i \right\|_2^2 = \sum_{i=1}^{n} \delta_i^2 \lambda_i^2

and

\|x\|_2^2 = \left\| \sum_{i=1}^{n} \delta_i v_i \right\|_2^2 = \sum_{i=1}^{n} \delta_i^2 = 1.

Since x is chosen to maximize ‖Ax‖₂ subject to ‖x‖₂ = 1, the coefficients δ_i must maximize \sum_{i=1}^{n} \delta_i^2 \lambda_i^2 subject to \sum_{i=1}^{n} \delta_i^2 = 1. This is achieved by setting δ_k = 1 for k = argmax_i |λ_i| and δ_i = 0 otherwise, yielding

\|Ax\|_2 = |\lambda_k| = \rho(A).
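A quick numpy check of the theorem on a randomly generated symmetric matrix (the size and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                                    # symmetric by construction

rho = max(abs(np.linalg.eigvalsh(A)))                # spectral radius from the (real) eigenvalues
spec_norm = np.linalg.norm(A, 2)                     # spectral norm = largest singular value
print(abs(rho - spec_norm) < 1e-12)                  # True: rho(A) = ||A||_2 for symmetric A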
Power sequence
The spectral radius is closely related to the convergence behavior of the power sequence of a matrix, as shown by the following theorem.
Theorem. Let A ∈ Cn×n with spectral radius ρ(A). Then ρ(A) < 1 if and only if

\lim_{k\to\infty} A^k = 0.

On the other hand, if ρ(A) > 1, then ‖A^k‖ → ∞ as k → ∞. The statement holds for any choice of matrix norm on Cn×n.
Proof
Assume that A^k goes to zero as k goes to infinity. We will show that ρ(A) < 1. Let (v, λ) be an eigenvector-eigenvalue pair for A. Since A^k v = λ^k v, we have

0 = \left( \lim_{k\to\infty} A^k \right) \mathbf{v} = \lim_{k\to\infty} \left( A^k \mathbf{v} \right) = \lim_{k\to\infty} \lambda^k \mathbf{v} = \mathbf{v} \lim_{k\to\infty} \lambda^k.

Since v ≠ 0 by hypothesis, we must have lim_{k→∞} λ^k = 0, which implies |λ| < 1. Since this must be true for any eigenvalue λ, we can conclude that ρ(A) < 1.
Now, assume the spectral radius of A is less than 1. From the Jordan normal form theorem, we know that for all A ∈ Cn×n, there exist V, J ∈ Cn×n with V non-singular and J block diagonal such that

A = V J V^{-1}

with

J = \begin{bmatrix} J_{m_1}(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}(\lambda_s) \end{bmatrix}

where

J_{m_i}(\lambda_i) = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i & 1 \\ 0 & 0 & \cdots & 0 & \lambda_i \end{bmatrix} \in \mathbb{C}^{m_i \times m_i}, \quad 1 \leq i \leq s.
It is easy to see that

A^k = V J^k V^{-1}

and, since J is block-diagonal,

J^k = \begin{bmatrix} J_{m_1}^k(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}^k(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}^k(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}^k(\lambda_s) \end{bmatrix}
Now, a standard result on the k-th power of an m_i × m_i Jordan block states that, for k ≥ m_i − 1:

J_{m_i}^k(\lambda_i) = \begin{bmatrix} \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \binom{k}{2}\lambda_i^{k-2} & \cdots & \binom{k}{m_i-1}\lambda_i^{k-m_i+1} \\ 0 & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \cdots & \binom{k}{m_i-2}\lambda_i^{k-m_i+2} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} \\ 0 & 0 & \cdots & 0 & \lambda_i^k \end{bmatrix}
Thus, if ρ(A) < 1, then |λ_i| < 1 for all i. Hence for all i we have

\lim_{k\to\infty} J_{m_i}^k = 0,

which implies

\lim_{k\to\infty} J^k = 0.
Therefore,

\lim_{k\to\infty} A^k = \lim_{k\to\infty} V J^k V^{-1} = V \left( \lim_{k\to\infty} J^k \right) V^{-1} = 0.
On the other hand, if ρ(A) > 1, then at least one entry of J^k does not remain bounded as k increases, and hence neither does A^k = V J^k V^{-1}, proving the second part of the statement.
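The dichotomy can be observed numerically; a minimal numpy sketch with two arbitrary example matrices, one with spectral radius below 1 and one above:

import numpy as np

def rho(A):
    return max(abs(np.linalg.eigvals(A)))

A_small = np.array([[0.5, 1.0],
                    [0.0, 0.4]])         # rho = 0.5 < 1
A_large = np.array([[1.1, 0.0],
                    [0.3, 0.9]])         # rho = 1.1 > 1

for k in (1, 10, 50):
    print(k,
          np.linalg.norm(np.linalg.matrix_power(A_small, k)),   # tends to 0
          np.linalg.norm(np.linalg.matrix_power(A_large, k)))   # grows without bound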
Gelfand's formula
Gelfand's formula, named after Israel Gelfand, gives the spectral radius as a limit of matrix norms.
= Theorem =
For any matrix norm ‖·‖, we have

\rho(A) = \lim_{k\to\infty} \left\| A^k \right\|^{\frac{1}{k}}.
Moreover, in the case of a consistent matrix norm, ‖A^k‖^{1/k} approaches ρ(A) from above (indeed, in that case ρ(A) ≤ ‖A^k‖^{1/k} for all k).
= Proof =
For any ε > 0, let us define the following two matrices:

A_{\pm} = \frac{1}{\rho(A) \pm \varepsilon} A.
Thus,

\rho(A_{\pm}) = \frac{\rho(A)}{\rho(A) \pm \varepsilon}, \qquad \rho(A_+) < 1 < \rho(A_-).
We start by applying the previous theorem on limits of power sequences to A_+:

\lim_{k\to\infty} A_+^k = 0.

This shows the existence of N_+ ∈ N such that, for all k ≥ N_+,

\left\| A_+^k \right\| < 1.
Since ‖A_+^k‖ = ‖A^k‖ / (ρ(A) + ε)^k, it follows that, for all k ≥ N_+,

\left\| A^k \right\|^{\frac{1}{k}} < \rho(A) + \varepsilon.
Similarly, the theorem on power sequences implies that ‖A_-^k‖ is not bounded and that there exists N_- ∈ N such that, for all k ≥ N_-,

\left\| A_-^k \right\| > 1.
Therefore, for all k ≥ N_-,

\left\| A^k \right\|^{\frac{1}{k}} > \rho(A) - \varepsilon.
Let N = max{N+, N−}. Then,
\forall \varepsilon > 0 \quad \exists N \in \mathbb{N} \quad \forall k \geq N: \quad \rho(A) - \varepsilon < \left\| A^k \right\|^{\frac{1}{k}} < \rho(A) + \varepsilon,

that is,

\lim_{k\to\infty} \left\| A^k \right\|^{\frac{1}{k}} = \rho(A).
This concludes the proof.
= Corollary =
Gelfand's formula yields a bound on the spectral radius of a product of commuting matrices: if A_1, ..., A_n are matrices that all commute, then

\rho(A_1 \cdots A_n) \leq \rho(A_1) \cdots \rho(A_n).
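A numpy sanity check of the corollary, using two matrices that commute because both are polynomials in a common matrix (the particular matrices are arbitrary):

import numpy as np

def rho(A):
    return max(abs(np.linalg.eigvals(A)))

M = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
A1 = M @ M + np.eye(2)      # polynomials in M, hence A1 and A2 commute
A2 = 2.0 * M - np.eye(2)

print(np.allclose(A1 @ A2, A2 @ A1))                  # True
print(rho(A1 @ A2) <= rho(A1) * rho(A2) + 1e-12)      # True: 15 <= 5 * 3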
= Numerical example =
Consider the matrix

A = \begin{bmatrix} 9 & -1 & 2 \\ -2 & 8 & 4 \\ 1 & 1 & 8 \end{bmatrix}

whose eigenvalues are 5, 10, 10; by definition, ρ(A) = 10. The values of ‖A^k‖^{1/k} for the four most commonly used norms can be tabulated for increasing values of k; for each of these consistent norms the sequence approaches ρ(A) = 10 from above (note that, due to the particular form of this matrix, ‖·‖₁ = ‖·‖_∞).
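A short numpy sketch to compute these values (the list of exponents k is an arbitrary choice; the norm orders 1, 2, inf and 'fro' denote the column-sum, spectral, row-sum and Frobenius norms):

import numpy as np

A = np.array([[ 9.0, -1.0, 2.0],
              [-2.0,  8.0, 4.0],
              [ 1.0,  1.0, 8.0]])
print(sorted(abs(np.linalg.eigvals(A))))            # approximately [5, 10, 10], so rho(A) = 10

for k in (1, 2, 5, 10, 20, 50):
    Ak = np.linalg.matrix_power(A, k)
    values = [np.linalg.norm(Ak, p) ** (1.0 / k) for p in (1, 2, np.inf, 'fro')]
    print(k, [round(float(x), 4) for x in values])  # each value approaches 10 from above as k grows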
See also
Spectral gap
The Joint spectral radius is a generalization of the spectral radius to sets of matrices.
Spectrum of a matrix
Spectral abscissa