- Source: Householder transformation
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was used in a 1958 paper by Alston Scott Householder.
Its analogue over general inner product spaces is the Householder operator.
Definition
= Transformation =
The reflection hyperplane can be defined by its normal vector, a unit vector v (a vector with length 1) that is orthogonal to the hyperplane. The reflection of a point x about this hyperplane is the linear transformation
x − 2⟨x, v⟩v = x − 2v(v*x),
where v is given as a column unit vector with conjugate transpose v*.
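As a quick numerical illustration (a NumPy sketch; the particular vectors are arbitrary), reflecting a point about the hyperplane with unit normal v flips the component of x along v while preserving the vector's length:

```python
import numpy as np

# Arbitrary unit normal v and point x (real case for simplicity)
v = np.array([1.0, 2.0, 2.0])
v /= np.linalg.norm(v)           # make v a unit vector
x = np.array([3.0, -1.0, 4.0])

# Reflection about the hyperplane orthogonal to v: x - 2<x, v>v
reflected = x - 2 * np.dot(x, v) * v

# The component along v flips sign; the length is preserved
assert np.isclose(np.dot(reflected, v), -np.dot(x, v))
assert np.isclose(np.linalg.norm(reflected), np.linalg.norm(x))
```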
= Householder matrix =
The matrix constructed from this transformation can be expressed in terms of an outer product as
P = I − 2vv*,
which is known as the Householder matrix, where I is the identity matrix.
Properties
The Householder matrix has the following properties:
it is Hermitian: P = P*,
it is unitary: P⁻¹ = P*,
hence it is involutory: P = P⁻¹.
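These properties are easy to confirm numerically. The following sketch builds P from an arbitrary complex unit vector (the values here are purely illustrative):

```python
import numpy as np

# Arbitrary complex vector, normalized to unit length
v = np.array([1 + 2j, 0.5 - 1j, 3 + 0j])
v = v / np.linalg.norm(v)

# Householder matrix P = I - 2 v v*
P = np.eye(3) - 2 * np.outer(v, v.conj())

assert np.allclose(P, P.conj().T)              # Hermitian: P = P*
assert np.allclose(P @ P.conj().T, np.eye(3))  # unitary: P^{-1} = P*
assert np.allclose(P @ P, np.eye(3))           # involutory: P = P^{-1}
```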
A Householder matrix has eigenvalues ±1. To see this, notice that if u is orthogonal to the vector v which was used to create the reflector, then Pu = u, i.e., 1 is an eigenvalue of multiplicity n − 1, since there are n − 1 independent vectors orthogonal to v. Also, notice that Pv = −v, and so −1 is an eigenvalue with multiplicity 1.
The determinant of a Householder reflector is −1, since the determinant of a matrix is the product of its eigenvalues, in this case one of which is −1 with the remainder being 1 (as in the previous point).
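Both the eigenvalue and the determinant claims can be checked numerically (a sketch; the choice of v is arbitrary):

```python
import numpy as np

# Real unit vector defining the reflector
v = np.array([2.0, -1.0, 2.0])
v /= np.linalg.norm(v)
P = np.eye(3) - 2 * np.outer(v, v)

# Eigenvalues: one -1 (eigenvector v), the remaining n-1 are +1
eigvals = np.sort(np.linalg.eigvalsh(P))
assert np.allclose(eigvals, [-1.0, 1.0, 1.0])

# Determinant is the product of the eigenvalues, hence -1
assert np.isclose(np.linalg.det(P), -1.0)
```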
Applications
= Geometric optics =
In geometric optics, specular reflection can be expressed in terms of the Householder matrix (see Specular reflection § Vector formulation).
= Numerical linear algebra =
Householder transformations are widely used in numerical linear algebra, for example to annihilate the entries below the main diagonal of a matrix, to perform QR decompositions, and in the first step of the QR algorithm. They are also widely used for transforming a matrix to Hessenberg form. For symmetric or Hermitian matrices, the symmetry can be preserved, resulting in tridiagonalization.
QR decomposition
Householder reflections can be used to calculate QR decompositions by reflecting first one column of a matrix onto a multiple of a standard basis vector, calculating the transformation matrix, multiplying it with the original matrix, and then recursing down the (i, i) minors of that product.
To accomplish this, a Hermitian unitary matrix Q is sought that takes a complex vector x into a complex multiple of a complex vector e. For the QR decomposition, e will be a unit coordinate vector, say for the kth coordinate. A complex matrix of the form Q = I − uu*, with u*u = 2, has the desired property of being Hermitian and unitary; here * denotes the conjugate transpose. Since the only two vectors involved are x and e, the vector u must have the form u = ax + be, where a and b are complex coefficients to be determined. Since an overall phase factor for u does not matter, a can be chosen to be positive real. Now
Qx = x(1 − a(u*x)) − e(b(u*x)).
For the coefficient of x to be zero, the two terms in u*x must have the same phase up to a multiple of 180 degrees, so we must have arg(b) = arg(e*x) up to a multiple of 180 degrees. There are two solutions, according to whether an even or odd multiple of 180 degrees is chosen. With the phases of a and b thus determined, two equations in the moduli of a and b remain to be solved. Take e to be the kth unit coordinate vector, so that e*e = 1 and x_k = e*x, and let |x| = √(x*x). Then a and b may be expressed either as
a = 1/√(|x|(|x| + |x_k|)), b = a|x| exp(i·arg(x_k)),
or as
a = 1/√(|x|(|x| − |x_k|)), b = −a|x| exp(i·arg(x_k)).
The multiplier of e is −b/a for both solutions. The first solution is better numerically, because its denominator cannot be near zero compared to |x|.
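For real matrices the phase bookkeeping above reduces to a sign choice, and the recursion can be sketched as follows (a minimal, unoptimized NumPy illustration; the function name and test matrix are ours, and the sign of α is chosen to avoid cancellation, mirroring the "first solution" above):

```python
import numpy as np

def householder_qr(A):
    """QR decomposition via successive Householder reflections (real case)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        # Reflect x onto alpha * e_1; sign chosen opposite x[0] to avoid cancellation
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        u = x.copy()
        u[0] -= alpha
        norm_u = np.linalg.norm(u)
        if norm_u < 1e-15:
            continue  # column already reduced
        u /= norm_u
        # Apply H = I - 2 u u^T to the trailing block of R, and accumulate Q
        R[k:, :] -= 2.0 * np.outer(u, u @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ u, u)
    return Q, R

# Quick check on a small matrix
A = np.array([[12., -51., 4.],
              [6., 167., -68.],
              [-4., 24., -41.]])
Q, R = householder_qr(A)
assert np.allclose(Q @ R, A)             # A = QR
assert np.allclose(Q.T @ Q, np.eye(3))   # Q orthogonal
assert np.allclose(np.tril(R, -1), 0)    # R upper triangular
```

Note the design choice of applying each reflector as a rank-one update rather than forming H explicitly, which keeps each step at O(mn) work.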
Tridiagonalization
This procedure is presented in Numerical Analysis by Burden and Faires. It uses a slightly altered sgn function, with sgn(0) = 1.
To form the Householder matrix in the first step, we need to determine α and r, which are:
α = −sgn(a_{21}) √( Σ_{j=2}^{n} a_{j1}² );
r = √( (1/2)(α² − a_{21} α) ).
From α and r, construct the vector v^{(1)}:
v^{(1)} = [v_1, v_2, …, v_n]^T,
where v_1 = 0, v_2 = (a_{21} − α) / (2r), and v_k = a_{k1} / (2r) for each k = 3, 4, …, n.
Then compute:
P^1 = I − 2 v^{(1)} (v^{(1)})^T,
A^{(2)} = P^1 A P^1.
Having found P^1 and computed A^{(2)}, the process is repeated for k = 2, 3, …, n − 2 as follows:
α = −sgn(a_{k+1,k}^{(k)}) √( Σ_{j=k+1}^{n} (a_{jk}^{(k)})² );
r = √( (1/2)(α² − a_{k+1,k}^{(k)} α) );
v_1^{(k)} = v_2^{(k)} = ⋯ = v_k^{(k)} = 0;
v_{k+1}^{(k)} = (a_{k+1,k}^{(k)} − α) / (2r);
v_j^{(k)} = a_{jk}^{(k)} / (2r) for j = k + 2, k + 3, …, n;
P^k = I − 2 v^{(k)} (v^{(k)})^T;
A^{(k+1)} = P^k A^{(k)} P^k.
Continuing in this manner, a tridiagonal, symmetric matrix similar to the original is formed.
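The recurrence above can be sketched directly in NumPy (an unoptimized illustration that forms each P^k explicitly; the function name and the symmetric test matrix are ours, and sgn(0) is taken as 1 as stated above):

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by Householder similarity transforms."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):  # k = 1, ..., n-2 in the 1-based notation above
        sgn = 1.0 if A[k + 1, k] >= 0 else -1.0      # sgn(0) = 1
        alpha = -sgn * np.sqrt(np.sum(A[k + 1:, k] ** 2))
        r = np.sqrt(0.5 * (alpha ** 2 - A[k + 1, k] * alpha))
        if r == 0.0:
            continue  # column already in tridiagonal form
        v = np.zeros(n)
        v[k + 1] = (A[k + 1, k] - alpha) / (2 * r)
        v[k + 2:] = A[k + 2:, k] / (2 * r)
        P = np.eye(n) - 2 * np.outer(v, v)           # P^k = I - 2 v v^T
        A = P @ A @ P                                # A^{(k+1)} = P^k A^{(k)} P^k
    return A

# Sanity check on a symmetric 4x4 matrix
A = np.array([[4., 1., -2., 2.],
              [1., 2., 0., 1.],
              [-2., 0., 3., -2.],
              [2., 1., -2., -1.]])
T = householder_tridiagonalize(A)
assert np.allclose(T, T.T)               # still symmetric
assert np.allclose(np.tril(T, -2), 0)    # tridiagonal
assert np.allclose(np.sort(np.linalg.eigvalsh(T)),
                   np.sort(np.linalg.eigvalsh(A)))  # similar to A
```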
Examples
In this example, also from Burden and Faires, the given matrix is transformed to a similar tridiagonal matrix A_3 by using the Householder method.
A =
[  4   1  −2   2 ]
[  1   2   0   1 ]
[ −2   0   3  −2 ]
[  2   1  −2  −1 ].
Following those steps in the Householder method, we have:
The first Householder matrix:
Q_1 =
[ 1    0     0     0  ]
[ 0  −1/3   2/3  −2/3 ]
[ 0   2/3   2/3   1/3 ]
[ 0  −2/3   1/3   2/3 ],

A_2 = Q_1 A Q_1 =
[  4   −3    0     0  ]
[ −3  10/3   1    4/3 ]
[  0    1   5/3  −4/3 ]
[  0   4/3  −4/3  −1  ].
Using A_2, we form:

Q_2 =
[ 1  0    0     0  ]
[ 0  1    0     0  ]
[ 0  0  −3/5  −4/5 ]
[ 0  0  −4/5   3/5 ],

A_3 = Q_2 A_2 Q_2 =
[  4    −3      0      0   ]
[ −3   10/3   −5/3     0   ]
[  0   −5/3  −33/25  68/75 ]
[  0    0    68/75  149/75 ].
As we can see, the final result is a tridiagonal symmetric matrix which is similar to the original one. The process is finished after two steps.
Computational and theoretical relationship to other unitary transformations
The Householder transformation is a reflection about a hyperplane with unit normal vector v, as stated earlier. An N-by-N unitary transformation U satisfies UU* = I. Taking the determinant (the N-th power of the geometric mean) and trace (proportional to the arithmetic mean) of a unitary matrix reveals that its eigenvalues λ_i have unit modulus. This can be seen directly and swiftly:
Trace(UU*)/N = ( Σ_{j=1}^{N} |λ_j|² ) / N = 1,   det(UU*) = Π_{j=1}^{N} |λ_j|² = 1.
Since arithmetic and geometric means are equal if the variables are constant (see inequality of arithmetic and geometric means), we establish the claim of unit modulus.
For the case of real-valued unitary matrices we obtain orthogonal matrices, UU^T = I. It follows rather readily (see orthogonal matrix) that any orthogonal matrix can be decomposed into a product of 2-by-2 rotations, called Givens rotations, and Householder reflections. This is appealing intuitively, since multiplication of a vector by an orthogonal matrix preserves the length of that vector, and rotations and reflections exhaust the set of (real-valued) geometric operations that leave a vector's length invariant.
The Householder transformation was shown to have a one-to-one relationship with the canonical coset decomposition of unitary matrices defined in group theory, which can be used to parametrize unitary operators in a very efficient manner.
Finally, we note that a single Householder transformation, unlike a solitary Givens transformation, can act on all columns of a matrix, and as such exhibits the lowest computational cost for QR decomposition and tridiagonalization. The penalty for this "computational optimality" is, of course, that Householder operations cannot be as deeply or efficiently parallelized. As such, Householder is preferred for dense matrices on sequential machines, whilst Givens is preferred for sparse matrices and/or parallel machines.
See also
Block reflector
Givens rotation
Jacobi rotation