• Source: Divide-and-conquer eigenvalue algorithm
    • Divide-and-conquer eigenvalue algorithms are a class of eigenvalue algorithms for Hermitian or real symmetric matrices that have recently (circa 1990s) become competitive in terms of stability and efficiency with more traditional algorithms such as the QR algorithm. The basic concept behind these algorithms is the divide-and-conquer approach from computer science. An eigenvalue problem is divided into two problems of roughly half the size, each of which is solved recursively, and the eigenvalues of the original problem are computed from the results of these smaller problems.
      This article covers the basic idea of the algorithm as originally proposed by Cuppen in 1981, which is not numerically stable without additional refinements.


      Background


      As with most eigenvalue algorithms for Hermitian matrices, divide-and-conquer begins with a reduction to tridiagonal form. For an $m \times m$ matrix, the standard method for this, via Householder reflections, takes $\frac{4}{3}m^{3}$ floating point operations, or $\frac{8}{3}m^{3}$ if eigenvectors are needed as well. There are other algorithms, such as the Arnoldi iteration, which may do better for certain classes of matrices; we will not consider them further here.
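
      To make the reduction step concrete, the following is a minimal NumPy sketch of Householder tridiagonalization (an unblocked, illustrative $O(m^{3})$ version; production codes use the blocked LAPACK routine, and all names here are my own):

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form T = P^T A P
    via Householder reflections. Illustrative, not production quality."""
    A = np.array(A, dtype=float)
    m = A.shape[0]
    P = np.eye(m)
    for k in range(m - 2):
        x = A[k + 1:, k]
        alpha = -np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha                     # choose sign to avoid cancellation
        norm_v = np.linalg.norm(v)
        if norm_v < 1e-15:                # column is already in the right form
            continue
        v /= norm_v
        H = np.eye(m)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)   # Householder reflector
        A = H @ A @ H                     # symmetric similarity transform
        P = P @ H
    return A, P

rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6)); S = (S + S.T) / 2
T, P = tridiagonalize(S)
# Similarity transforms preserve the spectrum.
assert np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(S))
```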
      In certain cases, it is possible to deflate an eigenvalue problem into smaller problems. Consider a block diagonal matrix

      $$T = \begin{bmatrix} T_{1} & 0 \\ 0 & T_{2} \end{bmatrix}.$$

      The eigenvalues and eigenvectors of $T$ are simply those of $T_{1}$ and $T_{2}$, and it will almost always be faster to solve these two smaller problems than to solve the original problem all at once. This technique can be used to improve the efficiency of many eigenvalue algorithms, but it has special significance to divide-and-conquer.
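
      A small NumPy demonstration of this deflation (the matrices and seed are illustrative):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
T1 = rng.standard_normal((3, 3)); T1 = (T1 + T1.T) / 2
T2 = rng.standard_normal((4, 4)); T2 = (T2 + T2.T) / 2
T = block_diag(T1, T2)

# The spectrum of T is exactly the union of the spectra of T1 and T2.
ev_blocks = np.sort(np.concatenate([np.linalg.eigvalsh(T1),
                                    np.linalg.eigvalsh(T2)]))
assert np.allclose(np.linalg.eigvalsh(T), ev_blocks)
```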
      For the rest of this article, we will assume the input to the divide-and-conquer algorithm is an $m \times m$ real symmetric tridiagonal matrix $T$. The algorithm can be modified for Hermitian matrices.


      Divide


      The divide part of the divide-and-conquer algorithm comes from the realization that a tridiagonal matrix is "almost" block diagonal.

      We will call the size of the submatrix $T_{1}$ $n \times n$; $T_{2}$ is then $(m-n) \times (m-n)$. $T$ is almost block diagonal regardless of how $n$ is chosen. For efficiency we typically choose $n \approx m/2$.

      We write $T$ as a block diagonal matrix, plus a rank-1 correction:

      $$T = \begin{bmatrix} \hat{T}_{1} & 0 \\ 0 & \hat{T}_{2} \end{bmatrix} + \beta v v^{T}, \qquad v^{T} = (0, \dots, 0, 1, 1, 0, \dots, 0),$$

      where $\beta = t_{n,n+1} = t_{n+1,n}$ is the off-diagonal entry coupling the two blocks, and $v$ has ones in positions $n$ and $n+1$ and zeros elsewhere.

      The only difference between $T_{1}$ and $\hat{T}_{1}$ is that the lower right entry $t_{nn}$ in $\hat{T}_{1}$ has been replaced with $t_{nn} - \beta$, and similarly, in $\hat{T}_{2}$ the top left entry $t_{n+1,n+1}$ has been replaced with $t_{n+1,n+1} - \beta$.
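
      The following NumPy sketch performs this tear on a small symmetric tridiagonal matrix and checks that the block matrix plus the rank-1 correction reproduces $T$ (variable names are my own):

```python
import numpy as np

def divide(T, n):
    """Split symmetric tridiagonal T into hat_T1, hat_T2, beta, v with
    T = blockdiag(hat_T1, hat_T2) + beta * outer(v, v)."""
    m = T.shape[0]
    beta = T[n, n - 1]            # off-diagonal entry coupling the two blocks
    hat_T1 = T[:n, :n].copy()
    hat_T2 = T[n:, n:].copy()
    hat_T1[-1, -1] -= beta        # t_nn        -> t_nn        - beta
    hat_T2[0, 0] -= beta          # t_{n+1,n+1} -> t_{n+1,n+1} - beta
    v = np.zeros(m)
    v[n - 1] = v[n] = 1.0         # ones in positions n and n+1 (1-based)
    return hat_T1, hat_T2, beta, v

m, n = 8, 4
d = np.arange(1.0, m + 1)                 # diagonal entries
e = np.full(m - 1, 0.5)                   # off-diagonal entries
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
hat_T1, hat_T2, beta, v = divide(T, n)
B = np.zeros((m, m)); B[:n, :n] = hat_T1; B[n:, n:] = hat_T2
assert np.allclose(T, B + beta * np.outer(v, v))
```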
      The remainder of the divide step is to solve for the eigenvalues (and if desired the eigenvectors) of $\hat{T}_{1}$ and $\hat{T}_{2}$, that is, to find the diagonalizations $\hat{T}_{1} = Q_{1} D_{1} Q_{1}^{T}$ and $\hat{T}_{2} = Q_{2} D_{2} Q_{2}^{T}$. This can be accomplished with recursive calls to the divide-and-conquer algorithm, although practical implementations often switch to the QR algorithm for small enough submatrices.
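
      Schematically, the recursion looks like the sketch below. Here `solve_rank_one_update` is a placeholder that simply calls a dense solver so the sketch runs; in the actual algorithm it is the conquer step described next, and the base case would use QR iteration rather than NumPy's `eigh`:

```python
import numpy as np

def solve_rank_one_update(d, beta, z):
    """Placeholder for the conquer step: eigensystem of diag(d) + beta*z z^T.
    A real implementation solves the secular equation (next section) in
    O(m^2) flops; a dense solver keeps this sketch runnable."""
    return np.linalg.eigh(np.diag(d) + beta * np.outer(z, z))

def dc_eig(T, min_size=32):
    """Divide-and-conquer eigensolver sketch for symmetric tridiagonal T."""
    m = T.shape[0]
    if m <= min_size:                  # base case: practical codes use QR here
        return np.linalg.eigh(T)
    n = m // 2
    beta = T[n, n - 1]
    hat_T1 = T[:n, :n].copy(); hat_T1[-1, -1] -= beta
    hat_T2 = T[n:, n:].copy(); hat_T2[0, 0] -= beta
    D1, Q1 = dc_eig(hat_T1, min_size)  # recurse on each half
    D2, Q2 = dc_eig(hat_T2, min_size)
    z = np.concatenate([Q1[-1, :], Q2[0, :]])   # last row of Q1, first row of Q2
    lam, G = solve_rank_one_update(np.concatenate([D1, D2]), beta, z)
    Q = np.zeros((m, m)); Q[:n, :n] = Q1; Q[n:, n:] = Q2
    return lam, Q @ G                  # rotate back by blockdiag(Q1, Q2)

m = 64
T = (np.diag(np.arange(1.0, m + 1))
     + np.diag(np.full(m - 1, 0.3), 1) + np.diag(np.full(m - 1, 0.3), -1))
lam, V = dc_eig(T)
assert np.allclose(np.sort(lam), np.linalg.eigvalsh(T))
assert np.allclose(T @ V, V * lam)
```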


      Conquer


      The conquer part of the algorithm is the unintuitive part. Given the diagonalizations of the submatrices, calculated above, how do we find the diagonalization of the original matrix?
      First, define $z^{T} = (q_{1}^{T}, q_{2}^{T})$, where $q_{1}^{T}$ is the last row of $Q_{1}$ and $q_{2}^{T}$ is the first row of $Q_{2}$. It is now elementary to show that

      $$T = \begin{bmatrix} Q_{1} & \\ & Q_{2} \end{bmatrix} \left( \begin{bmatrix} D_{1} & \\ & D_{2} \end{bmatrix} + \beta z z^{T} \right) \begin{bmatrix} Q_{1}^{T} & \\ & Q_{2}^{T} \end{bmatrix}$$
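
      This identity is easy to check numerically; the following sketch builds both sides for a small tridiagonal $T$ (values are illustrative):

```python
import numpy as np

m, n = 6, 3
T = (np.diag(np.arange(1.0, m + 1))
     + np.diag(np.full(m - 1, 0.4), 1) + np.diag(np.full(m - 1, 0.4), -1))
beta = T[n, n - 1]

hat_T1 = T[:n, :n].copy(); hat_T1[-1, -1] -= beta
hat_T2 = T[n:, n:].copy(); hat_T2[0, 0] -= beta
D1, Q1 = np.linalg.eigh(hat_T1)
D2, Q2 = np.linalg.eigh(hat_T2)

z = np.concatenate([Q1[-1, :], Q2[0, :]])   # last row of Q1, first row of Q2
Qb = np.zeros((m, m)); Qb[:n, :n] = Q1; Qb[n:, n:] = Q2
D = np.diag(np.concatenate([D1, D2]))

assert np.allclose(T, Qb @ (D + beta * np.outer(z, z)) @ Qb.T)
```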


      The remaining task has been reduced to finding the eigenvalues of a diagonal matrix plus a rank-one correction. Before showing how to do this, let us simplify the notation. We are looking for the eigenvalues of the matrix $D + w w^{T}$, where $D$ is diagonal with distinct entries and $w$ is any vector with nonzero entries. In this case $w = \sqrt{|\beta|} \cdot z$.
      The case of a zero entry is simple: if $w_{i}$ is zero, then $(e_{i}, d_{i})$ is an eigenpair of $D + w w^{T}$ (where $e_{i}$ is the $i$-th standard basis vector), since $(D + w w^{T}) e_{i} = D e_{i} = d_{i} e_{i}$.
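
      For instance (illustrative values):

```python
import numpy as np

d = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, 0.0, 0.7, 0.2])   # w_2 = 0
A = np.diag(d) + np.outer(w, w)
e2 = np.zeros(4); e2[1] = 1.0        # standard basis vector e_2

# (e_2, d_2) is an eigenpair: the rank-one term annihilates e_2.
assert np.allclose(A @ e2, d[1] * e2)
```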
      If $\lambda$ is an eigenvalue, we have:

      $$(D + w w^{T}) q = \lambda q$$

      where $q$ is the corresponding eigenvector. Now, rearranging and multiplying by $(D - \lambda I)^{-1}$ (assuming $\lambda$ is not an entry of $D$, so that $D - \lambda I$ is invertible) and then by $w^{T}$:

      $$(D - \lambda I) q + w (w^{T} q) = 0$$

      $$q + (D - \lambda I)^{-1} w (w^{T} q) = 0$$

      $$w^{T} q + w^{T} (D - \lambda I)^{-1} w (w^{T} q) = 0$$

      Keep in mind that $w^{T} q$ is a nonzero scalar. Neither $w$ nor $q$ is zero. If $w^{T} q$ were zero, then by $(D + w w^{T}) q = \lambda q$, $q$ would be an eigenvector of $D$. If that were the case, $q$ would contain only one nonzero position since the diagonal entries of $D$ are distinct, and thus the inner product $w^{T} q$ could not be zero after all. Dividing by this nonzero scalar, we have:

      $$1 + w^{T} (D - \lambda I)^{-1} w = 0$$

      or, written as a scalar equation,

      $$1 + \sum_{j=1}^{m} \frac{w_{j}^{2}}{d_{j} - \lambda} = 0.$$

      This equation is known as the secular equation. The problem has therefore been reduced to finding the roots of the rational function defined by the left-hand side of this equation.
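
      A minimal root-finding sketch for the secular equation, assuming the $d_{j}$ are distinct and sorted and all $w_{j}$ are nonzero (so exactly one root lies in each interval $(d_{j}, d_{j+1})$ and one lies above $d_{m}$); a careful implementation would use the specialized iterations mentioned under Variants rather than plain bracketing:

```python
import numpy as np
from scipy.optimize import brentq

def secular_roots(d, w):
    """Roots of f(lam) = 1 + sum_j w_j^2 / (d_j - lam).
    Assumes d strictly increasing and all w_j nonzero, so f goes from
    -inf to +inf across each interval (d_j, d_{j+1})."""
    f = lambda lam: 1.0 + np.sum(w**2 / (d - lam))
    eps = 1e-10
    brackets = [(d[j] + eps, d[j + 1] - eps) for j in range(len(d) - 1)]
    # The last root exceeds d_m but is below d_m + ||w||^2 (Weyl's bound).
    brackets.append((d[-1] + eps, d[-1] + np.sum(w**2) + 1.0))
    return np.array([brentq(f, a, b) for a, b in brackets])

d = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.3, 0.5, 0.4, 0.2])
lam = secular_roots(d, w)
assert np.allclose(lam, np.linalg.eigvalsh(np.diag(d) + np.outer(w, w)))
```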
      All general eigenvalue algorithms must be iterative, and the divide-and-conquer algorithm is no different. Solving the nonlinear secular equation requires an iterative technique, such as the Newton–Raphson method. However, each root can be found in $O(1)$ iterations, each of which requires $\Theta(m)$ flops (for an $m$-degree rational function), making the cost of the iterative part of this algorithm $\Theta(m^{2})$.


      Analysis


      We will use the master theorem for divide-and-conquer recurrences to analyze the running time. Remember that above we stated we choose $n \approx m/2$. We can write the recurrence relation:

      $$T(m) = 2 \, T\!\left(\frac{m}{2}\right) + \Theta(m^{2})$$

      In the notation of the master theorem, $a = b = 2$ and thus $\log_{b} a = 1$. Clearly, $\Theta(m^{2}) = \Omega(m^{1})$, so we have

      $$T(m) = \Theta(m^{2})$$
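
      One way to see this without invoking the theorem is to unroll the recurrence: each level doubles the number of subproblems while halving their size, so the per-level cost decays geometrically and the top level dominates:

      $$T(m) = \sum_{k=0}^{\log_{2} m} 2^{k} \, \Theta\!\left(\left(\frac{m}{2^{k}}\right)^{2}\right) = \Theta(m^{2}) \sum_{k=0}^{\log_{2} m} 2^{-k} = \Theta(m^{2}).$$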


      Above, we pointed out that reducing a Hermitian matrix to tridiagonal form takes $\frac{4}{3}m^{3}$ flops. This dwarfs the running time of the divide-and-conquer part, and at this point it is not clear what advantage the divide-and-conquer algorithm offers over the QR algorithm (which also takes $\Theta(m^{2})$ flops for tridiagonal matrices).
      The advantage of divide-and-conquer comes when eigenvectors are needed as well. If this is the case, reduction to tridiagonal form takes $\frac{8}{3}m^{3}$ flops, but the second part of the algorithm takes $\Theta(m^{3})$ as well. For the QR algorithm with a reasonable target precision, this is $\approx 6m^{3}$, whereas for divide-and-conquer it is $\approx \frac{4}{3}m^{3}$. The reason for this improvement is that in divide-and-conquer, the $\Theta(m^{3})$ part of the algorithm (multiplying $Q$ matrices) is separate from the iteration, whereas in QR, this must occur in every iterative step. Adding the $\frac{8}{3}m^{3}$ flops for the reduction, the total improvement is from $\frac{8}{3}m^{3} + 6m^{3} \approx 9m^{3}$ flops to $\frac{8}{3}m^{3} + \frac{4}{3}m^{3} = 4m^{3}$ flops.
      Practical use of the divide-and-conquer algorithm has shown that in most realistic eigenvalue problems, the algorithm actually does better than this. The reason is that very often the matrices $Q$ and the vectors $z$ tend to be numerically sparse, meaning that they have many entries with values smaller than the floating point precision, allowing for numerical deflation, i.e. breaking the problem into uncoupled subproblems.
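
      As a sketch, numerical deflation amounts to treating tiny components of $z$ as exact zeros before solving the secular equation; the threshold below is illustrative, not the one used by any particular library:

```python
import numpy as np

def deflate(d, z, tol=None):
    """Split the rank-one update problem diag(d) + beta*z z^T into the
    components that actually couple (|z_i| above a threshold) and those
    that deflate: for |z_i| ~ 0, (e_i, d_i) is already an eigenpair."""
    if tol is None:
        tol = 8 * np.finfo(float).eps * np.linalg.norm(z)  # illustrative
    coupled = np.abs(z) > tol
    # The secular equation only needs the coupled part; deflated
    # eigenvalues pass through unchanged.
    return d[coupled], z[coupled], d[~coupled]

d = np.array([1.0, 2.0, 3.0]); z = np.array([0.6, 1e-18, 0.8])
d_c, z_c, deflated = deflate(d, z)   # deflated == [2.0]
```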


      Variants and implementation


      The algorithm presented here is the simplest version. In many practical implementations, more complicated rank-1 corrections are used to guarantee stability; some variants even use rank-2 corrections.
      There exist specialized root-finding techniques for rational functions that may do better than the Newton–Raphson method in terms of both performance and stability. These can be used to improve the iterative part of the divide-and-conquer algorithm.
      The divide-and-conquer algorithm is readily parallelized, and linear algebra computing packages such as LAPACK contain high-quality parallel implementations.
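
      For example, assuming SciPy ≥ 1.5 (where `scipy.linalg.eigh` exposes a `driver` argument), the divide-and-conquer LAPACK path can be requested explicitly:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 500)); A = (A + A.T) / 2

# driver='evd' selects LAPACK's divide-and-conquer routine (?syevd);
# other drivers ('ev', 'evr', 'evx') select different LAPACK paths.
w, v = eigh(A, driver='evd')
assert np.allclose(A @ v, v * w)
```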


      References


      Demmel, James W. (1997), Applied Numerical Linear Algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 0-89871-389-7, MR 1463942.
      Cuppen, J.J.M. (1981). "A Divide and Conquer Method for the Symmetric Tridiagonal Eigenproblem". Numerische Mathematik. 36 (2): 177–195. doi:10.1007/BF01396757. S2CID 120504744.
