Subsection 9.3.5 Discussion
To summarize this section:
The Power Method finds the eigenvector associated with the largest eigenvalue (in magnitude). It requires a matrix-vector multiplication for each iteration, thus costing approximately \(2 m^2 \) flops per iteration if \(A \) is a dense \(m \times m \) matrix. The convergence is linear.
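As a concrete illustration, here is a minimal NumPy sketch of the Power Method. The function name, tolerance, and stopping criterion are illustrative choices, not taken from the text.

```python
import numpy as np

def power_method(A, x0, num_iters=100, tol=1e-10):
    """Sketch of the Power Method: converges linearly to the
    eigenvector associated with the eigenvalue largest in magnitude.
    (Stopping rule and defaults are illustrative.)"""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(num_iters):
        y = A @ x                      # one matvec: ~2 m^2 flops if A is dense
        lam_new = x.conj() @ y         # Rayleigh quotient estimate of the eigenvalue
        x = y / np.linalg.norm(y)      # normalize to avoid overflow/underflow
        if abs(lam_new - lam) < tol * abs(lam_new):
            break
        lam = lam_new
    return lam, x
```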
The Inverse Power Method finds the eigenvector associated with the smallest eigenvalue (in magnitude). It requires the solution of a linear system for each iteration. By performing an LU factorization with partial pivoting up front, an initial \(O( m^3 ) \) investment reduces the cost per iteration to approximately \(2 m^2 \) flops if \(A \) is a dense \(m \times m \) matrix. The convergence is linear.
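The following sketch, assuming SciPy's `lu_factor`/`lu_solve`, shows how the one-time factorization is amortized over the iterations; again, names and the stopping criterion are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, x0, num_iters=100, tol=1e-10):
    """Sketch of the Inverse Power Method: converges linearly to the
    eigenvector associated with the eigenvalue smallest in magnitude."""
    lu, piv = lu_factor(A)             # one-time O(m^3) investment
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(num_iters):
        y = lu_solve((lu, piv), x)     # solve A y = x: ~2 m^2 flops per iteration
        x = y / np.linalg.norm(y)
        lam_new = x.conj() @ (A @ x)   # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol * abs(lam_new):
            break
        lam = lam_new
    return lam, x
```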
The Rayleigh Quotient Iteration finds an eigenvector, but it is not clear from the start with which eigenvalue it is associated. It requires the solution of a linear system for each iteration. Because the shift changes every iteration, the shifted matrix must be refactored each time; if this is done via an LU factorization with partial pivoting, the cost is \(O( m^3 ) \) per iteration if \(A \) is a dense \(m \times m \) matrix. The convergence is quadratic if \(A \) is not Hermitian, and cubic if it is.
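A minimal NumPy sketch of the Rayleigh Quotient Iteration follows; the starting vector determines which eigenpair it converges to, and the iteration count and tolerance below are illustrative.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, num_iters=20, tol=1e-12):
    """Sketch of Rayleigh Quotient Iteration: each step solves a
    system with a new shift, so a dense step costs O(m^3), but
    convergence is quadratic (cubic when A is Hermitian)."""
    m = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    rho = x.conj() @ (A @ x)           # current Rayleigh quotient (the shift)
    for _ in range(num_iters):
        try:
            # New factorization every iteration, since the shift rho changes.
            y = np.linalg.solve(A - rho * np.eye(m), x)
        except np.linalg.LinAlgError:
            break                      # shift hit an eigenvalue exactly: converged
        x = y / np.linalg.norm(y)
        rho_new = x.conj() @ (A @ x)
        if abs(rho_new - rho) < tol * abs(rho_new):
            rho = rho_new
            break
        rho = rho_new
    return rho, x
```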
The cost of these methods is greatly reduced if the matrix is sparse, in which case each iteration may require as little as \(O( m ) \) flops.
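To make the sparse case concrete, here is a sketch of the Power Method applied to a tridiagonal matrix stored in SciPy's CSR format; the matrix and sizes are invented for illustration. Each matrix-vector product costs \(O( \mathrm{nnz}(A) ) \) flops, which is \(O( m ) \) here.

```python
import numpy as np
import scipy.sparse as sp

# Illustrative sparse example: a tridiagonal m x m matrix.
m = 100_000
main = 2.0 * np.ones(m)
off = -1.0 * np.ones(m - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

rng = np.random.default_rng(0)
x = rng.standard_normal(m)
x /= np.linalg.norm(x)
for _ in range(50):
    y = A @ x                          # sparse matvec: O(m) flops for this matrix
    x = y / np.linalg.norm(y)
lam = x @ (A @ x)                      # estimate of the largest eigenvalue (in magnitude)
```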