Subsection 9.3.3 The Inverse Power Method
Homework 9.3.3.1.
Let \(A \in \mathbb C^{m \times m} \) be nonsingular, and \(( \lambda, x ) \) an eigenpair of \(A \text{.}\)
Which of the following is an eigenpair of \(A^{-1} \text{:}\)
\(( \lambda, x ) \text{.}\)
\(( \lambda, A^{-1} x ) \text{.}\)
\(( 1/\lambda, A^{-1} x ) \text{.}\)
\(( 1/\lambda, x ) \text{.}\)
Answer: \(( 1/\lambda, x ) \text{.}\)
Now justify your answer.
Since \(A x = \lambda x \) and \(A \) is nonsingular, we know that \(A^{-1} \) exists and \(\lambda \neq 0 \text{.}\) Hence
\begin{equation*}
x = A^{-1} A x = \lambda A^{-1} x ,
\end{equation*}
which can be rewritten as
\begin{equation*}
A^{-1} x = \frac{1}{\lambda} x \text{.}
\end{equation*}
We conclude that \(( 1/\lambda, x ) \) is an eigenpair of \(A^{-1} \text{.}\)
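To make this concrete, the conclusion is easy to check numerically. The following is a minimal sketch using NumPy; the matrix below is an arbitrary nonsingular example chosen only for illustration, not taken from the text.

import numpy as np

# An arbitrary nonsingular matrix, for illustration only.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Eigenpairs ( lambda_j, x_j ) of A: the columns of X are eigenvectors.
lam, X = np.linalg.eig(A)
Ainv = np.linalg.inv(A)

for j in range(A.shape[0]):
    x = X[:, j]
    # Verify that ( 1/lambda_j, x_j ) is an eigenpair of A^{-1}.
    print(np.allclose(Ainv @ x, (1.0 / lam[j]) * x))   # expect: True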
The Power Method homes in on an eigenvector associated with the largest (in magnitude) eigenvalue. The Inverse Power Method homes in on an eigenvector associated with the smallest eigenvalue (in magnitude).
Once again, we assume that a given matrix \(A \in \C^{m \times m} \) is diagonalizable so that there exist a nonsingular matrix \(X \) and diagonal matrix \(\Lambda \) such that \(A = X \Lambda X^{-1} \text{.}\) We further assume that \(\Lambda = {\rm diag}( \lambda_0 , \cdots , \lambda_{m-1} ) \) and
\begin{equation*}
\vert \lambda_0 \vert \geq \vert \lambda_1 \vert \geq \cdots \geq \vert \lambda_{m-2} \vert > \vert \lambda_{m-1} \vert > 0 \text{.}
\end{equation*}
Notice that this means that \(A \) is nonsingular.
Clearly, if
\begin{equation*}
A = X \Lambda X^{-1} ,
\end{equation*}
then
\begin{equation*}
A^{-1} = X \Lambda^{-1} X^{-1} ,
\quad \mbox{where} \quad
\Lambda^{-1} = {\rm diag}( 1/\lambda_0 , \cdots , 1/\lambda_{m-1} ) \text{.}
\end{equation*}
Thus, an eigenvector associated with the smallest (in magnitude) eigenvalue of \(A \) is an eigenvector associated with the largest (in magnitude) eigenvalue of \(A^{-1} \text{.}\)
This suggests the following naive iteration (which mirrors the second attempt for the Power Method in Subsubsection 9.3.1.2, but iterating with \(A^{-1} \)):
\begin{equation*}
\begin{array}{l}
{\rm for~} k = 0, \ldots \\
~~~ v^{(k+1)} = \lambda_{m-1} A^{-1} v^{(k)} \\
{\rm endfor}
\end{array}
\end{equation*}
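As an illustration of this naive iteration, here is a short sketch in NumPy. The iteration presumes that \(\lambda_{m-1} \) is known; since this is only a demonstration, we simply look it up with np.linalg.eigvals, and the matrix is again an arbitrary example rather than one from the text.

import numpy as np

# Arbitrary example matrix (illustration only).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
m = A.shape[0]

# The naive iteration assumes the smallest-magnitude eigenvalue is known;
# here we look it up so that the demonstration runs.
eigvals = np.linalg.eigvals(A)
lam_min = eigvals[np.argmin(np.abs(eigvals))]

Ainv = np.linalg.inv(A)
v = np.random.default_rng(0).standard_normal(m)
for k in range(30):
    v = lam_min * (Ainv @ v)   # v^{(k+1)} = lambda_{m-1} A^{-1} v^{(k)}

# v now points (approximately) in the direction of an eigenvector of A
# associated with lambda_{m-1}, so A v should be close to lambda_{m-1} v.
print(np.allclose(A @ v, lam_min * v))   # expect: True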
From the convergence analysis in Subsection 9.3.2 for the Power Method algorithm, we conclude that now the iteration converges to an eigenvector associated with \(\lambda_{m-1} \) (the eigenvalue smallest in magnitude), with the rate of convergence dictated by the ratio \(\vert \lambda_{m-1} \vert / \vert \lambda_{m-2} \vert \text{.}\)
A more practical Inverse Power Method algorithm is given by
\begin{equation*}
\begin{array}{l}
{\rm Pick~} v^{(0)} {\rm ~of~unit~length} \\
{\rm for~} k = 0, \ldots \\
~~~ v^{(k+1)} = A^{-1} v^{(k)} \\
~~~ v^{(k+1)} = v^{(k+1)} / \Vert v^{(k+1)} \Vert_2 \\
{\rm endfor}
\end{array}
\end{equation*}
We would probably want to factor \(P A = L U \) (LU factorization with partial pivoting) once and solve \(L( U v^{(k+1)} ) = P v^{(k)} \) rather than multiplying with \(A^{-1} \text{.}\)
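A minimal sketch of this practical variant follows, using SciPy's LU factorization with partial pivoting so that \(A \) is factored once and each iteration performs only triangular solves. The function name, iteration count, and test matrix are choices made for this illustration, not prescribed by the text.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, num_iters=50, seed=0):
    # Approximate an eigenvector associated with the smallest-magnitude
    # eigenvalue of A, factoring P A = L U once up front.
    m = A.shape[0]
    lu, piv = lu_factor(A)                  # P A = L U (partial pivoting)
    v = np.random.default_rng(seed).standard_normal(m)
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = lu_solve((lu, piv), v)          # solve L ( U v^{(k+1)} ) = P v^{(k)}
        v /= np.linalg.norm(v)              # normalize to unit length
    lam = v @ (A @ v)                       # Rayleigh quotient: eigenvalue estimate
    return lam, v

# Example use with an arbitrary symmetric matrix (illustration only).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = inverse_power_method(A)
print(lam, np.linalg.eigvals(A))   # lam should match the smallest-magnitude eigenvalue

Reusing the factorization is the point of this formulation: the \(O( m^3 ) \) factorization cost is paid once, after which each iteration requires only \(O( m^2 ) \) work for the two triangular solves.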