Subsection 8.3.4 Technical details
¶This unit is probably the most technically difficult unit in the course. We give the details here for completeness, but you will likely live a happy and productive research life without worrying about them too much... The important part is the final observation: that the next search direction computed by the Conjugate Gradient Method is a linear combination of the current residual (the direction of steepest descent) and the last search direction.
Let's look more carefully at \(p^{(k)} \) that satisfies
\begin{equation*} p^{(k)} = r^{(k)} - A P^{(k-1)} z^{(k)}. \end{equation*}
Notice that
\begin{equation*} p^{(k)} = r^{(k)} - v, \end{equation*}
where \(v \) is the orthogonal projection of \(r^{(k)} \) onto \(\Span( Ap^{(0)}, \ldots, Ap^{(k-1)} )\text{,}\)
which can also be formulated as \(v = A P^{(k-1)} z^{(k)} \text{,}\) where
\begin{equation*} \| r^{(k)} - A P^{(k-1)} z^{(k)} \|_2 = \min_z \| r^{(k)} - A P^{(k-1)} z \|_2. \end{equation*}This can be recognized as a standard linear least squares problem.
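Here is a minimal numerical sketch of that observation (in Python with NumPy; the matrix \(A \) and the arrays \(P \) and \(r \) below are random stand-ins for \(A \text{,}\) \(P^{(k-1)} \text{,}\) and \(r^{(k)} \text{,}\) not actual iterates of the method): the least squares residual \(r - A P z \) is automatically orthogonal to the columns of \(A P \text{,}\) which for symmetric \(A \) is exactly the A-conjugacy of the new search direction with the previous ones.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)            # symmetric positive definite
P = rng.standard_normal((n, k))    # stand-in for P^(k-1) = [ p^(0) ... p^(k-1) ]
r = rng.standard_normal(n)         # stand-in for the current residual r^(k)

# v = A P z is the orthogonal projection of r onto the column space of A P,
# obtained by solving the linear least squares problem  min_z || r - A P z ||_2.
z, *_ = np.linalg.lstsq(A @ P, r, rcond=None)
p_next = r - A @ P @ z             # the next search direction p^(k)

# The least squares residual is orthogonal to the columns of A P, which is the
# A-conjugacy condition  p^(k)^T A p^(j) = 0  for j < k (A is symmetric).
print(np.max(np.abs((A @ P).T @ p_next)))   # close to machine epsilon
\end{verbatim}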
This allows us to make a few important observations:
Theorem 8.3.4.1.
In Figure 8.3.3.1,
-
\(P^{(k-1)\,T} r^{(k)} = 0. \)
-
\(\Span( p^{(0)}, \ldots, p^{(k-1)} ) = \Span( r^{(0)}, \ldots, r^{(k-1)} ) = \Span( b, A b, \ldots, A^{k-1} b ). \)
Proof.
-
Proving that
\begin{equation*} P^{(k-1)\,T} r^{(k)} = 0. \end{equation*}starts by considering that
\begin{equation*} \begin{array}{l} f( P^{(k-1)} y ) \\ ~~~=~~~~ \\ \frac{1}{2} ( P^{(k-1)} y )^T A ( P^{(k-1)} y ) - ( P^{(k-1)} y )^T b \\ ~~~= ~~~~\\ \frac{1}{2} y^T ( P^{(k-1)\,T} A P^{(k-1)} ) y - y^T P^{(k-1)\,T} b \end{array} \end{equation*}is minimized by \(y_0 \) that satisfies
\begin{equation*} ( P^{(k-1)\,T} A P^{(k-1)} ) y_0 = P^{(k-1)\,T} b. \end{equation*}Since \(x^{(k)} \) minimizes
\begin{equation*} \min_{ x \in \Span( p^{(0)}, \ldots , p^{(k-1)} )} f( x ) \end{equation*}we conclude that \(x^{(k)} = P^{(k-1)} y_0 \) (a numerical sketch of this reduced problem is given after the proof). But then
\begin{equation*} 0 = P^{(k-1)\,T} b - \left( P^{(k-1)\,T} A x^{(k)} \right) = P^{(k-1)\,T} \left( b - A x^{(k)} \right) = P^{(k-1)\,T} r^{(k)}. \end{equation*} -
Show that \(\Span( p^{(0)}, \ldots, p^{(k-1)} ) = \Span( r^{(0)}, \ldots, r^{(k-1)} ) = \Span( b, A b, \ldots, A^{k-1} b ).\)
Proof by induction on \(k \text{.}\)
-
Base case: \(k = 1 \text{.}\)
The result clearly holds since \(p^{(0)} = r^{(0)} = b \text{.}\)
-
Inductive Hypothesis: Assume the result holds for \(k = n \text{.}\)
Inductive Step: Show that the result holds for \(k = n+1 \text{.}\)
-
If \(k = n + 1 \text{,}\) then \(r^{(k-1)} = r^{(n)} = r^{(n-1)} - \alpha_{n-1} A p^{(n-1)} \text{.}\) By the inductive hypothesis,
\begin{equation*} r^{(n-1)} \in \Span( b, A b , \ldots , A^{n-1} b ) \end{equation*}and
\begin{equation*} p^{(n-1)} \in \Span( b, A b , \ldots , A^{n-1} b )\text{.} \end{equation*}But then
\begin{equation*} A p^{(n-1)} \in \Span( A b, A^2 b , \ldots , A^{n} b ) \end{equation*}and hence
\begin{equation*} r^{(n)} \in \Span( b, A b, A^2 b , \ldots , A^{n} b ). \end{equation*} -
\(p^{(n)} = r^{(n)} - A P^{(n-1)} z^{(n)} \) and hence
\begin{equation*} p^{(n)} \in \Span( b, A b, A^2 b , \ldots , A^{n} b ) \end{equation*}since
\begin{equation*} r^{(n)} \in \Span( b, A b, A^2 b , \ldots , A^{n} b ) \end{equation*}and
\begin{equation*} A P^{(n-1)} z^{(n)} \in \Span( A b, A^2 b , \ldots , A^{n} b ). \end{equation*}
-
We complete the inductive step by noting that all three subspaces have the same dimension and hence must be the same subspace.
By the Principle of Mathematical Induction, the result holds.
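The first part of this proof reduces minimizing \(f \) over \(\Span( p^{(0)}, \ldots, p^{(k-1)} ) \) to the small system \(( P^{(k-1)\,T} A P^{(k-1)} ) y_0 = P^{(k-1)\,T} b \text{.}\) Here is a minimal numerical sketch of that reduction (in Python with NumPy; \(P \) is a random stand-in for \(P^{(k-1)} \text{,}\) not actual CG search directions), confirming that the resulting residual is orthogonal to the columns of \(P \text{:}\)
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
P = rng.standard_normal((n, k))  # stand-in for P^(k-1)

# Minimizing f(x) = 1/2 x^T A x - x^T b over x = P y reduces to the
# k x k system  (P^T A P) y_0 = P^T b.
y0 = np.linalg.solve(P.T @ A @ P, P.T @ b)
x = P @ y0                       # plays the role of x^(k)
r = b - A @ x                    # plays the role of r^(k)

# The residual at the minimizer is orthogonal to every column of P:
print(np.max(np.abs(P.T @ r)))   # close to machine epsilon
\end{verbatim}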
Definition 8.3.4.2. Krylov subspace.
The subspace
\begin{equation*} {\cal K}_k( A, b ) = \Span( b, A b, \ldots, A^{k-1} b ) \end{equation*}
is known as the order-k Krylov subspace.
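Both parts of Theorem 8.3.4.1 can also be observed numerically. The sketch below (in Python with NumPy) uses an exact line search along each \(p^{(k)} \) and computes the next direction from the least squares problem discussed above; this mirrors the derivation in this unit rather than the final, optimized method, so treat it as illustrative. It checks that the search directions, the residuals, and the Krylov vectors \(b, A b, \ldots, A^{k-1} b \) span the same subspace, and that the current residual is orthogonal to all previous search directions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                  # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
r = b.copy()
p = r.copy()
Ps, Rs = [], []
for _ in range(k):
    Ps.append(p.copy())
    Rs.append(r.copy())
    Ap = A @ p
    alpha = (p @ r) / (p @ Ap)           # exact line search along p
    x = x + alpha * p
    r = r - alpha * Ap
    AP = A @ np.column_stack(Ps)
    z, *_ = np.linalg.lstsq(AP, r, rcond=None)
    p = r - AP @ z                       # component of r orthogonal to the columns of A P

P = np.column_stack(Ps)                  # p^(0), ..., p^(k-1)
R = np.column_stack(Rs)                  # r^(0), ..., r^(k-1)
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])

# The current residual r^(k) is orthogonal to all previous search directions:
print(np.max(np.abs(P.T @ r)))           # close to machine epsilon
# Directions, residuals, and Krylov vectors span the same subspace:
print(np.linalg.matrix_rank(P),
      np.linalg.matrix_rank(np.column_stack([P, R])),
      np.linalg.matrix_rank(np.column_stack([P, K])))   # all equal k
\end{verbatim}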
The next technical detail regards the residuals that are computed by the Conjugate Gradient Method. They are mutually orthogonal, and hence we once again conclude that the method must compute the solution (in exact arithmetic) in at most \(n \) iterations. This orthogonality will also play an important role in reducing the number of matrix-vector multiplications needed to implement the final version of the Conjugate Gradient Method.
Theorem 8.3.4.3.
The residual vectors \(r^{(k)} \) are mutually orthogonal.
Proof.
In Theorem 8.3.4.1 we established that
\begin{equation*} P^{(k-1)\,T} r^{(k)} = 0 \quad \text{and} \quad \Span( p^{(0)}, \ldots, p^{(k-1)} ) = \Span( r^{(0)}, \ldots, r^{(k-1)} ) \end{equation*}and hence
\begin{equation*} r^{(j)} \in \Span( p^{(0)}, \ldots, p^{(k-1)} ) \end{equation*}for \(j \lt k \text{.}\) Hence \(r^{(j)} = P^{(k-1)} t^{(j)} \) for some vector \(t^{(j)} \in \R^k \text{.}\) Then
\begin{equation*} r^{(j)\,T} r^{(k)} = \left( P^{(k-1)} t^{(j)} \right)^T r^{(k)} = t^{(j)\,T} P^{(k-1)\,T} r^{(k)} = 0. \end{equation*}
Since this holds for all \(k \) and \(j \lt k \text{,}\) the desired result is established.
Next comes the most important result. We established that
\begin{equation*} p^{(k)} = r^{(k)} - A P^{(k-1)} z^{(k)}, \end{equation*}where \(z^{(k)} \) solves
\begin{equation*} \min_z \| r^{(k)} - A P^{(k-1)} z \|_2. \end{equation*}
What we are going to show is that, in fact, the next search direction equals a linear combination of the current residual and the previous search direction.
Theorem 8.3.4.4.
For \(k \geq 1 \text{,}\) the search directions generated by the Conjugate Gradient Method satisfy
\begin{equation*} p^{(k)} = r^{(k)} + \gamma_k p^{(k-1)} \end{equation*}for some constant \(\gamma_k \text{.}\)
Proof.
This proof has a lot of very technical details. No harm done if you only pay cursory attention to those details.
Partition \(z^{(k)} = \left( \begin{array}{c} z_0 \\ \zeta_1 \end{array} \right) \) conformally with \(P^{(k-1)} = \left( \begin{array}{c | c} P^{(k-2)} & p^{(k-1)} \end{array} \right) \) and recall that \(r^{(k)} = r^{(k-1)} - \alpha_{k-1} A p^{(k-1)} \) so that
\begin{equation*} \begin{array}{l} p^{(k)} \\ ~~~=~~~~ \\ r^{(k)} - A P^{(k-2)} z_0 - \zeta_1 A p^{(k-1)} \\ ~~~=~~~~ \\ r^{(k)} - A P^{(k-2)} z_0 - \frac{\zeta_1}{\alpha_{k-1}} \left( r^{(k-1)} - r^{(k)} \right) \\ ~~~=~~~~ \\ \left( 1 + \frac{\zeta_1}{\alpha_{k-1}} \right) r^{(k)} - \underbrace{ \left( A P^{(k-2)} z_0 + \frac{\zeta_1}{\alpha_{k-1}} r^{(k-1)} \right) }_{s^{(k)}}. \end{array} \end{equation*}We notice that \(r^{(k)} \) and \(s^{(k)} \) are orthogonal, since \(s^{(k)} \in \Span( b, A b, \ldots, A^{k-1} b ) = \Span( p^{(0)}, \ldots, p^{(k-1)} ) \) and \(P^{(k-1)\,T} r^{(k)} = 0 \text{.}\) Hence
\begin{equation*} \| p^{(k)} \|_2^2 = \left( 1 + \frac{\zeta_1}{\alpha_{k-1}} \right)^2 \| r^{(k)} \|_2^2 + \| s^{(k)} \|_2^2 \end{equation*}and minimizing \(\| p^{(k)} \|_2 \) means minimizing the two separate parts. Since \(r^{(k)} \) is fixed, this means (for any given \(\zeta_1 \)) minimizing \(\| s^{(k)} \|_2^2 \text{.}\) An examination of \(s^{(k)} \) exposes that
\begin{equation*} s^{(k)} = \frac{\zeta_1}{\alpha_{k-1}} \left( r^{(k-1)} - A P^{(k-2)} w_0 \right), \end{equation*}where \(w_0 = -( \alpha_{k-1} / \zeta_1 ) z_0 \text{.}\) We recall that
\begin{equation*} p^{(k-1)} = r^{(k-1)} - A P^{(k-2)} z^{(k-1)}, \quad \text{where } z^{(k-1)} \text{ minimizes } \| r^{(k-1)} - A P^{(k-2)} z \|_2, \end{equation*}and hence we conclude that \(\| s^{(k)} \|_2 \) is minimized when \(w_0 = z^{(k-1)} \text{,}\) so that \(s^{(k)} \) is a vector in the direction of \(p^{(k-1)} \text{.}\) Since we are only interested in the direction of \(p^{(k)} \text{,}\) the scalar \(\frac{\zeta_1}{\alpha_{k-1}} \) is not relevant. The upshot of this lengthy analysis is that \(p^{(k)} = r^{(k)} + \gamma_k p^{(k-1)} \) for some constant \(\gamma_k \text{:}\)
only the last search direction is needed to compute the current one. This reduces the cost of computing the current search direction and means we don't have to store all previous ones.
Remark 8.3.4.5.
This is a very, very, very big deal...
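To make the remark concrete, here is a minimal sketch (in Python with NumPy) of a Conjugate Gradient iteration in which each new direction is formed from only the current residual and the last direction, \(p^{(k)} = r^{(k)} + \gamma_k p^{(k-1)} \text{.}\) The particular formulas used for \(\gamma_k \) and for the step length \(\alpha_k \) are the standard ones from the usual statement of the method; this unit has not derived them, so treat them as assumptions of the sketch. The checks confirm that the directions nevertheless come out A-conjugate to all earlier directions and that the residuals are mutually orthogonal, as Theorems 8.3.4.3 and 8.3.4.4 predict.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                  # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
r = b.copy()
p = r.copy()
Ps, Rs = [p.copy()], [r.copy()]
for _ in range(5):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)           # standard CG step length (assumed, not derived here)
    x = x + alpha * p
    r_new = r - alpha * Ap
    gamma = (r_new @ r_new) / (r @ r)    # standard formula for gamma_k (assumed, not derived here)
    p = r_new + gamma * p                # p^(k) = r^(k) + gamma_k p^(k-1): only the last direction is kept
    r = r_new
    Ps.append(p.copy())
    Rs.append(r.copy())

P = np.column_stack(Ps)
R = np.column_stack(Rs)

# Off-diagonal entries of P^T A P and R^T R should be (nearly) zero:
conj = P.T @ A @ P
orth = R.T @ R
print(np.max(np.abs(conj - np.diag(np.diag(conj)))))   # A-conjugate directions
print(np.max(np.abs(orth - np.diag(np.diag(orth)))))   # mutually orthogonal residuals
\end{verbatim}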