Subsection 8.3.3 Conjugate Gradient Method Basics
The idea behind the Conjugate Gradient Method is that in the current iteration we have an approximation, $x^{(k)}$, to the solution of $Ax = b$. By construction, since $x^{(0)} = 0$,
\begin{equation*}
x^{(k)} = \alpha_0 p^{(0)} + \cdots + \alpha_{k-1} p^{(k-1)}.
\end{equation*}
Also, the residual
\begin{equation*}
\begin{array}{rcl}
r^{(k)} & = & b - A x^{(k)} \\
& = & b - A ( \alpha_0 p^{(0)} + \cdots + \alpha_{k-1} p^{(k-1)} ) \\
& = & b - \alpha_0 A p^{(0)} - \cdots - \alpha_{k-1} A p^{(k-1)} \\
& = & r^{(k-1)} - \alpha_{k-1} A p^{(k-1)}.
\end{array}
\end{equation*}
If $r^{(k)} = 0$, then we know that $x^{(k)}$ solves $Ax = b$, and we are done.
Assume that $r^{(k)} \neq 0$. The question now is "How should we construct a new $p^{(k)}$ that is A-conjugate to the previous search directions and so that $p^{(k)\,T} r^{(k)} \neq 0$?" Here are some thoughts:
- We like the direction of steepest descent, $r^{(k)} = b - A x^{(k)}$, because it is the direction in which $f(x)$ decreases most quickly.
- Let us choose $p^{(k)}$ to be the vector that is A-conjugate to $p^{(0)}, \ldots, p^{(k-1)}$ and closest to the direction of steepest descent, $r^{(k)}$:
\begin{equation*}
\begin{array}{l}
{\bf Given:~} A, b \\
x^{(0)} := 0
\\
r^{(0)} := b \\
k := 0 \\
{\bf while~} r^{(k)} \neq 0 \\
~~~ {\bf if~} k = 0 \\
~~~ ~~~ p^{(k)} = r^{(0)} \\
~~~ {\bf else} \\
~~~ ~~~ p^{(k)} := \arg\min_{p \perp \Span( Ap^{(0)}, \ldots, Ap^{(k-1)}
)} \| r^{(k)} - p \|_2
\\
~~~ {\bf endif} \\
~~~ \alpha_k := \frac{p^{(k)\,T} r^{(k)}}{p^{(k)\,T}
A p^{(k)}}
\\
~~~ x^{(k+1)} := x^{(k)} + \alpha_k p^{(k)}
\\
~~~ r^{(k+1)} := r^{(k)} - \alpha_k A p^{(k)} \\
~~~ k := k + 1 \\
{\bf endwhile}
\end{array}
\end{equation*}
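To make the algorithm concrete, here is a minimal sketch in Python with NumPy that follows the boxed algorithm literally, assuming $A$ is symmetric positive definite. The key observation it uses: the vector in the orthogonal complement of $\Span( Ap^{(0)}, \ldots, Ap^{(k-1)} )$ closest to $r^{(k)}$ is the orthogonal projection of $r^{(k)}$ onto that complement, computed here by subtracting components along an orthonormal basis of the $A p^{(j)}$ vectors. The function name \verb|cg_basic| and the tolerance-based stopping test (which stands in for the exact test $r^{(k)} \neq 0$, unattainable in floating point arithmetic) are our own illustrative choices, not part of the text.

\begin{verbatim}
import numpy as np

def cg_basic(A, b, tol=1e-10):
    """Basic Conjugate Gradient, following the boxed algorithm literally.

    Assumes A is symmetric positive definite.  p^(k) is computed as the
    projection of r^(k) onto the orthogonal complement of
    span(A p^(0), ..., A p^(k-1)).
    """
    n = b.shape[0]
    x = np.zeros(n)           # x^(0) := 0
    r = b.astype(float)       # r^(0) := b
    Q = []                    # orthonormal basis for span(A p^(0), ..., A p^(k-1))
    for k in range(n):        # in exact arithmetic, at most n iterations
        if np.linalg.norm(r) <= tol:   # stands in for "while r^(k) != 0"
            break
        # p^(k): component of r^(k) orthogonal to all A p^(j), j < k.
        # For k = 0 the basis is empty, so p^(0) = r^(0), matching the "if".
        p = r.copy()
        for q in Q:
            p -= (q @ p) * q
        Ap = A @ p
        alpha = (p @ r) / (p @ Ap)     # alpha_k := p^(k)^T r^(k) / p^(k)^T A p^(k)
        x += alpha * p                 # x^(k+1) := x^(k) + alpha_k p^(k)
        r -= alpha * Ap                # r^(k+1) := r^(k) - alpha_k A p^(k)
        # Extend the orthonormal basis with the new component of A p^(k).
        for q in Q:
            Ap -= (q @ Ap) * q
        nrm = np.linalg.norm(Ap)
        if nrm > 0.0:
            Q.append(Ap / nrm)
    return x

# Example: a small random symmetric positive definite system.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x = cg_basic(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm should be tiny
\end{verbatim}

As written, this sketch stores all previous directions and performs a full projection each iteration, which is expensive in both memory and work. The remarkable property of the Conjugate Gradient Method is that this projection collapses to a short recurrence involving only $p^{(k-1)}$, which is what makes the method practical.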