Subsection 8.3.3 Conjugate Gradient Method Basics

The idea behind the Conjugate Gradient Method is that in the current iteration we have an approximation, \(x^{(k)}\), to the solution of \(Ax = b\). By construction, since \(x^{(0)} = 0\),

\begin{equation*}
x^{(k)} = \alpha_0 p^{(0)} + \cdots + \alpha_{k-1} p^{(k-1)}.
\end{equation*}

Also, the residual

\begin{equation*}
\begin{array}{rcl}
r^{(k)} & = & b - A x^{(k)} \\
& = & b - A ( \alpha_0 p^{(0)} + \cdots + \alpha_{k-1} p^{(k-1)} ) \\
& = & b - \alpha_0 A p^{(0)} - \cdots - \alpha_{k-1} A p^{(k-1)} \\
& = & r^{(k-1)} - \alpha_{k-1} A p^{(k-1)},
\end{array}
\end{equation*}

where the last step recognizes that \(r^{(k-1)} = b - \alpha_0 A p^{(0)} - \cdots - \alpha_{k-2} A p^{(k-2)}\).

If \(r^{(k)} = 0\), then we know that \(x^{(k)}\) solves \(Ax = b\), and we are done.
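As a sanity check on the recurrence, here is a minimal numerical sketch in Python with NumPy (the matrix, search directions, and step sizes below are made up purely for illustration) verifying that \(b - A x^{(2)}\) matches \(r^{(1)} - \alpha_1 A p^{(1)}\):

```python
import numpy as np

# Hypothetical SPD system (made-up numbers, for illustration only).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Made-up search directions and step sizes for k = 2.
p = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
alpha = [0.3, 0.5]

# x^(2) = alpha_0 p^(0) + alpha_1 p^(1)   (since x^(0) = 0).
x2 = alpha[0] * p[0] + alpha[1] * p[1]

# Residual computed directly ...
r2_direct = b - A @ x2

# ... and via the recurrence r^(k) = r^(k-1) - alpha_{k-1} A p^(k-1).
r1 = b - A @ (alpha[0] * p[0])
r2_recur = r1 - alpha[1] * (A @ p[1])

assert np.allclose(r2_direct, r2_recur)
```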

Assume that \(r^{(k)} \neq 0\). The question now is "How should we construct a new \(p^{(k)}\) that is \(A\)-conjugate to the previous search directions and so that \(p^{(k)\,T} r^{(k)} \neq 0\)?" Here are some thoughts:

  • We like the direction of steepest descent, \(r^{(k)} = b - A x^{(k)}\), because it is the direction in which \(f(x)\) decreases most quickly.

  • Let us choose \(p^{(k)}\) to be the vector that is \(A\)-conjugate to \(p^{(0)}, \ldots, p^{(k-1)}\) and closest to the direction of steepest descent, \(r^{(k)}\):

    \begin{equation*}
    p^{(k)} := \arg\min_{p \perp \operatorname{Span}( A p^{(0)}, \ldots, A p^{(k-1)} )} \| r^{(k)} - p \|_2 .
    \end{equation*}

    Here the constraint \(p \perp \operatorname{Span}( A p^{(0)}, \ldots, A p^{(k-1)} )\) captures \(A\)-conjugacy, since \(p^T A p^{(j)} = 0\) exactly when \(p\) is orthogonal to \(A p^{(j)}\). (A computational sketch of this minimization follows the list.)

This yields the algorithm in Figure 8.3.3.1.

\begin{equation*}
\begin{array}{l}
{\bf Given:~} A, b \\
x^{(0)} := 0 \\
r^{(0)} := b \\
k := 0 \\
{\bf while~} r^{(k)} \neq 0 \\
~~~ {\bf if~} k = 0 \\
~~~ ~~~ p^{(k)} := r^{(0)} \\
~~~ {\bf else} \\
~~~ ~~~ p^{(k)} := \arg\min_{p \perp \operatorname{Span}( Ap^{(0)}, \ldots, Ap^{(k-1)} )} \| r^{(k)} - p \|_2 \\
~~~ {\bf endif} \\
~~~ \alpha_k := \frac{p^{(k)\,T} r^{(k)}}{p^{(k)\,T} A p^{(k)}} \\
~~~ x^{(k+1)} := x^{(k)} + \alpha_k p^{(k)} \\
~~~ r^{(k+1)} := r^{(k)} - \alpha_k A p^{(k)} \\
~~~ k := k + 1 \\
{\bf endwhile}
\end{array}
\end{equation*}
Figure 8.3.3.1. Basic Conjugate Gradient Method.
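To make Figure 8.3.3.1 concrete, here is a minimal runnable sketch in Python with NumPy. The function name basic_cg, the tolerance that stands in for the exact test \(r^{(k)} \neq 0\), and the QR-based computation of the minimizer are assumptions made for this sketch; \(A\) is assumed symmetric positive definite.

```python
import numpy as np

def basic_cg(A, b, tol=1e-10, max_iter=None):
    """Sketch of the Basic Conjugate Gradient Method of Figure 8.3.3.1.

    A is assumed symmetric positive definite. The exact test r != 0 is
    replaced by a tolerance, and p^(k) is computed by explicitly
    projecting r^(k) away from Span(A p^(0), ..., A p^(k-1))."""
    n = b.shape[0]
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)               # x^(0) := 0
    r = b.copy()                  # r^(0) := b
    Ap_prev = np.zeros((n, 0))    # columns A p^(0), ..., A p^(k-1)
    for k in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        if k == 0:
            p = r.copy()                   # p^(0) := r^(0)
        else:
            Q, _ = np.linalg.qr(Ap_prev)   # orthonormal basis for Span(A p^(j))
            p = r - Q @ (Q.T @ r)          # p^(k) closest to r^(k), A-conjugate
        Ap = A @ p
        alpha = (p @ r) / (p @ Ap)         # alpha_k
        x = x + alpha * p                  # x^(k+1)
        r = r - alpha * Ap                 # r^(k+1)
        Ap_prev = np.hstack([Ap_prev, Ap[:, None]])
    return x

if __name__ == "__main__":
    # Tiny made-up SPD example; CG converges in at most n steps.
    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = basic_cg(A, b)
    print(np.allclose(A @ x, b))   # expect: True
```

Storing every \(A p^{(j)}\) and re-projecting in each iteration is the literal reading of the figure; refinements of the method replace this explicit minimization with a short recurrence.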