modified appropriately for points adjacent to the boundary. Let's label the value of \(\upsilon_{i} \) during the \(k\)th iteration with \(\upsilon_{i}^{(k)} \) and state the algorithm more explicitly as
again, modified appropriately for points adjacent to the boundary. The superscripts are there to emphasize the iteration during which a value is updated. In practice, only the values for iteration \(k \) and \(k+1 \) need to be stored. We can also capture the algorithm with a vector and matrix as
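This pointwise update, in which only the values for iterations \(k \) and \(k+1 \) are stored, can be sketched in code. The sketch below assumes the five-point stencil for Poisson's equation on a grid with zero boundary values and mesh width \(h \text{;}\) the function name <code>jacobi_sweep</code> and the right-hand-side array <code>f</code> are illustrative choices, not notation from the text.

```python
import numpy as np

def jacobi_sweep(u, f, h):
    # One Jacobi update u^(k) -> u^(k+1): each interior value is replaced
    # by the sum of its four neighbors' old values plus h^2 times the
    # right-hand side, divided by 4. Boundary entries of u stay fixed,
    # which handles the "points adjacent to the boundary" modification.
    u_new = u.copy()
    u_new[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                         + u[1:-1, :-2] + u[1:-1, 2:]
                         + h * h * f[1:-1, 1:-1]) / 4.0
    return u_new
```

Note that, as in the text, only two arrays are needed: the values for iteration \(k \) (read) and those for iteration \(k+1 \) (written).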
We write \(A \) as the difference of its diagonal, \(M = D \text{,}\) and the negative of its off-diagonal part, \(N = D - A \text{,}\) so that
\begin{equation*}
A = D - ( D - A ) = M - N.
\end{equation*}
In our example, \(M = 4 I \) and \(N = 4 I - A
\text{.}\)
We then notice that
\begin{equation*}
A x = y
\end{equation*}
can be rewritten as
\begin{equation*}
( M - N )x = y
\end{equation*}
or, equivalently,
\begin{equation*}
M x = N x + y.
\end{equation*}
If you think about it carefully, this captures (7.3.1) for our example. Finally,
\begin{equation*}
x = M^{-1} ( N x + y ).
\end{equation*}
If we now let \(x^{(k)} \) be the values of our vector \(x \) in the current step, then the values after all elements have been updated are given by the vector
\begin{equation*}
x^{(k+1)} = M^{-1} ( N x^{(k)} + y ).
\end{equation*}
All we now need is an initial guess for the solution, \(x^{(0)} \text{,}\) and we are ready to iteratively solve the linear system by computing \(x^{(1)} \text{,}\) \(x^{(2)} \text{,}\) etc., until we (approximately) reach a fixed point where \(x^{(k+1)} = M^{-1} ( N x^{(k)} + y ) \approx
x^{(k)} \text{.}\)
The described method, where \(M \) equals the diagonal of \(A \) and \(N = D - A \text{,}\) is known as the Jacobi iteration.
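In code, one step of the Jacobi iteration \(x^{(k+1)} = M^{-1} ( N x^{(k)} + y ) \) with \(M = D \) is a matrix-vector product followed by an elementwise division. A minimal sketch, using a small hypothetical tridiagonal matrix rather than the full matrix from our example:

```python
import numpy as np

# Hypothetical small test matrix; its diagonal is 4 I, as in the text.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
y = np.array([1.0, 2.0, 3.0])

d = np.diag(A)            # M = D, stored as a vector
N = np.diag(d) - A        # N = D - A

x = np.zeros_like(y)      # initial guess x^(0)
for k in range(100):
    x_next = (N @ x + y) / d                  # x^(k+1) = M^{-1} (N x^(k) + y)
    done = np.linalg.norm(x_next - x) < 1e-12  # (approximate) fixed point?
    x = x_next
    if done:
        break
```

At the fixed point, \(x = M^{-1} ( N x + y ) \) implies \(A x = y \text{,}\) so the final <code>x</code> approximately solves the linear system.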
Remark 7.3.1.1.
The important observation is that the computation involves a matrix-vector multiplication with a sparse matrix, \(N
= D - A
\text{,}\) and a solve with a diagonal matrix, \(M = D \text{.}\)
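This observation is easy to exploit in practice. The following sketch, assuming SciPy's sparse matrix routines and a hypothetical tridiagonal matrix, stores \(N = D - A \) in a sparse format so the product \(N x \) costs time proportional to the number of nonzeros, while the solve with \(M = D \) is just an elementwise division.

```python
import numpy as np
import scipy.sparse as sp

n = 50
# Hypothetical sparse test matrix tridiag(-1, 4, -1), stored in CSR format.
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
d = A.diagonal()                 # M = D, kept as a vector
N = sp.diags(d) - A              # N = D - A is sparse as well

y = np.ones(n)
x = np.zeros(n)
for _ in range(500):
    x = (N @ x + y) / d          # sparse matvec, then diagonal "solve"
```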