Subsection 7.2.1 Banded matrices
¶It is tempting to simply use a dense linear solver to compute the solution to \(A x = b \) via, for example, LU or Cholesky factorization, even when \(A \) is sparse. This would require \(O( n^3 ) \) operations, where \(n \) equals the size of matrix \(A \text{.}\) What we see in this unit is that we can take advantage of a "banded" structure in the matrix to greatly reduce the computational cost.
Homework 7.2.1.1.
The 1D equivalent of the example from Subsection 7.1.1 is given by the tridiagonal linear system
Prove that this linear system is nonsingular.
Consider \(A x = 0 \text{.}\) We need to prove that \(x = 0 \text{.}\) If you instead consider the equivalent problem
that introduces two extra variables \(\chi_{-1} = 0 \) and \(\chi_n = 0 \text{,}\) the problem for all \(\chi_i \text{,}\) \(0 \leq i \lt n \text{,}\) becomes
\begin{equation*} - \chi_{i-1} + 2 \chi_i - \chi_{i+1} = 0 \end{equation*}
or, equivalently,
\begin{equation*} \chi_{i+1} = 2 \chi_i - \chi_{i-1} . \end{equation*}
Reason through what would happen if any \(\chi_i \) is not equal to zero.
Building on the hint: Let's say that \(\chi_i \neq 0 \) while \(\chi_{-1}, \ldots , \chi_{i-1} \) all equal zero. Then
\begin{equation*} - \chi_{i-1} + 2 \chi_i - \chi_{i+1} = 2 \chi_i - \chi_{i+1} = 0 \end{equation*}
and hence
\begin{equation*} \chi_{i+1} = 2 \chi_i . \end{equation*}
Next,
\begin{equation*} - \chi_{i} + 2 \chi_{i+1} - \chi_{i+2} = - \chi_i + 4 \chi_i - \chi_{i+2} = 0 \end{equation*}
and hence
\begin{equation*} \chi_{i+2} = 3 \chi_i . \end{equation*}
Continuing this argument, the recurrence yields \(\chi_{i+k} = ( k+1 ) \chi_i \) and, in particular, \(\chi_n = ( n-i + 1) \chi_i \neq 0 \text{,}\) which contradicts the fact that \(\chi_n = 0 \text{.}\) We conclude that \(x = 0 \) and hence the linear system is nonsingular.
This course covers topics in a "circular" way, where sometimes we introduce and use results that we won't formally cover until later in the course. Here is one such situation. In a later week you will prove these relevant results involving eigenvalues:
A symmetric matrix is symmetric positive definite (SPD) if and only if its eigenvalues are positive.
The Gershgorin Disk Theorem tells us that the matrix in (7.2.1) has nonnegative eigenvalues.
A matrix is singular if and only if it has zero as an eigenvalue.
These insights, together with Homework 7.2.1.1, tell us that the matrix in (7.2.1) is SPD.
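To make these claims concrete, here is a quick numerical sanity check in Python. It assumes, as the recurrence in the hint suggests, that the matrix in (7.2.1) is the familiar tridiagonal matrix with \(2 \) on the diagonal and \(-1 \) on the first sub- and superdiagonal (an assumption on my part, since the equation itself is not reproduced here): the Gershgorin lower bounds are nonnegative and the eigenvalues are strictly positive, so the matrix is SPD.
\begin{verbatim}
import numpy as np

n = 8
# Assumed form of the matrix in (7.2.1): 2 on the diagonal, -1 on the
# first sub- and superdiagonal (the 1D second-difference matrix).
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Gershgorin: every eigenvalue lies in some interval [a_ii - r_i, a_ii + r_i],
# where r_i is the sum of the magnitudes of the off-diagonal entries in row i.
r = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
print(np.diag(A) - r)               # lower bounds of the disks: all >= 0

# The eigenvalues themselves are strictly positive, so A is SPD.
print(np.linalg.eigvalsh(A).min())  # > 0 (about 0.12 for n = 8)
\end{verbatim}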
Homework 7.2.1.2.
Compute the Cholesky factor of
Homework 7.2.1.3.
Let \(A \in \mathbb R^{n \times n} \) be tridiagonal and SPD so that
Propose a Cholesky factorization algorithm that exploits the structure of this matrix.
What is the cost? (Count square roots, divides, multiplies, and subtractions.)
What would have been the (approximate) cost if we had not taken advantage of the tridiagonal structure?
If you play with a few smaller examples, you can conjecture that the Cholesky factor of (7.2.2) is a bidiagonal matrix (the main diagonal plus the first subdiagonal). Thus, \(A = L L^T \) translates to
\begin{equation*} \begin{array}{l} \left( \begin{array}{c c c c c c c} \alpha_{0,0} \amp \alpha_{1,0} \amp \amp \amp \\ \alpha_{1,0} \amp \alpha_{1,1} \amp \alpha_{2,1} \amp \amp \\ \amp \ddots \amp \ddots \amp \ddots \amp \\ \amp \amp \alpha_{n-2,n-3} \amp \alpha_{n-2,n-2} \amp \alpha_{n-1,n-2} \\ \amp \amp \amp \alpha_{n-1,n-2} \amp \alpha_{n-1,n-1} \end{array} \right) \\ = \left( \begin{array}{c c c c c c c} \lambda_{0,0} \amp \amp \amp \amp \\ \lambda_{1,0} \amp \lambda_{1,1} \amp \amp \amp \\ \amp \ddots \amp \ddots \amp \amp \\ \amp \amp \lambda_{n-2,n-3} \amp \lambda_{n-2,n-2} \amp \\ \amp \amp \amp \lambda_{n-1,n-2} \amp \lambda_{n-1,n-1} \end{array} \right) \left( \begin{array}{c c c c c c c} \lambda_{0,0} \amp \lambda_{1,0} \amp \amp \amp \\ \amp \lambda_{1,1} \amp \lambda_{2,1} \amp \amp \\ \amp \amp \ddots \amp \ddots \amp \\ \amp \amp \amp \lambda_{n-2,n-2} \amp \lambda_{n-1,n-2} \\ \amp \amp \amp \amp \lambda_{n-1,n-1} \end{array} \right) \\ = \left( \begin{array}{c c c c c c c} \lambda_{0,0} \lambda_{0,0} \amp \lambda_{0,0} \lambda_{1,0} \amp \amp \amp \\ \lambda_{1,0} \lambda_{0,0} \amp \lambda_{1,0} \lambda_{1,0} + \lambda_{1,1} \lambda_{1,1} \amp \lambda_{1,1} \lambda_{2,1} \amp \amp \\ \amp \lambda_{2,1} \lambda_{1,1} \amp \ddots \amp \ddots \amp \\ \amp \amp \ddots \amp \star \star \amp \lambda_{n-2,n-2} \lambda_{n-1,n-2} \\ \amp \amp \amp \lambda_{n-1,n-2} \lambda_{n-2,n-2} \amp \begin{array}[t]{l} \lambda_{n-1,n-2} \lambda_{n-1,n-2} \\ ~~~~~~~+ \lambda_{n-1,n-1} \lambda_{n-1,n-1} \end{array} \end{array} \right) , \end{array} \end{equation*} where \(\star\star = \lambda_{n-2,n-3} \lambda_{n-2,n-3} + \lambda_{n-2,n-2} \lambda_{n-2,n-2} \text{.}\) With this insight, the algorithm that overwrites \(A \) with its Cholesky factor is given by
\begin{equation*} \begin{array}{l} {\bf for~} i = 0, \ldots, n-2 \\ ~~~ \alpha_{i,i} := \sqrt{ \alpha_{i,i} } \\ ~~~ \alpha_{i+1,i} := \alpha_{i+1,i} / \alpha_{i,i} \\ ~~~ \alpha_{i+1,i+1} := \alpha_{i+1,i+1} - \alpha_{i+1,i} \alpha_{i+1,i} \\ {\bf endfor} \\ \alpha_{n-1,n-1} := \sqrt{ \alpha_{n-1,n-1} } \\ \end{array} \end{equation*} A cost analysis shows that this requires \(n \) square roots, \(n-1 \) divides, \(n-1 \) multiplies, and \(n-1 \) subtracts.
The cost, had we not taken advantage of the special structure, would have been approximately \(\frac{1}{3} n^3 \) floating point operations.
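As a concrete illustration, here is a minimal NumPy sketch of the algorithm above (the function and variable names are mine, not from the text). It stores only the diagonal and the subdiagonal of \(A \) and overwrites them with the corresponding entries of the Cholesky factor \(L \text{.}\)
\begin{verbatim}
import numpy as np

def tridiag_cholesky(alpha_d, alpha_s):
    """Overwrite alpha_d (the diagonal, length n) and alpha_s (the
    subdiagonal, length n-1) of a tridiagonal SPD matrix with the
    diagonal and subdiagonal of its lower bidiagonal Cholesky factor L."""
    n = alpha_d.shape[0]
    for i in range(n - 1):
        alpha_d[i] = np.sqrt(alpha_d[i])            # alpha_{i,i}     := sqrt( alpha_{i,i} )
        alpha_s[i] = alpha_s[i] / alpha_d[i]        # alpha_{i+1,i}   := alpha_{i+1,i} / alpha_{i,i}
        alpha_d[i + 1] -= alpha_s[i] * alpha_s[i]   # alpha_{i+1,i+1} -= alpha_{i+1,i}^2
    alpha_d[n - 1] = np.sqrt(alpha_d[n - 1])
    return alpha_d, alpha_s
\end{verbatim}
Since only two vectors of (roughly) length \(n \) are touched, both the storage and the operation count are linear in \(n \text{,}\) matching the analysis above.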
Homework 7.2.1.4.
Propose an algorithm for overwriting \(y \) with the solution to \(A x = y \) for the SPD matrix in Homework 7.2.1.3.
Use the algorithm from Homework 7.2.1.3 to overwrite \(A \) with its Cholesky factor.
Since \(A = L L^T \text{,}\) we need to solve \(L z = y \) and then \(L^T x = z \text{.}\)
Overwriting \(y \) with the solution of \(L z = y \) (forward substitution) is accomplished by the following algorithm (here \(L \) has overwritten \(A \)):
\begin{equation*} \begin{array}{l} {\bf for~} i = 0, \ldots, n-2 \\ ~~~ \psi_i := \psi_i / \alpha_{i,i} \\ ~~~ \psi_{i+1} := \psi_{i+1} - \alpha_{i+1,i} \psi_i \\ {\bf endfor} \\ \psi_{n-1} := \psi_{n-1} / \alpha_{n-1,n-1} \end{array} \end{equation*}
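In the same spirit, a sketch of this forward substitution, operating on the two vectors produced by the tridiag_cholesky sketch above (again, the names are mine):
\begin{verbatim}
def forward_substitution(alpha_d, alpha_s, psi):
    """Overwrite psi with the solution of L z = psi, where the lower
    bidiagonal factor L is stored in alpha_d (diagonal) and
    alpha_s (subdiagonal)."""
    n = alpha_d.shape[0]
    for i in range(n - 1):
        psi[i] = psi[i] / alpha_d[i]          # psi_i     := psi_i / alpha_{i,i}
        psi[i + 1] -= alpha_s[i] * psi[i]     # psi_{i+1} := psi_{i+1} - alpha_{i+1,i} psi_i
    psi[n - 1] = psi[n - 1] / alpha_d[n - 1]
    return psi
\end{verbatim}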
Overwriting \(y \) with the solution of \(L^T x = z \) (where \(z \) has overwritten \(y \)) via back substitution is accomplished by the following algorithm (here \(L \) has overwritten \(A \)):
\begin{equation*} \begin{array}{l} {\bf for~} i = n-1, \ldots, 1 \\ ~~~ \psi_i := \psi_i / \alpha_{i,i} \\ ~~~ \psi_{i-1} := \psi_{i-1} - \alpha_{i,i-1} \psi_i \\ {\bf endfor} \\ \psi_{0} := \psi_{0} / \alpha_{0,0} \end{array} \end{equation*}
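And a matching sketch of the back substitution, followed by a small check that the three sketches above together solve \(A x = y \text{;}\) the test matrix is the assumed \(-1, 2, -1 \) matrix used earlier.
\begin{verbatim}
import numpy as np

def back_substitution(alpha_d, alpha_s, psi):
    """Overwrite psi with the solution of L^T x = psi for the same
    bidiagonal factor (alpha_d: diagonal of L, alpha_s: subdiagonal of L)."""
    n = alpha_d.shape[0]
    for i in range(n - 1, 0, -1):
        psi[i] = psi[i] / alpha_d[i]            # psi_i     := psi_i / alpha_{i,i}
        psi[i - 1] -= alpha_s[i - 1] * psi[i]   # psi_{i-1} := psi_{i-1} - alpha_{i,i-1} psi_i
    psi[0] = psi[0] / alpha_d[0]
    return psi

# Check: factor, then forward and back substitute, in O(n) operations total.
n = 6
d, s = 2 * np.ones(n), -np.ones(n - 1)
A = np.diag(d) + np.diag(s, -1) + np.diag(s, 1)   # dense copy, for the check only
y = np.random.rand(n)
tridiag_cholesky(d, s)                            # from the sketch above
x = back_substitution(d, s, forward_substitution(d, s, y.copy()))
print(np.allclose(A @ x, y))                      # True
\end{verbatim}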
The last exercises illustrate how special structure (in terms of patterns of zeroes and nonzeroes) can often be exploited to reduce the cost of factoring a matrix and solving a linear system.
The bandwidth of a matrix is defined as the smallest integer \(b \) such that all elements on the \(j \)th superdiagonal and subdiagonal of the matrix equal zero if \(j \geq b \text{.}\)
A diagonal matrix has bandwidth \(1 \text{.}\)
A tridiagonal matrix has bandwidth \(2 \text{.}\)
And so forth.
Let's see how to take advantage of the zeroes in a matrix with bandwidth \(b \text{,}\) focusing on SPD matrices.
Definition 7.2.1.1.
The half-band width of a symmetric matrix equals the number of subdiagonals beyond which the matrix contains only zeroes. For example, a diagonal matrix has a half-band width of zero and a tridiagonal matrix has a half-band width of one.
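For concreteness, here is a small helper (the names are mine) that measures these quantities for a dense symmetric matrix, using the conventions above: a diagonal matrix has half-band width \(0 \) and bandwidth \(1 \text{,}\) a tridiagonal matrix has half-band width \(1 \) and bandwidth \(2 \text{.}\)
\begin{verbatim}
import numpy as np

def half_band_width(A, tol=0.0):
    """Half-band width of a symmetric matrix A: the index of the last
    subdiagonal that contains a nonzero entry (0 for a diagonal matrix)."""
    n = A.shape[0]
    for k in range(n - 1, 0, -1):
        if np.any(np.abs(np.diag(A, -k)) > tol):
            return k
    return 0

def bandwidth(A, tol=0.0):
    """Bandwidth of a symmetric matrix in the sense used above: the smallest
    b such that the jth super- and subdiagonals are zero for all j >= b."""
    return half_band_width(A, tol) + 1

T = np.diag(2 * np.ones(5)) + np.diag(-np.ones(4), -1) + np.diag(-np.ones(4), 1)
print(half_band_width(T), bandwidth(T))   # 1 2
\end{verbatim}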
Homework 7.2.1.5.
Assume the SPD matrix \(A \in \mathbb R^{m \times m}\) has a bandwidth of \(b \text{.}\) Propose a modification of the right-looking Cholesky factorization from Figure 5.4.3.1 that takes advantage of the zeroes outside the band.
See the video below.
Ponder This 7.2.1.6.
Propose a modification of the FLAME notation that allows one to elegantly express the algorithm you proposed for Homework 7.2.1.5.
Ponder This 7.2.1.7.
Another way of looking at an SPD matrix \(A \in \mathbb R^{n \times n} \) with bandwidth \(b \) is to block it
where \(A_{i,j} \in \mathbb R^{b \times b} \) and, for simplicity, we assume that \(n \) is a multiple of \(b \text{.}\) Propose an algorithm for computing its Cholesky factorization that exploits this block structure. What special structure do the matrices \(A_{i+1,i} \) have? Can you take advantage of this structure?
Analyze the cost of your proposed algorithm.